| Supplier | Access (0–5) | Data (0–5) | Maturity (0–5) | Final Score (0–100) | Band |
|---|---|---|---|---|---|
| NorthBridge Cloud Ops | 4 | 4 | 3 | 72 | High |
| Atlas Payroll Services | 2 | 5 | 4 | 58 | Medium |
| BlueFin Office Supplies | 0 | 1 | 3 | 18 | Low |
Two normalized scores are then computed:
- Driver = Σ(wᵢ × nᵢ) over risk drivers
- Mitigation = Σ(wⱼ × nⱼ) over mitigation factors
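To make the mechanics concrete, here is a minimal Python sketch, assuming ratings are normalized by dividing by 5, weights are renormalized to sum to 1, and the final score blends exposure at 65% against mitigation at 35% on a 0–100 scale; the calculator's exact internals may differ.

```python
# Minimal sketch of the weighted model. The 0-5 ratings and the
# 65% exposure / 35% mitigation split follow the article; the
# normalization and the exact blend are assumptions for illustration.

def weighted_sum(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Sum of weight x (rating / 5), with weights renormalized to 1."""
    total_w = sum(weights.values())
    return sum(weights[k] * (ratings[k] / 5) for k in ratings) / total_w

# Hypothetical inputs on the 0-5 rubric.
drivers = {"access": 4, "data_sensitivity": 4, "incidents": 3}
driver_weights = {"access": 0.4, "data_sensitivity": 0.4, "incidents": 0.2}
mitigations = {"maturity": 3, "monitoring": 2, "authentication": 4}
mitigation_weights = {"maturity": 0.4, "monitoring": 0.3, "authentication": 0.3}

driver = weighted_sum(drivers, driver_weights)              # 0..1 exposure
mitigation = weighted_sum(mitigations, mitigation_weights)  # 0..1 protection

# Assumed blend: stronger mitigation offsets up to 35% of the score.
final_score = max(0.0, min(100.0, 100 * (0.65 * driver + 0.35 * (1 - mitigation))))
print(f"driver={driver:.2f} mitigation={mitigation:.2f} final={final_score:.1f}")
```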
- Define a consistent scoring rubric (0–5 or 1–5) for your organization.
- Rate supplier exposure drivers (access, sensitivity, incidents, gaps, and surface area).
- Rate mitigation factors (maturity, controls, visibility, authentication, readiness).
- Optionally adjust weights to reflect business priorities, then calculate.
- Use the band and recommendations to set onboarding gates and timelines.
- Export CSV/PDF to attach results to vendor review records (a minimal export sketch follows).
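For the export step, the snippet below writes scored results to CSV with Python's standard csv module; the column names mirror the sample table but are illustrative, not the tool's exact export format.

```python
import csv

# Hypothetical scored results, mirroring the sample table above.
results = [
    {"supplier": "NorthBridge Cloud Ops", "final_score": 72, "band": "High"},
    {"supplier": "Atlas Payroll Services", "final_score": 58, "band": "Medium"},
    {"supplier": "BlueFin Office Supplies", "final_score": 18, "band": "Low"},
]

with open("supplier_risk_scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["supplier", "final_score", "band"])
    writer.writeheader()
    writer.writerows(results)
```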
Vendor access scope shapes real attack paths
Access ratings map directly to privilege and blast radius. A score of 0 means the supplier touches none of your endpoints, while 5 indicates privileged access paths. When access rises from 2 to 4, likelihood typically increases faster than impact because compromise becomes easier and lateral movement becomes practical. Review remote tooling, API keys, and shared admin roles before finalizing this value.
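A written rubric helps keep access ratings consistent across reviewers. The level descriptions below are illustrative, assembled from the privilege and integration-depth guidance in this article rather than official definitions:

```python
# Illustrative access-level rubric (0-5). Descriptions are assumptions
# keyed to the privilege/blast-radius framing above, not official definitions.
ACCESS_RUBRIC = {
    0: "No access to systems or endpoints",
    1: "Public or read-only portal access",
    2: "Scoped API read access; no internal network path",
    3: "API write access or remote access to a single system",
    4: "Broad remote tooling or shared service accounts",
    5: "Privileged paths: admin roles, agents, directory sync",
}

for level, meaning in ACCESS_RUBRIC.items():
    print(f"{level}: {meaning}")
```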
Data sensitivity and criticality drive impact scoring
Impact is derived from criticality and sensitivity using a blended scale. Values near 5 indicate regulated, confidential, or mission‑critical exposure. This approach keeps impact stable across suppliers, so teams can compare payroll, hosting, and support vendors using the same decision yardstick. If either factor is 5, treat encryption, retention limits, and deletion evidence as mandatory.
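The article does not spell out the blend, but one plausible reading is an average of criticality and sensitivity plus an escalation flag when either factor hits 5; the averaging itself is an assumption in the sketch below.

```python
# Hypothetical blend of criticality and sensitivity into one 0-5 impact
# value. The simple average is an assumption; the rule that a 5 on either
# factor triggers mandatory controls follows the article.
def blended_impact(criticality: int, sensitivity: int) -> tuple[float, bool]:
    impact = (criticality + sensitivity) / 2
    mandatory_controls = criticality == 5 or sensitivity == 5
    return impact, mandatory_controls

impact, mandatory = blended_impact(criticality=4, sensitivity=5)
print(impact, mandatory)  # 4.5 True -> encryption, retention limits, deletion evidence
```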
History, compliance, and geography influence probability
Incident history and compliance gaps raise the driver score even when other factors look safe. Repeated incidents, audits that fall short of required frameworks, or operations in unstable regions justify higher probability assumptions. For many programs, a one-step increase in these inputs can add 3–6 points to the final score, especially when access is elevated. Track this change over time to show measurable improvement after remediation.
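To see where that range can come from, assume ratings are normalized by dividing by 5 and exposure carries 65% of the final score: a one-step change in a driver with weight w then shifts the score by about 100 × 0.65 × (w/5) = 13w points, so driver weights between roughly 0.25 and 0.45 produce the quoted 3–6 point swing. The normalization and blend here are assumptions for illustration.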
Mitigation factors reduce residual risk measurably
Security maturity, contract controls, monitoring, authentication, and response readiness are treated as risk reducers. Strong MFA and centralized logging often provide the largest benefit. Improving mitigations from 2 to 4 can lower the weighted score by roughly 10–20 points, depending on weights. The default model emphasizes exposure (65%) while still rewarding strong mitigations (35%).
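The arithmetic mirrors the driver example above: moving every mitigation factor from 2 to 4 shifts the normalized mitigation composite by 0.4, and at the 35% weighting that is 100 × 0.35 × 0.4 = 14 points, squarely inside the quoted 10–20 range; partial improvements or different weights account for the rest of the spread. The normalization is again an assumption for illustration.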
Use bands to set gates, timelines, and evidence needs
Scores are grouped as Low, Medium, High, and Critical to support operational actions. Medium suppliers may onboard with remediation dates, while High suppliers require tighter segmentation and continuous monitoring. Critical suppliers should not receive new access until validated fixes are delivered. Use exports to attach results to procurement records and to compare vendors with the same scoring rubric. When the final score exceeds 60, require a dated corrective plan, named owners, and quarterly reviews. For scores above 80, consider alternate suppliers, reduce data sharing, and require independent validation such as third‑party testing or audit reports before approving renewals or expanded access.
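A banding helper might look like the sketch below. The cutoffs are assumptions chosen to match the sample table (18 = Low, 58 = Medium, 72 = High) and the 60/80 action thresholds above, not the calculator's published boundaries.

```python
# Hypothetical band cutoffs, consistent with the sample table and the
# 60/80 action thresholds in this section; exact boundaries may differ.
def band(score: float) -> str:
    if score >= 80:
        return "Critical"  # alternates, reduced data sharing, independent validation
    if score >= 60:
        return "High"      # dated corrective plan, named owners, quarterly reviews
    if score >= 40:
        return "Medium"    # onboard with remediation dates
    return "Low"

for s in (18, 58, 72, 85):
    print(s, band(s))
```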
1) What does the final score represent?
It is a 0–100 blended score combining the weighted model and the likelihood×impact matrix. Use it to rank suppliers consistently and decide whether onboarding needs extra controls, remediation deadlines, or executive approval.
2) Why are mitigation factors inverted in the model?
Mitigations are protective. Higher maturity, monitoring, stronger authentication, and tested response reduce residual risk, so the calculator subtracts mitigation from exposure rather than adding it.
3) Can I change the weighting to match our program?
Yes. Expand the Advanced weights section and adjust the individual driver and mitigation weights. Each group is normalized automatically, so you can emphasize access, data sensitivity, or contract controls without manual recalculation.
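Normalization here presumably means each weight is divided by its group's sum (wᵢ′ = wᵢ / Σⱼ wⱼ), so entering 4, 4, and 2 for three drivers is equivalent to entering 0.4, 0.4, and 0.2.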
4) How should we score access level for SaaS vendors?
Base it on integration depth and privileges: read-only portals score low, API write access scores higher, and any admin, agent, or directory sync privileges should be near the top of the scale.
5) When should we require evidence like audits or pen tests?
Use bands as triggers. Medium often needs basic evidence. High should include dated remediation plans and stronger oversight. Critical should require independent validation and may justify delaying access until fixes are confirmed.
6) Does this replace third‑party risk assessments?
No. It supports prioritization and documentation. Combine the score with questionnaires, security evidence, legal review, and technical testing for suppliers that handle sensitive data or receive privileged access.