Example Data Table
Illustrative scoring across three vendors using default weights. Use it to validate your expectations before running assessments.
| Vendor | Average Rating (1–5) | Estimated Score (0–100) | Risk Category |
|---|---|---|---|
| SecureMail Co. | 2.1 | 27.50 | Low |
| CloudOps Partner | 3.2 | 55.00 | Medium |
| Legacy Integrator | 4.1 | 77.50 | High |
Formula Used
Each factor is rated from 1 to 5.
Ratings are mapped to risk points from 0 to 100: points = (rating − 1) ÷ 4 × 100, so a rating of 1 yields 0 points and a rating of 5 yields 100. The overall score is the weighted average of factor points using normalized weights.
Scores range from 0 to 100, where higher means higher vendor risk. Categories: Low < 34, Medium 34 to below 67, High ≥ 67.
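A minimal TypeScript sketch of that calculation, assuming the linear rating-to-points mapping implied by the example table (function names and the equal-weight demo are illustrative, not the calculator's internals):

```typescript
// Map a 1–5 rating onto 0–100 risk points: 1 → 0, 3 → 50, 5 → 100.
function ratingToPoints(rating: number): number {
  return ((rating - 1) / 4) * 100;
}

// Weighted average of factor points; normalization makes any weight total work.
function vendorScore(factors: { rating: number; weight: number }[]): number {
  const totalWeight = factors.reduce((sum, f) => sum + f.weight, 0);
  return factors.reduce(
    (sum, f) => sum + (f.weight / totalWeight) * ratingToPoints(f.rating),
    0
  );
}

// Thresholds from above: Low < 34, Medium 34 to below 67, High ≥ 67.
function category(score: number): "Low" | "Medium" | "High" {
  return score < 34 ? "Low" : score < 67 ? "Medium" : "High";
}

// An average rating of 3.2 with equal weights reproduces the 55.00 table row.
const score = vendorScore([
  { rating: 3.2, weight: 50 },
  { rating: 3.2, weight: 50 },
]);
console.log(score.toFixed(2), category(score)); // 55.00 Medium
```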
How to Use This Calculator
- Enter the vendor name and assessment date.
- Select a rating for each risk factor using evidence.
- Adjust weights to match your risk appetite and scope.
- Click Calculate to generate the score and category.
- Download CSV or PDF for reviews, audits, and renewals (see the export sketch below).
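As a sketch of what the CSV step might produce (the column names and the AssessmentResult shape are assumptions, not the calculator's actual export format):

```typescript
// One CSV row per completed assessment; column names are illustrative.
interface AssessmentResult {
  vendor: string;
  assessedOn: string; // ISO date, e.g. "2025-06-30"
  score: number;      // 0–100
  category: "Low" | "Medium" | "High";
}

function toCsv(results: AssessmentResult[]): string {
  const header = "Vendor,Assessment Date,Score,Category";
  const rows = results.map(
    // Quote the vendor name in case it contains commas.
    (r) => `"${r.vendor.replace(/"/g, '""')}",${r.assessedOn},${r.score.toFixed(2)},${r.category}`
  );
  return [header, ...rows].join("\n");
}

console.log(toCsv([
  { vendor: "CloudOps Partner", assessedOn: "2025-06-30", score: 55, category: "Medium" },
]));
```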
Vendor risk scoring strengthens third‑party governance
Third parties remain a common pathway for incidents because access, data sharing, and outsourced operations expand the attack surface. A structured score converts assessment inputs into a consistent 0–100 signal, so procurement can compare vendors across business units and the wider enterprise. Maintaining an inventory with owner, service, and criticality makes scoring repeatable. When every vendor has a score, teams can focus reviews on the highest‑exposure suppliers first.
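One way to keep that inventory consistent is a typed record per vendor; the owner, service, and criticality fields mirror the paragraph above, and the remaining fields are illustrative:

```typescript
// A minimal vendor inventory record; the score fields are illustrative extras.
interface VendorRecord {
  name: string;
  owner: string;                          // accountable business owner
  service: string;                        // what the vendor provides
  criticality: "low" | "medium" | "high"; // business criticality, distinct from the risk score
  lastScore?: number;                     // most recent 0–100 assessment score
  lastAssessed?: string;                  // ISO date of the last assessment
}

// Surface the highest-exposure suppliers first for review.
function reviewQueue(inventory: VendorRecord[]): VendorRecord[] {
  return [...inventory].sort((a, b) => (b.lastScore ?? 0) - (a.lastScore ?? 0));
}
```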
Weighting aligns the score with business impact
Not every factor matters equally. For example, a payroll processor handling regulated identifiers may deserve a 15% data‑sensitivity weight, while a design agency with limited access might warrant 5%. This calculator normalizes weights to prevent math errors when totals differ from 100%, ensuring the final score remains comparable across assessments. If weights total 120%, each weight is divided by the 120% total (so a 15% weight effectively counts as 12.5%), preserving relative emphasis.
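A sketch of that normalization step; the numbers match the 120% example above:

```typescript
// Normalize weights so they sum to 1, preserving relative emphasis.
function normalizeWeights(weights: number[]): number[] {
  const total = weights.reduce((sum, w) => sum + w, 0);
  return weights.map((w) => w / total);
}

// Weights totaling 120% scale down proportionally: 15% → 0.125, 5% → ~0.0417.
console.log(normalizeWeights([15, 5, 30, 30, 40]));
```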
Interpreting Low, Medium, and High ranges
Scores below 34 suggest controls and exposure are generally manageable with standard onboarding checks and annual re‑validation. Scores from 34 to 66 indicate meaningful gaps or elevated exposure that justify enhanced due diligence, remediation milestones, and tighter contractual terms. Scores at or above 67 warrant deep assessment, executive review, and continuous monitoring until risk drops. Track whether remediation reduces the score in the next cycle.
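A small sketch of tracking that cycle-over-cycle movement, using the thresholds above (function names are illustrative):

```typescript
type Category = "Low" | "Medium" | "High";

// Thresholds from the ranges above: Low < 34, Medium 34 to below 67, High ≥ 67.
function categorize(score: number): Category {
  return score < 34 ? "Low" : score < 67 ? "Medium" : "High";
}

// Compare two assessment cycles and report whether remediation moved the needle.
function remediationTrend(previous: number, current: number): string {
  const delta = current - previous;
  const from = categorize(previous);
  const to = categorize(current);
  if (delta < 0 && from !== to) return `Improved: ${from} → ${to} (${delta.toFixed(2)})`;
  if (delta < 0) return `Improving within ${to} (${delta.toFixed(2)})`;
  return `No reduction (+${delta.toFixed(2)}); keep monitoring`;
}

console.log(remediationTrend(72.5, 61.0)); // Improved: High → Medium (-11.50)
```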
Evidence sources that strengthen ratings
Ratings are strongest when supported by evidence, not promises. Useful sources include SOC 2 or ISO 27001 reports, penetration test summaries, vulnerability scanning cadence, incident response playbooks, and disaster recovery test results. Operational data matters too: patching timelines, mean time to detect, and log retention periods can differentiate similar vendors. Confirm MFA coverage, encryption in transit and at rest, and least‑privilege access reviews.
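If ratings need to stay auditable, one option is to store the supporting evidence alongside each rating; this shape is a suggestion, not part of the calculator:

```typescript
// Tie each factor rating to the evidence behind it (field names are illustrative).
interface EvidencedRating {
  factor: string;            // e.g. "Incident response"
  rating: 1 | 2 | 3 | 4 | 5; // 1 = lowest risk, 5 = highest risk
  evidence: string[];        // e.g. ["SOC 2 Type II report", "Q1 pen test summary"]
  assumptions?: string;      // documented when evidence is incomplete
}
```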
Using results in contracts and continuous monitoring
A score becomes actionable when tied to requirements. Medium and High vendors can be assigned remediation deadlines, security SLAs, notification timelines, and audit rights. Define measurable targets such as critical patch timelines, log retention days, and quarterly tabletop exercises. Re‑score quarterly for higher tiers, and trigger re‑assessment after scope changes, new integrations, or incidents. Over a year, trend charts of scores can show whether the program is reducing systemic third‑party risk.
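A sketch of that cadence logic, following the annual/quarterly rhythm described here (the exact day counts are assumptions):

```typescript
// Decide whether a vendor is due for re-scoring:
// annual for Low, quarterly for Medium and High, immediate on scope changes.
function rescoreDue(
  category: "Low" | "Medium" | "High",
  daysSinceLastScore: number,
  scopeChangedOrIncident: boolean
): boolean {
  if (scopeChangedOrIncident) return true; // new integrations, expanded scope, incidents
  const cadenceDays = category === "Low" ? 365 : 90;
  return daysSinceLastScore >= cadenceDays;
}

console.log(rescoreDue("High", 45, false)); // false: inside the quarterly window
console.log(rescoreDue("Low", 400, false)); // true: past the annual cadence
```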
FAQs
What does a higher score mean?
A higher score indicates higher estimated third‑party cyber risk based on your ratings and weights. It reflects greater exposure, weaker controls, or both, and typically calls for stronger due diligence and monitoring.
Why do weights normalize automatically?
Teams often adjust weights by preference, and totals may not equal 100. Normalization scales each weight proportionally so the score remains on a consistent 0–100 range without manual recalculation.
How often should we re‑score vendors?
Re‑score at least annually for low‑risk vendors. For medium or high risk, re‑score quarterly or after major changes, such as new integrations, expanded data scope, or an incident.
Can we add more factors?
Yes. Add new rating and weight fields, include them in the factor arrays, and update the breakdown table. Keep ratings consistent and revise category thresholds if your program’s appetite differs.
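For example, if the calculator keeps its factors in an array, a new factor could be appended like this (identifiers are hypothetical, since the tool's internal names aren't shown here):

```typescript
// Factors kept in one array; adding a factor means appending a definition here.
const factors = [
  { id: "dataSensitivity", label: "Data Sensitivity", defaultWeight: 15 },
  { id: "accessLevel", label: "Access Level", defaultWeight: 10 },
  // New factor: append it, and weight normalization keeps the score on 0–100.
  { id: "subprocessorUse", label: "Subprocessor Use", defaultWeight: 10 },
];
```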
How should we choose ratings?
Use evidence-based inputs: audit reports, security policies, test summaries, incident records, and operational metrics. When unsure, rate conservatively and document assumptions so stakeholders can review and adjust later.
Is this score a replacement for audits?
No. It is a prioritization and communication tool. Use it to decide which vendors need deeper technical assessment, contract controls, or continuous monitoring, then validate with appropriate audits and testing.