Example Data Table
| Vendor | Service | Adjusted Score | Tier | Suggested Action |
|---|---|---|---|---|
| NorthBridge Support | Customer ticketing | 28.40 | Guarded | Collect evidence and schedule quarterly review. |
| ArcPay | Payments | 61.25 | High | Limit access and validate remediation quickly. |
| HelioAnalytics | Data processing | 47.90 | Elevated | Require remediation plan within 60 days. |
| CloudVault | File storage | 83.10 | Critical | Pause onboarding until risk is reduced. |
| OnSitePrint | Office printing | 16.70 | Low | Maintain baseline controls and review annually. |
Formula Used
Each factor is scored from 1 (best) to 5 (worst). Scores are normalized to 0–100 using:

normalized = ((factor score − 1) / 4) × 100

The base risk score is the weighted sum of all normalized factors:

base score = Σ (weight × normalized factor score), with the weights summing to 1.0

If you enter a mitigation adjustment (capped at 30%), the final score becomes:

adjusted score = base score × (1 − mitigation %)
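The pipeline above can be sketched in a few lines of Python. The factor names, ratings, and weights here are illustrative assumptions, not the calculator's actual configuration; only the arithmetic follows the formulas.

```python
# Sketch of the scoring pipeline: normalize 1-5 ratings to 0-100,
# take a weighted sum, then apply a capped mitigation reduction.

def normalize(rating: int) -> float:
    """Map a 1 (best) to 5 (worst) rating onto a 0-100 scale."""
    return (rating - 1) / 4 * 100

def base_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted sum of normalized factor scores; weights must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[f] * normalize(r) for f, r in ratings.items())

def adjusted_score(base: float, mitigation: float) -> float:
    """Apply a verified-mitigation reduction, capped at 30%."""
    return base * (1 - min(mitigation, 0.30))

# Example with three assumed factors and assumed weights.
ratings = {"data_sensitivity": 4, "privileged_access": 3, "monitoring": 2}
weights = {"data_sensitivity": 0.5, "privileged_access": 0.3, "monitoring": 0.2}

base = base_score(ratings, weights)          # 0.5*75 + 0.3*50 + 0.2*25 = 57.5
print(round(adjusted_score(base, 0.20), 2))  # 57.5 * (1 - 0.20) = 46.0
```

Note that `min(mitigation, 0.30)` enforces the 30% cap: entering a 50% mitigation would still only reduce the base score by 30%.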
How to Use This Calculator
- Enter the vendor, service, assessor, and review date.
- Score each factor using evidence and contracts.
- Use 1 for best-case and 5 for worst-case.
- Add mitigation only for verified, tested controls.
- Click Calculate Risk to see results above.
- Download CSV or PDF for audits and tracking.
Quantifying Supplier Exposure
Third parties extend your attack surface beyond owned controls. This calculator converts qualitative findings into a comparable 0–100 score using weighted factors such as data sensitivity, privileged access, and monitoring coverage. Each factor is scored 1–5, normalized to a 0–100 scale, then combined by weights that sum to 1.0. Standardized scoring helps teams rank vendors, compare renewals, and defend remediation budgets with measurable changes across quarters.
Interpreting Likelihood and Impact
Likelihood summarizes how easily weaknesses could be exploited, emphasizing exposure, vulnerability history, patch cadence, control maturity, monitoring, and response readiness. Impact summarizes potential business damage, emphasizing criticality, regulated data, access level, compliance exposure, privileged access, and subcontractor dependence. A vendor can score moderate overall yet show high impact; that profile demands stricter access controls. Use the two views to separate probable from catastrophic, and to prioritize compensating controls accordingly.
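The two views can be computed by grouping factors and averaging their normalized ratings within each group. The factor names, ratings, and the likelihood/impact assignments below are assumptions for the sketch; the grouping idea is what matters.

```python
# Illustrative likelihood/impact split: each factor belongs to one view,
# and each view averages its factors' normalized (0-100) ratings.

FACTOR_VIEW = {
    "patch_cadence": "likelihood",
    "monitoring_coverage": "likelihood",
    "vulnerability_history": "likelihood",
    "data_sensitivity": "impact",
    "privileged_access": "impact",
    "compliance_exposure": "impact",
}

def view_scores(ratings: dict[str, int]) -> dict[str, float]:
    """Average normalized ratings within each view."""
    groups: dict[str, list[float]] = {"likelihood": [], "impact": []}
    for factor, rating in ratings.items():
        groups[FACTOR_VIEW[factor]].append((rating - 1) / 4 * 100)
    return {view: sum(vals) / len(vals) for view, vals in groups.items()}

ratings = {
    "patch_cadence": 2, "monitoring_coverage": 2, "vulnerability_history": 1,
    "data_sensitivity": 5, "privileged_access": 4, "compliance_exposure": 4,
}
# Low likelihood but high impact: prioritize access controls over patching.
print(view_scores(ratings))
```

A profile like this one (well-patched vendor holding highly sensitive data) is exactly the case where the overall score looks moderate while the impact view demands stricter access controls.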
Mitigation Evidence and Adjustment
Mitigation is applied only when compensating controls are verified, tested, and contractually enforceable. Examples include segmented access, enforced MFA, continuous logging with alerting, incident notification SLAs, and independent testing reports. The adjustment is capped at 30% to avoid masking weak fundamentals. Apply smaller reductions for partial coverage, and reserve larger reductions for controls that directly reduce exploitability, privilege, and data exposure.
Operationalizing Vendor Governance
Use score bands to trigger actions: low scores align to annual attestations, guarded scores to quarterly evidence checks, elevated scores to remediation plans, high scores to access limitations, and critical scores to an onboarding pause. Many organizations treat 60+ as urgent and 80+ as stop-ship, aligning response timelines to contractual remediation windows for critical services and data. Tie actions to procurement gates, identity provisioning, network segmentation, and renewal decisions. Track owners, due dates, and verification steps, and record risk acceptances with expiry dates so exceptions do not persist silently.
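A banding rule might look like the sketch below. The exact boundaries are assumptions chosen to be consistent with the example table and the 60+ / 80+ guidance above; adjust them to your program's policy.

```python
# Map an adjusted 0-100 score to an action tier. Boundaries are assumed.

def tier(score: float) -> str:
    if score >= 80:
        return "Critical"  # stop-ship: pause onboarding
    if score >= 60:
        return "High"      # urgent: limit access, validate remediation
    if score >= 40:
        return "Elevated"  # remediation plan within 60 days
    if score >= 20:
        return "Guarded"   # quarterly evidence checks
    return "Low"           # annual attestation

for vendor, score in [("CloudVault", 83.10), ("ArcPay", 61.25),
                      ("HelioAnalytics", 47.90), ("NorthBridge Support", 28.40),
                      ("OnSitePrint", 16.70)]:
    print(f"{vendor}: {tier(score)}")
```

With these boundaries, the five vendors in the example table land in the same tiers shown there.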
Audit-Ready Reviews and Cadence
Exports support governance by capturing inputs, timestamps, and computed results in consistent formats for GRC systems and ticketing workflows. Reassess on material change events such as new integrations, increased data scope, incident disclosures, geographic expansion, or mergers. Trend scores over time to validate vendor improvements, measure program coverage, and reduce surprise exposure during audits and security reviews.
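A minimal export could serialize each assessment with its inputs, computed result, and a timestamp as a consistent CSV row. The field names here are illustrative assumptions; the point is a stable, machine-readable record for GRC systems.

```python
# Sketch of an audit export: one CSV row per assessment, stamped with a
# timezone-aware UTC timestamp at export time.

import csv
import io
from datetime import datetime, timezone

def export_rows(assessments: list[dict]) -> str:
    """Render assessments as CSV text with a consistent column order."""
    fields = ["vendor", "service", "adjusted_score", "tier", "assessed_at"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for a in assessments:
        # Stamp each row so trends can be plotted across review cycles.
        row = dict(a, assessed_at=datetime.now(timezone.utc).isoformat())
        writer.writerow(row)
    return buf.getvalue()

print(export_rows([{"vendor": "ArcPay", "service": "Payments",
                    "adjusted_score": 61.25, "tier": "High"}]))
```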
FAQs
How is the adjusted score calculated?
Each factor is normalized, multiplied by its weight, and summed to a base score. The mitigation percentage then reduces the base score to produce the adjusted score used for tiering.
What evidence should support factor ratings?
Use contracts, architecture diagrams, security questionnaires, test reports, incident history, and verified control outcomes. Favor auditable artifacts over statements, and document any assumptions in notes for future reviewers.
Why does mitigation have a 30% limit?
A cap prevents strong claims from hiding weak fundamentals. It encourages targeted compensating controls while keeping the score anchored to exposure, sensitivity, access level, and governance maturity.
How should I use likelihood and impact together?
High likelihood calls for rapid hardening and monitoring. High impact calls for stricter access, segmentation, and contractual protections. When both are high, prioritize remediation and consider pausing onboarding or limiting scope.
When should vendors be reassessed?
Reassess at least annually, and sooner after incidents, major product changes, new integrations, increased data scope, or changes in subcontractors. Use consistent scoring to track improvement and validate remediation.
Can I compare vendors across different services?
Yes, as long as factor ratings reflect the specific service and integration. Keep scoring guidelines consistent, and use exports to trend scores, support renewal decisions, and communicate risk to stakeholders.