Vendor Risk Rating Calculator

Quantify third‑party exposure with a consistent scoring model. Adjust weights to match your critical systems. See the rating instantly and share it with teams.

Calculator

Each of the nine criteria below is scored from 1 (lower risk) to 5 (higher risk); lower evidence confidence slightly increases the final risk.

  • Data sensitivity: 1 = public data · 5 = regulated or highly sensitive
  • Access level: 1 = no access · 5 = privileged or admin access
  • Connectivity: 1 = isolated · 5 = persistent integrations/VPN
  • Control maturity: 1 = ad-hoc · 5 = audited, mature program
  • Incident history: 1 = none/transparent · 5 = repeated/unclear
  • Compliance impact: 1 = none · 5 = major regulatory scope
  • Business criticality: 1 = optional · 5 = core operations dependency
  • Fourth-party reliance: 1 = none · 5 = extensive fourth parties
  • Financial stability: 1 = strong · 5 = uncertain/volatile

Tip: Keep weights aligned to your threat model and data scope. Total weights must equal 100.

Example data table

Vendor                  Base (0–100)  Confidence (0–1)  Final (0–100)  Tier    Notes
Acme Cloud Services     61.2          0.80              66.7           Medium  Privileged access with integrations; remediation pending.
Northwind Payroll       72.4          0.70              80.0           High    Regulated data; limited evidence; fourth parties involved.
Contoso Design Studio   24.8          0.90              26.3           Low     No production access; minimal data exchange.

Example values are illustrative and may not reflect real vendors.

Formula used

Each criterion uses a 1–5 score and a weight percentage; scores are normalized to 0–100:

  NormalizedScore   = (Score / 5) × 100
  BaseRisk          = Σ (NormalizedScore × Weight% / 100)
  ConfidencePenalty = (1 − Confidence) / 0.5, clamped to [0, 1]
  FinalRisk         = BaseRisk × (1 + 0.15 × ConfidencePenalty)

FinalRisk is capped between 0 and 100. Risk tiers: Low < 34, Medium 34–66.9, High ≥ 67.
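As a sketch, the formula can be implemented directly. The function name, criterion keys, and one-decimal rounding are illustrative assumptions; `scores` and `weights` are assumed to be dicts keyed by the same criterion names:

```python
# Sketch of the scoring formula above; names and rounding are illustrative.
def vendor_risk(scores, weights, confidence):
    """scores: 1-5 per criterion; weights: percentages totaling 100;
    confidence: evidence confidence in [0, 1]. Returns (base, final, tier)."""
    if sum(weights.values()) != 100:
        raise ValueError("weights must total 100")
    # BaseRisk = sum of (Score / 5) * 100 * Weight% / 100
    base = sum((scores[c] / 5) * 100 * (weights[c] / 100) for c in weights)
    # ConfidencePenalty = (1 - Confidence) / 0.5, clamped to [0, 1]
    penalty = min(max((1 - confidence) / 0.5, 0.0), 1.0)
    # FinalRisk = BaseRisk * (1 + 0.15 * ConfidencePenalty), capped at 0-100
    final = min(max(base * (1 + 0.15 * penalty), 0.0), 100.0)
    tier = "Low" if final < 34 else "Medium" if final < 67 else "High"
    return round(base, 1), round(final, 1), tier
```

With full confidence (1.0) the penalty is zero and the final risk equals the base risk; at confidence 0.5 or below the full 15% uplift applies.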

How to use

  1. Enter the vendor name and assessment date.
  2. Score each criterion from 1 (lower risk) to 5 (higher risk).
  3. Set weights so the total equals 100.
  4. Choose an evidence confidence level based on documentation quality.
  5. Press Submit to see results above the form.
  6. Download CSV or PDF to share your assessment.

Why vendor risk scoring matters

Third‑party services often hold data, credentials, or operational influence that expands your attack surface. A structured rating converts qualitative findings into a comparable 0–100 score. Using a 1–5 rubric reduces reviewer bias and supports repeatable decisions. Organizations that standardize vendor assessments typically shorten onboarding cycles and improve remediation tracking because expectations are explicit. When scores are stored over time, trends highlight vendors whose risk is increasing due to scope creep or deteriorating controls.

Choosing practical criteria

This calculator uses nine criteria that map to common third‑party risk drivers: sensitivity of handled data, level of access, connectivity, control maturity, incident history, compliance impact, business criticality, subcontractor reliance, and financial stability. Each criterion is intentionally broad so you can score consistently with limited information. For example, “connectivity” captures whether integrations are transient, API‑based, or persistent network links. “Incident history” considers both frequency and transparency of disclosures.

Weighting to reflect exposure

Weights force prioritization. If you process regulated records, allocate more weight to data sensitivity and compliance impact. If vendors receive privileged access, raise access level and control maturity weights. The calculator validates that weights total 100 so the score remains interpretable. Normalizing the 1–5 scores to 0–100 keeps units consistent and makes each contribution easy to explain to stakeholders.
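The per-criterion contributions mentioned above can be broken out before summing; this is a sketch, and the criterion keys are illustrative:

```python
# Sketch: per-criterion contribution to the 0-100 base risk (keys illustrative).
def contributions(scores, weights):
    """Return each criterion's weighted share of the base risk."""
    if sum(weights.values()) != 100:
        raise ValueError("weights must total 100")
    # NormalizedScore * Weight% / 100, one line item per criterion
    return {c: (scores[c] / 5) * 100 * (weights[c] / 100) for c in weights}
```

Summing the returned values reproduces the base risk, so each line item can be shown to stakeholders on its own.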

Interpreting tiers and actions

Risk tiers translate scores into governance actions. Low risk often fits standard clauses and annual reviews. Medium risk should trigger time‑bound remediation, evidence requests, and more frequent monitoring. High risk warrants executive approval, stronger contractual controls, and technical safeguards such as least‑privilege, segmentation, and logging requirements. Confidence also matters: limited evidence increases the final score slightly to reflect uncertainty and encourage follow‑up.

Operationalizing continuous monitoring

A rating is most useful when paired with workflow. Store the inputs, evidence links, and compensating controls alongside the final score. Reassess after changes in data types, integrations, or subcontractors. Use the CSV export to load a register, and the PDF export for procurement packets. Track remediation due dates, and compare quarter‑over‑quarter scores to verify that promised improvements reduce measurable risk. Pair scores with security questionnaires, penetration summaries, and SLA metrics to keep assessments objective.
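Loading the CSV export into a register can be sketched as follows; the column names here are hypothetical, so adapt them to the calculator's actual export schema:

```python
import csv
import os

# Hypothetical register columns; adapt to match the actual CSV export.
FIELDS = ["vendor", "date", "base", "confidence", "final", "tier", "notes"]

def append_to_register(path, assessment):
    """Append one assessment (a dict keyed by FIELDS) to a CSV register."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # first row of a fresh register
        writer.writerow(assessment)
```

Appending rather than overwriting preserves the history needed for the quarter-over-quarter comparisons described above.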

FAQs

What does the final risk score represent?

It is a weighted 0–100 rating based on normalized 1–5 scores across the selected criteria. Higher values indicate greater exposure, weaker controls, or higher uncertainty, and should drive stronger governance and monitoring actions.

How should I choose the weights?

Start with your threat model and data classification. Increase weights for criteria that create the most impact if compromised, such as regulated data, privileged access, persistent connectivity, or critical operational dependency.

Why does evidence confidence affect the score?

When evidence is limited, the calculator applies a small uplift to reflect uncertainty. This encourages follow‑up documentation, validation testing, or contractual commitments before granting broader access or expanding scope.

Can I compare vendors from different service types?

Yes, as long as you apply the same rubric and weight set. If service types differ significantly, maintain separate weight templates so comparisons remain fair and aligned to the exposure each vendor introduces.

How often should vendors be reassessed?

At least annually for low risk, quarterly for medium risk, and monthly or after major changes for high risk. Always reassess after scope, integration, data type, or subcontractor changes.

Which inputs typically raise risk the most?

High data sensitivity, privileged access, persistent integrations, low control maturity, and unclear incident history commonly drive scores upward. Business criticality and fourth‑party reliance also increase governance needs because failures can cascade across operations.

Scoring guidance

  • 1: minimal exposure, limited scope, strong evidence.
  • 3: moderate exposure, partial controls, mixed evidence.
  • 5: high exposure, weak controls, uncertain evidence.

Keep a consistent rubric across vendors. This improves comparability and helps prioritize remediation efforts.

Common weight patterns

Use these as starting points:
  • Regulated data: increase Data sensitivity and Compliance impact.
  • Privileged access: increase Access level and Security controls.
  • Critical operations: increase Business criticality and Connectivity.
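These starting points can be captured as reusable weight templates. The specific splits below are illustrative assumptions, not recommendations; the only hard constraint is that each template totals 100:

```python
# Illustrative weight templates (splits are assumptions, not recommendations).
# Each maps the nine criteria to percentages that must total 100.
WEIGHT_TEMPLATES = {
    "regulated_data": {        # emphasize Data sensitivity + Compliance impact
        "data_sensitivity": 20, "access_level": 10, "connectivity": 10,
        "control_maturity": 10, "incident_history": 10, "compliance_impact": 20,
        "business_criticality": 8, "fourth_party_reliance": 6, "financial_stability": 6,
    },
    "privileged_access": {     # emphasize Access level + Control maturity
        "data_sensitivity": 12, "access_level": 20, "connectivity": 12,
        "control_maturity": 20, "incident_history": 10, "compliance_impact": 8,
        "business_criticality": 8, "fourth_party_reliance": 5, "financial_stability": 5,
    },
    "critical_operations": {   # emphasize Business criticality + Connectivity
        "data_sensitivity": 10, "access_level": 10, "connectivity": 18,
        "control_maturity": 10, "incident_history": 8, "compliance_impact": 8,
        "business_criticality": 20, "fourth_party_reliance": 8, "financial_stability": 8,
    },
}

for name, weights in WEIGHT_TEMPLATES.items():
    assert sum(weights.values()) == 100, f"{name} weights must total 100"
```

Separate templates per service type keep comparisons fair, as noted in the FAQs above.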

Related Calculators

Vendor Risk Score · Third Party Risk · Supplier Security Risk · Vendor Breach Impact · Supplier Risk Index · Third Party Vulnerability · Supplier Cyber Risk · Vendor Trust Score · Third Party Maturity · Supplier Incident Impact

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.