| Vendor | Criticality | Data sensitivity | Access level | Incidents (cur/prior) | Critical vulns (cur/prior) | Patch days (cur/prior) | Findings (cur/prior) | SLA breaches (cur/prior) |
|---|---|---|---|---|---|---|---|---|
| Acme Payments | 5 | 5 | 4 | 4 / 2 | 6 / 10 | 120 / 160 | 2 / 3 | 1 / 2 |
| Northwind CRM | 3 | 3 | 2 | 1 / 1 | 3 / 3 | 45 / 60 | 1 / 1 | 0 / 1 |
| BlueCloud Hosting | 4 | 4 | 3 | 5 / 3 | 12 / 9 | 210 / 150 | 3 / 2 | 2 / 1 |
- Normalize each metric to a 0–100 risk contribution. Example: incidents risk = min(incidents / incidents_cap × 100, 100). Control gap = 100 − control effectiveness.
- Compute the composite score per period: RiskScore_t = Σ_i (Weight_i × MetricRisk_{i,t}) / 100.
- Compute the trend. With two periods: Trend% = (Current − Prior) / Prior × 100. With a monthly series: fit a line to the composite values; slope > 0 indicates rising risk.
- Assign a tier by current score: Low < 40, Moderate 40–54.99, High 55–69.99, Critical ≥ 70.
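The steps above can be sketched end to end in Python. The caps, weights, and metric names here are illustrative assumptions for the sketch, not values mandated by the calculator:

```python
# Illustrative caps and weights -- assumptions for this sketch,
# not the calculator's defaults. Weights already sum to 100.
CAPS = {"incidents": 6, "vulns": 15, "patch_days": 180, "findings": 4, "sla": 3}
WEIGHTS = {"incidents": 25, "vulns": 25, "patch_days": 20, "findings": 15, "sla": 15}

def metric_risk(value, cap):
    """Normalize a raw metric to a 0-100 risk contribution, capped at 100."""
    return min(value / cap * 100, 100)

def composite(metrics):
    """Weighted composite: sum(Weight_i * MetricRisk_i) / 100."""
    return sum(WEIGHTS[k] * metric_risk(v, CAPS[k]) for k, v in metrics.items()) / 100

def tier(score):
    """Map a 0-100 composite score to a tier using the thresholds above."""
    if score >= 70:
        return "Critical"
    if score >= 55:
        return "High"
    if score >= 40:
        return "Moderate"
    return "Low"

# Acme Payments, current period (values from the table above)
acme = {"incidents": 4, "vulns": 6, "patch_days": 120, "findings": 2, "sla": 1}
score = composite(acme)
print(round(score, 1), tier(score))  # 52.5 Moderate
```

With these assumed caps and weights, Acme Payments lands in the Moderate tier; different caps or weights would shift the score, which is exactly what the calibration exercise below is for.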
Continuous monitoring signals
Vendor programs often rely on annual questionnaires, which miss rapid changes in exposure. This calculator converts operational indicators—incidents, critical vulnerabilities, patch delay, compliance findings, and SLA breaches—into normalized 0–100 risk points. By keeping caps consistent across vendors, teams can compare suppliers fairly, even when volumes differ across business units.
Weighted drivers aligned to policy
Not every metric matters equally. Highly regulated environments typically emphasize data sensitivity and privileged access, while high-availability services may prioritize SLA breaches and patch latency. The weight panel is auto-normalized to 100, so changing one driver automatically rebalances the model without breaking totals. This supports governance reviews and transparent justification during audits.
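The auto-normalization behavior described above can be sketched as follows; the driver names and raw values are illustrative:

```python
def normalize_weights(weights):
    """Rescale raw weights so they sum to exactly 100, preserving ratios."""
    total = sum(weights.values())
    return {k: v / total * 100 for k, v in weights.items()}

# An analyst bumps incidents and vulns; raw weights now sum to 110.
raw = {"incidents": 30, "vulns": 30, "patch_days": 20, "findings": 10, "sla": 20}
normalized = normalize_weights(raw)
# Every driver is rebalanced proportionally and the total is exactly 100.
```

Because rebalancing is proportional, raising one driver implicitly lowers all the others, which is the behavior to document for audit trails.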
In practice, teams can start with default weights and caps, then run a calibration exercise. Take five representative vendors and compare the resulting tiering with analyst judgement and known outcomes from the last year. If the model overreacts to occasional incidents, reduce the incident weight or increase the incidents cap. If exposure from privileged access is understated, raise the access level and data sensitivity weights. Document each adjustment so stakeholders can trace why the scoring changed. This calibration step makes the trend output defensible and repeatable.
Trend methods for different data maturity
When you have only two periods, the calculator computes percent change from prior to current, highlighting meaningful shifts. If you also track monthly composite scores, the slope option estimates direction using a simple regression line, reducing noise from one-time events. Both methods help detect slow drift before it becomes a material incident.
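Both trend methods are short to implement. This sketch uses a plain least-squares slope over month indices; the sample score series is hypothetical:

```python
def trend_percent(current, prior):
    """Two-period trend: percent change from prior to current."""
    return (current - prior) / prior * 100

def trend_slope(monthly_scores):
    """Least-squares slope of composite score against month index.
    Units: risk points per month; > 0 indicates rising risk."""
    n = len(monthly_scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

print(trend_percent(52.5, 48.0))          # ~9.4% worsening
print(trend_slope([44, 47, 46, 50, 53]))  # positive slope despite the dip
```

Note how the slope method absorbs the month-three dip that a naive two-period comparison would flag as improvement.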
Tiering and prioritization workflow
Scores map to tiers: Low, Moderate, High, and Critical. Use tier plus direction to rank work: a High vendor that is Rising may require evidence refresh, ticket audits, and remediation commitments, while a Critical vendor that is Improving might remain monitored with shorter reporting intervals. Portfolio averages summarize overall third-party exposure for leadership.
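One way to turn tier plus direction into a ranked work queue is a simple tuple sort key. The rank values below are illustrative choices, not prescribed by the calculator:

```python
# Higher tier outranks direction; within a tier, Rising outranks Stable,
# which outranks Improving. These orderings are illustrative assumptions.
TIER_RANK = {"Critical": 3, "High": 2, "Moderate": 1, "Low": 0}
DIRECTION_RANK = {"Rising": 2, "Stable": 1, "Improving": 0}

def priority_key(vendor):
    return (TIER_RANK[vendor["tier"]], DIRECTION_RANK[vendor["direction"]])

portfolio = [
    {"name": "Acme Payments", "tier": "High", "direction": "Rising"},
    {"name": "BlueCloud Hosting", "tier": "Critical", "direction": "Improving"},
    {"name": "Northwind CRM", "tier": "Moderate", "direction": "Stable"},
]
ranked = sorted(portfolio, key=priority_key, reverse=True)
# BlueCloud Hosting first (Critical), then Acme Payments (High, Rising)
```

Teams that want an Improving Critical vendor to rank below a Rising High vendor can swap the tuple order so direction dominates; the point is to make the ranking rule explicit and reviewable.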
Export-ready reporting and evidence packs
CSV and PDF exports provide a consistent record for risk committees and procurement. Attach results to supplier files with period definitions, caps used, and weight rationale. Over time, compare exported snapshots to show whether remediation actions reduced trend pressure. This improves accountability and supports contract language tied to measurable security outcomes. Review results quarterly, and after major vendor changes, to keep governance decisions consistent across the enterprise.
FAQs
1) What does a 0–100 score represent?
It is a weighted composite of normalized risk signals. A higher score means more adverse conditions or weaker controls relative to the caps and weights you set.
2) How should I choose normalization caps?
Use realistic upper bounds from your vendor population, such as the 90th–95th percentile. Caps should stay stable for a reporting cycle to keep comparisons meaningful.
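Picking a percentile cap from the vendor population can be sketched with the nearest-rank method; the incident counts here are a hypothetical population:

```python
import math

def percentile_cap(values, pct=0.90):
    """Cap at the pct-th percentile of the population (nearest-rank method)."""
    ordered = sorted(values)
    rank = math.ceil(pct * len(ordered))
    return ordered[rank - 1]

# Hypothetical annual incident counts across ten vendors; one outlier (9).
incident_counts = [0, 1, 1, 2, 2, 3, 4, 4, 5, 9]
cap = percentile_cap(incident_counts, 0.90)  # excludes the outlier
```

Here the 90th-percentile cap is 5, so the outlier vendor saturates at 100 risk points instead of dragging every other vendor's normalized score down.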
3) When should I use the slope trend method?
Use it when you have at least three monthly composite values per vendor. The slope smooths short spikes and is helpful for detecting gradual deterioration.
4) Why is “control gap” used instead of control effectiveness?
Risk increases when controls are less effective. Converting effectiveness to a gap (100 minus effectiveness) aligns the direction of all metrics so higher values always mean higher risk.
5) Can I compare vendors across different business units?
Yes, if you keep definitions consistent: same caps, same time windows, and comparable data sources. If units track metrics differently, separate dashboards are safer.
6) Does this replace a full vendor risk assessment?
No. It complements assessments by quantifying operational trend and helping you decide where to request evidence, validate remediation, or escalate contractual controls.
Trend direction bands
- Rising = meaningful worsening from prior or a positive slope.
- Improving = meaningful reduction or a negative slope.
- Stable = change within a small band.
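The band logic above can be sketched with a simple dead-band classifier; the ±5-point band is an illustrative default, not a value the calculator fixes:

```python
def direction(trend_value, band=5.0):
    """Classify a trend into Rising / Stable / Improving.
    trend_value is percent change (two-period) or slope (series);
    band is the 'small band' treated as noise -- an assumed default."""
    if trend_value > band:
        return "Rising"
    if trend_value < -band:
        return "Improving"
    return "Stable"

print(direction(9.4))    # Rising
print(direction(-1.2))   # Stable
print(direction(-12.0))  # Improving
```

Keep the band consistent across vendors and reporting periods, and document it alongside caps and weights, so direction labels remain comparable between snapshots.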