| Third party | Service | Risk tier | Overall score | Maturity level | Target | Status |
|---|---|---|---|---|---|---|
| Acme Payments | Card processing | Critical | 81.2 | Managed | 85 | Needs improvement |
| Northwind Cloud | SaaS analytics | High | 74.6 | Managed | 75 | Needs improvement |
| Contoso Support | Helpdesk outsourcing | Moderate | 67.8 | Defined | 65 | Aligned |
| Fabrikam Marketing | Consulting | Low | 58.0 | Defined | 55 | Aligned |
DomainScoreᵢ = (Ratingᵢ ÷ 5) × NormalizedWeightᵢ
OverallMaturity = Σᵢ DomainScoreᵢ
RiskTier (from inherent risk %): Low (<25), Moderate (25–<50), High (50–<75), Critical (≥75)
TargetScore by tier: 55 (Low), 65 (Moderate), 75 (High), 85 (Critical)
Status = Aligned if OverallMaturity ≥ TargetScore, otherwise Needs improvement
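The formulas above can be sketched in Python. The tier targets and the Aligned threshold follow the values listed; the domain names, ratings, and weights are illustrative.

```python
# Sketch of the maturity formulas; domain names, ratings, and weights are illustrative.

def overall_maturity(ratings, weights):
    """ratings: domain -> 0..5; weights: domain -> raw weight on any positive scale."""
    total_weight = sum(weights.values())
    # Normalize weights to a 100-point scale, then sum the weighted contributions.
    return sum((ratings[d] / 5) * (weights[d] / total_weight * 100) for d in ratings)

def target_score(risk_tier):
    """Risk-based target from the tier table above."""
    return {"Low": 55, "Moderate": 65, "High": 75, "Critical": 85}[risk_tier]

def status(maturity, risk_tier):
    return "Aligned" if maturity >= target_score(risk_tier) else "Needs improvement"

# Example: two equally weighted domains, rated 4/5 and 3/5.
score = overall_maturity({"Access control": 4, "Incident response": 3},
                         {"Access control": 50, "Incident response": 50})
print(round(score, 1), status(score, "High"))  # 70.0 Needs improvement
```

Note that the same 70.0 score would read "Aligned" for a Moderate-tier vendor, which is exactly the trade-off the tiered targets encode.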
- Collect evidence: policies, SOC reports, pen test summaries, and IR contacts.
- Pick risk factors to reflect the vendor’s real exposure.
- Rate each domain from 0 to 5 using the descriptions.
- Optionally enable custom weights for your control priorities.
- Submit to view score, tiered target, gap, and priorities.
- Download CSV or PDF to share with stakeholders.
Third-party maturity insights
Vendor risk drivers and scoring inputs
The calculator converts six exposure drivers into an inherent risk percentage: criticality, data sensitivity, access type, service model, geography, and subcontractor reliance. Each selection adds risk points and is normalized against the maximum possible total. A vendor marked “Critical,” handling “Restricted” data, and using “Privileged” access will naturally produce a higher risk percentage than a low-impact supplier with no access.
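As a sketch, here is how driver points could normalize into an inherent risk percentage. The per-driver point values, and the use of only three of the six drivers, are assumptions for illustration, not the calculator's actual table; the tier thresholds follow the scoring notes above.

```python
# Hypothetical point values per driver; only three of the six drivers are shown,
# and the real calculator's values may differ.
DRIVER_POINTS = {
    "criticality":      {"Low": 0, "Moderate": 1, "High": 2, "Critical": 3},
    "data_sensitivity": {"Public": 0, "Internal": 1, "Confidential": 2, "Restricted": 3},
    "access_type":      {"None": 0, "Standard": 1, "Privileged": 2},
}

def inherent_risk_pct(selections):
    """Sum the chosen drivers' points and normalize against the maximum possible total."""
    points = sum(DRIVER_POINTS[driver][choice] for driver, choice in selections.items())
    max_points = sum(max(options.values()) for options in DRIVER_POINTS.values())
    return points / max_points * 100

def risk_tier(pct):
    if pct < 25: return "Low"
    if pct < 50: return "Moderate"
    if pct < 75: return "High"
    return "Critical"

pct = inherent_risk_pct({"criticality": "Critical",
                         "data_sensitivity": "Restricted",
                         "access_type": "Privileged"})
print(round(pct), risk_tier(pct))  # 100 Critical
```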
Domain ratings mapped to a 100-point maturity score
You rate 12 control domains on a 0–5 scale, then the tool normalizes weights to 100% and sums weighted contributions. If a domain has 10% weight and a rating of 4/5, it contributes 8 points. This structure makes trade-offs visible: improving a high-weight domain often raises the overall score faster than small gains across low-weight domains.
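The normalization step can be made concrete: raw weights need not sum to 100, because the tool scales them before summing. The 10%-weight, 4/5-rating example from the text works out as follows (the domain names are hypothetical).

```python
def normalize_weights(raw_weights):
    """Scale raw weights so they sum to 100 points."""
    total = sum(raw_weights.values())
    return {domain: weight * 100 / total for domain, weight in raw_weights.items()}

# Raw weights of 2 and 18 normalize to 10 and 90 points.
weights = normalize_weights({"asset_mgmt": 2, "access_control": 18})
print(weights["asset_mgmt"])            # 10.0
# A 4/5 rating in a 10-point domain contributes 8 points, as in the text.
print((4 / 5) * weights["asset_mgmt"])  # 8.0
```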
Evidence benchmarks that reduce uncertainty
Consistent scoring depends on evidence quality. Common artifacts include SOC/ISO reports, policy excerpts, penetration test summaries, vulnerability scan results, patch SLAs, incident notification contacts, and DR test reports. Many programs set remediation expectations such as 15–30 days for critical findings and 60–90 days for medium findings, with documented exceptions. When evidence is partial, capture compensating controls in Notes and schedule a re-check.
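A minimal way to operationalize those remediation windows is to compute a due date per finding severity. The day counts below take the tighter end of each stated range and are program-specific assumptions.

```python
from datetime import date, timedelta

# SLA windows from the benchmarks above; the exact day counts are program-specific
# assumptions (the tighter end of each stated range is used here).
REMEDIATION_SLA_DAYS = {"critical": 15, "medium": 60}

def remediation_due(found_on, severity):
    """Date by which a finding of the given severity should be remediated."""
    return found_on + timedelta(days=REMEDIATION_SLA_DAYS[severity])

print(remediation_due(date(2024, 3, 1), "critical"))  # 2024-03-16
```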
Interpreting targets and gaps for decisions
The output is most useful when you compare overall score to the risk-based target. Typical targets are 55 (Low), 65 (Moderate), 75 (High), and 85 (Critical). A score of 72 can be acceptable for a moderate-risk supplier but insufficient for a high-risk supplier. Use “Top improvement priorities” to steer contract addenda, milestone plans, and executive risk acceptance.
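One way to derive both a gap and a priority list from the same inputs is sketched below. The ranking heuristic — points recoverable if a domain were raised to 5/5 — and the domain names are assumptions, not necessarily how the tool orders its "Top improvement priorities".

```python
def gap_to_target(overall, target):
    """Points short of the risk-based target; zero when already aligned."""
    return max(0.0, target - overall)

def improvement_priorities(ratings, norm_weights):
    """Rank domains by the points recoverable if each were raised to 5/5.
    This heuristic is illustrative; the tool may order priorities differently."""
    gain = {d: (5 - ratings[d]) / 5 * norm_weights[d] for d in ratings}
    return sorted(gain, key=gain.get, reverse=True)

ratings = {"vuln_mgmt": 2, "logging": 5, "bcp": 3}
weights = {"vuln_mgmt": 40, "logging": 40, "bcp": 20}  # already normalized to 100
print(gap_to_target(72, 75))                  # 3
print(improvement_priorities(ratings, weights))  # ['vuln_mgmt', 'bcp', 'logging']
```

Here the high-weight, low-rated domain dominates the list: raising `vuln_mgmt` by one point adds 8 points, more than closing the example's 3-point gap.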
Operational cadence for continuous improvement
Run the assessment at onboarding, after major scope changes, and at least annually for high-risk vendors; moderate-risk vendors are often reviewed every 18–24 months. Track trends by exporting CSV or PDF and keeping the same rubric. Aim for measurable movement: shorter patch timelines, broader MFA coverage, expanded log review, and incident response drills run every 6–12 months. Re-score after evidence updates to confirm improvements. For critical suppliers, align BCP metrics by confirming tested RTO/RPO, backup encryption, and restore validation, then record the results in Notes for external auditors.
FAQs
What does a 0 rating mean?
A 0 indicates the control is not implemented or there is no usable evidence. Use Notes to capture planned remediation and re-score once proof is available.
How should I set custom weights?
Weights should reflect what matters most for the vendor's service. Increase the weights of domains that most directly mitigate the vendor's highest exposures, then apply the same weighting scheme across comparable suppliers so their scores remain comparable.
Why does risk tier change the target score?
Higher exposure warrants stronger assurance. The target increases as inherent risk rises, helping you avoid accepting “average” maturity for suppliers with privileged access or sensitive data.
Can I assess subcontractors and fourth parties?
Yes. Treat them as third parties and apply the same rubric. For shared services, prioritize subcontractor oversight, contractual flow-down clauses, and evidence that controls extend across the chain.
How do CSV and PDF exports work?
After you calculate, the tool stores the latest result in session and exports that snapshot. Use the export links in the Results card to download a report for governance records.
How often should I reassess a vendor?
Reassess at onboarding, after material changes, and on a cadence aligned to risk. High-risk vendors are commonly reviewed annually, while moderate vendors can be reviewed every 18–24 months.