Third Party Maturity Calculator

Score third parties using controls, evidence, and monitoring. See maturity level, gaps, and next steps. Download reports instantly and track progress securely over time.

Assessment inputs
Rate each domain and adjust weights if needed.
  • Third party name: use the legal vendor or service name.
  • Assessment date: use the evidence collection completion date; helpful for audit trails and follow-ups.
  • Business criticality: operational impact if the service fails.
  • Data sensitivity: highest classification processed or stored.
  • Access type: privilege level into systems or environments.
  • Service model: delivery style affects oversight and tooling.
  • Geography: cross-border rules increase assurance needs.
  • Subcontractor reliance: fourth-party controls and flow-down clauses.
Each control domain carries a default weight; all weights auto-normalize to total 100%:
  • Accountability, policies, and executive sponsorship (10%)
  • Pre-contract screening, questionnaires, and evidence review (10%)
  • Security clauses, SLAs, audit rights, and breach terms (8%)
  • Least privilege, MFA, and joiner/mover/leaver discipline (8%)
  • Encryption, key management, retention, and secure transfer (10%)
  • Release controls, approvals, rollback, and separation of duties (8%)
  • Scanning cadence, remediation SLAs, and patch governance (8%)
  • Central logging, alerting, anomaly detection, and review (8%)
  • Playbooks, contacts, testing, and contractual notification windows (8%)
  • BCP/DR testing, RTO/RPO alignment, and resilience (7%)
  • SOC/ISO artifacts, controls testing, and transparency (7%)
  • Fourth-party oversight and flow-down requirements (8%)
Notes are included in the PDF export.
Example data table
Use this sample to understand typical maturity and risk patterns.
Third party         Service               Risk tier  Overall score  Maturity level  Target  Status
Acme Payments       Card processing       Critical   81.2           Managed         85      Needs improvement
Northwind Cloud     SaaS analytics        High       74.6           Managed         75      Needs improvement
Contoso Support     Helpdesk outsourcing  Moderate   67.8           Defined         65      Aligned
Fabrikam Marketing  Consulting            Low        58.0           Defined         55      Aligned
Tip: Compare score vs target, not score alone.
Formula used
1) Weighted maturity score (0–100)
Each domain is rated from 0 to 5. Weights are normalized to total 100%.
NormalizedWeightᵢ = (Weightᵢ ÷ ΣWeight) × 100
DomainScoreᵢ = (Ratingᵢ ÷ 5) × NormalizedWeightᵢ
OverallMaturity = Σ DomainScoreᵢ
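The two formulas above can be sketched in Python. This is a minimal illustration of the normalization-and-sum logic, not the tool's actual code:

```python
def overall_maturity(ratings, weights):
    """Weighted maturity score on a 0-100 scale.

    ratings: dict of domain -> rating (0 to 5)
    weights: dict of domain -> raw weight (any positive numbers)
    """
    total_weight = sum(weights.values())
    score = 0.0
    for domain, rating in ratings.items():
        normalized = weights[domain] / total_weight * 100  # NormalizedWeight_i
        score += rating / 5 * normalized                   # DomainScore_i
    return round(score, 1)

# Two domains with equal raw weights normalize to 50% each.
ratings = {"governance": 4, "access": 3}
weights = {"governance": 10, "access": 10}
print(overall_maturity(ratings, weights))  # 70.0
```

Because weights are normalized, only their ratios matter: entering 10 and 10 produces the same result as 50 and 50.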
2) Inherent risk and target score
Risk is estimated from criticality, data sensitivity, access, service model, geography, and subcontractor reliance.
InherentRisk% = (Σ RiskPoints ÷ Σ MaxRiskPoints) × 100
RiskTier: Low (<25), Moderate (25 to <50), High (50 to <75), Critical (≥75)
TargetScore by tier: Low 55, Moderate 65, High 75, Critical 85
3) Gap and alignment
Gap = max(0, TargetScore − OverallMaturity)
Status = Aligned if OverallMaturity ≥ TargetScore, otherwise Needs improvement
This keeps expectations higher for higher-risk vendors.
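The tier-to-target mapping and gap logic can be sketched as follows (a simple restatement of the rules above, not the tool's source):

```python
# Targets by tier, as listed in the formula section.
TARGETS = {"Low": 55, "Moderate": 65, "High": 75, "Critical": 85}

def risk_tier(inherent_risk_pct):
    """Map an inherent risk percentage to a tier."""
    if inherent_risk_pct >= 75:
        return "Critical"
    if inherent_risk_pct >= 50:
        return "High"
    if inherent_risk_pct >= 25:
        return "Moderate"
    return "Low"

def gap_and_status(overall_maturity, inherent_risk_pct):
    target = TARGETS[risk_tier(inherent_risk_pct)]
    gap = round(max(0, target - overall_maturity), 1)
    status = "Aligned" if overall_maturity >= target else "Needs improvement"
    return target, gap, status

# Matches the Acme Payments sample row: Critical tier, target 85, score 81.2.
print(gap_and_status(81.2, 80))  # (85, 3.8, 'Needs improvement')
```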
How to use this calculator
  • Collect evidence: policies, SOC reports, pen test summaries, and IR contacts.
  • Pick risk factors to reflect the vendor’s real exposure.
  • Rate each domain from 0 to 5 using the descriptions.
  • Optionally enable custom weights for your control priorities.
  • Submit to view score, tiered target, gap, and priorities.
  • Download CSV or PDF to share with stakeholders.
For consistent results, reuse the same rubric across vendors.

Third-party maturity insights

Vendor risk drivers and scoring inputs

The calculator converts six exposure drivers into an inherent risk percentage: criticality, data sensitivity, access type, service model, geography, and subcontractor reliance. Each selection adds risk points and is normalized against the maximum possible total. A vendor marked “Critical,” handling “Restricted” data, and using “Privileged” access will naturally produce a higher risk percentage than a low-impact supplier with no access.
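The points-over-maximum normalization can be sketched like this. The point scales below are hypothetical, chosen only to illustrate the mechanic; the calculator's actual values are not published:

```python
# Hypothetical point scales per risk driver (illustrative only).
RISK_POINTS = {
    "criticality": {"Low": 0, "Moderate": 1, "High": 2, "Critical": 3},
    "data":        {"Public": 0, "Internal": 1, "Confidential": 2, "Restricted": 3},
    "access":      {"None": 0, "Standard": 1, "Privileged": 3},
}

def inherent_risk_pct(selections):
    """Sum the points for each selection and normalize against the maximum."""
    points = sum(RISK_POINTS[factor][choice] for factor, choice in selections.items())
    max_points = sum(max(scale.values()) for scale in RISK_POINTS.values())
    return round(points / max_points * 100, 1)

# The worst-case vendor from the text scores the maximum.
print(inherent_risk_pct({"criticality": "Critical", "data": "Restricted", "access": "Privileged"}))  # 100.0
```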

Domain ratings mapped to a 100-point maturity score

You rate 12 control domains on a 0–5 scale, then the tool normalizes weights to 100% and sums weighted contributions. If a domain has 10% weight and a rating of 4/5, it contributes 8 points. This structure makes trade-offs visible: improving a high-weight domain often raises the overall score faster than small gains across low-weight domains.

Evidence benchmarks that reduce uncertainty

Consistent scoring depends on evidence quality. Common artifacts include SOC/ISO reports, policy excerpts, penetration test summaries, vulnerability scan results, patch SLAs, incident notification contacts, and DR test reports. Many programs set remediation expectations such as 15–30 days for critical findings and 60–90 days for medium findings, with documented exceptions. When evidence is partial, capture compensating controls in Notes and schedule a re-check.

Interpreting targets and gaps for decisions

The output is most useful when you compare overall score to the risk-based target. Typical targets are 55 (Low), 65 (Moderate), 75 (High), and 85 (Critical). A score of 72 can be acceptable for a moderate-risk supplier but insufficient for a high-risk supplier. Use “Top improvement priorities” to steer contract addenda, milestone plans, and executive risk acceptance.

Operational cadence for continuous improvement

Run the assessment at onboarding, after major scope changes, and at least annually for high-risk vendors; moderate-risk vendors are often reviewed every 18–24 months. Track trends by exporting CSV or PDF and keeping the same rubric. Aim for measurable movement: shorten patch timelines, increase MFA coverage, expand log review, and test incident drills every 6–12 months. Re-score after evidence updates to confirm improvements. For critical suppliers, align BCP metrics by confirming tested RTO/RPO, backup encryption, and restore validation, then record results in the notes for external auditors.
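A simple scheduling helper can keep this cadence consistent. The month values come from the guidance above; the 30-day month approximation is an assumption for illustration:

```python
from datetime import date, timedelta

# Review cadence in months by risk tier, per the guidance above.
REVIEW_MONTHS = {"Critical": 12, "High": 12, "Moderate": 18, "Low": 24}

def next_review(last_assessed: date, tier: str) -> date:
    """Approximate next review date (assumes 30-day months)."""
    return last_assessed + timedelta(days=REVIEW_MONTHS[tier] * 30)

print(next_review(date(2024, 1, 15), "High"))  # 2025-01-09, about 12 months later
```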

FAQs

What does a 0 rating mean?

A 0 indicates the control is not implemented or there is no usable evidence. Use Notes to capture planned remediation and re-score once proof is available.

How should I set custom weights?

Weights should reflect what matters most for the vendor’s service. Increase weights for domains that directly protect the vendor’s highest exposures, then keep the same weighting scheme for comparable suppliers.

Why does risk tier change the target score?

Higher exposure warrants stronger assurance. The target increases as inherent risk rises, helping you avoid accepting “average” maturity for suppliers with privileged access or sensitive data.

Can I assess subcontractors and fourth parties?

Yes. Treat them as third parties and apply the same rubric. For shared services, prioritize subcontractor oversight, contractual flow-down clauses, and evidence that controls extend across the chain.

How do CSV and PDF exports work?

After you calculate, the tool stores the latest result in your browser session and exports that snapshot. Use the export links in the Results card to download a report for governance records.

How often should I reassess a vendor?

Reassess at onboarding, after material changes, and on a cadence aligned to risk. High-risk vendors are commonly reviewed annually, while moderate vendors can be reviewed every 18–24 months.

Related Calculators

Vendor Risk Score · Third Party Risk · Supplier Security Risk · Vendor Due Diligence · Third Party Exposure · Vendor Breach Impact · Vendor Risk Rating · Supplier Risk Index · Vendor Compliance Score · Third Party Vulnerability

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.