Supplier Compliance Gap Calculator

Assess each supplier against your required security baseline, adjust scoring for data sensitivity and access, and turn the findings into an actionable, prioritized improvement roadmap.

Calculator

Enter your supplier’s assessment counts and risk context. Counts are auto-aligned to the total if they do not match.

Input notes:
  • Target compliance: 90–99% is recommended for regulated or critical vendors.
  • Partially compliant controls count as 50% in scoring.
  • Counts are auto-recalculated if they do not sum to the total.
  • Each compensating control adds 25% credit, not full compliance.
  • The risk weight amplifies the gap.
  • More days since evidence was updated raise the timeliness multiplier.
  • Currency examples: USD, EUR, GBP, PKR.

Example data table

Sample vendor snapshots to illustrate inputs and outcomes. Replace with your real assessment results.

| Supplier        | Total | Compliant | Partial | Noncompliant | Critical Failed | Target | Typical Outcome                   |
|-----------------|-------|-----------|---------|--------------|-----------------|--------|-----------------------------------|
| Acme Hosting    | 120   | 98        | 16      | 6            | 1               | 95%    | Small gap, medium follow-up       |
| Bright Payroll  | 90    | 60        | 18      | 12           | 4               | 92%    | High gap, urgent remediation plan |
| Core Analytics  | 150   | 120       | 20      | 10           | 0               | 90%    | Low gap, monitor evidence cadence |
Tip: Use “Days since evidence was updated” to highlight stale attestations and aging documents.

Formula used

1) Base compliance
Base = ((C × 1.0) + (P × 0.5) + (K × 0.25)) ÷ T × 100
  • T = total required controls
  • C = compliant controls
  • P = partially compliant controls
  • K = compensating controls (optional)
2) Gap and risk adjustment
Gap = max(0, Target − Base)
RiskAdjustedGap = min(100, Gap × RiskMultiplier)
RiskMultiplier increases with failed critical controls, stale evidence, exposure (data/access/criticality), and high-risk findings.
3) Effort and cost
Hours = T × (RiskAdjustedGap ÷ 100) × HoursPerControl
Cost = Hours × HourlyRate
This model is designed for prioritization. Keep your control mapping consistent across suppliers for fair comparisons.
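The three steps above can be combined into a short sketch. This is a minimal Python implementation of the stated formulas; the function and parameter names are illustrative, not part of the calculator itself.

```python
def gap_estimate(total, compliant, partial, compensating,
                 target_pct, risk_multiplier,
                 hours_per_control, hourly_rate):
    """Sketch of the gap model: base compliance, risk-adjusted gap, effort, cost."""
    # 1) Base compliance: partial earns 50% credit, compensating 25%
    base = (compliant * 1.0 + partial * 0.5 + compensating * 0.25) / total * 100
    # 2) Gap against target, amplified by the risk multiplier, capped at 100
    gap = max(0.0, target_pct - base)
    risk_adjusted_gap = min(100.0, gap * risk_multiplier)
    # 3) Effort and cost
    hours = total * (risk_adjusted_gap / 100) * hours_per_control
    cost = hours * hourly_rate
    return round(base, 1), round(risk_adjusted_gap, 1), round(hours, 1), round(cost, 2)
```

With the Acme Hosting row from the example table (T=120, C=98, P=16, no compensating controls), a 95% target, and assumed values of 1.2 for the risk multiplier, 4 hours per control, and a $100 rate, the base score comes out near 88.3% and the risk-adjusted gap near 8%.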

How to use this calculator

  1. Pick the framework that matches your vendor assessment checklist.
  2. Enter your target compliance percentage for the supplier tier.
  3. Fill control counts from your most recent review evidence.
  4. Add critical failures and high-risk findings from audit notes.
  5. Set data sensitivity, access level, and business criticality.
  6. Adjust weights to match your internal risk appetite.
  7. Click Calculate and export results for tracking.

Supplier exposure and control drift

Third‑party access expands your attack surface beyond your perimeter. A structured gap score helps compare suppliers consistently. Many programs split controls into governance, technical safeguards, and incident readiness. Track two numbers: compliance percentage and critical-control pass rate. Define tiers such as Low (≥90%), Moderate (75–89%), and High (<75%). Prioritize vendors handling production data, privileged access, or code changes, and schedule deeper reviews first.
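The tiers named above reduce to a trivial classifier. The thresholds come from the text; the function name is an assumption.

```python
def tier(compliance_pct):
    """Map a compliance percentage to the risk tiers defined in the text."""
    if compliance_pct >= 90:      # Low risk: >= 90%
        return "Low"
    if compliance_pct >= 75:      # Moderate risk: 75-89%
        return "Moderate"
    return "High"                 # High risk: < 75%
```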

Gap scoring that stays audit-friendly

This calculator converts your checklist into a weighted score so critical items matter more. Assign higher weights to controls that reduce breach likelihood, such as MFA, patch SLAs, secure backups, and log retention. Treat “not applicable” as excluded, not failed, and document the reason. Weighted score = earned points ÷ available points × 100. Add a “critical fail” flag when any must-have control scores zero. Keep a short weight rationale to support audit sampling.
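One way to sketch that weighted scheme, including the N/A exclusion and the critical-fail flag, is below; the data layout (weight, score, criticality per control) is an assumption for illustration.

```python
def weighted_score(controls):
    """controls: list of (weight, score in 0..1 or None for N/A, is_critical).

    Returns (weighted percentage, critical_fail flag)."""
    earned = available = 0.0
    critical_fail = False
    for weight, score, is_critical in controls:
        if score is None:
            continue                      # "not applicable": excluded, not failed
        available += weight
        earned += weight * score
        if is_critical and score == 0:
            critical_fail = True          # any must-have control scoring zero
    pct = earned / available * 100 if available else 0.0
    return round(pct, 1), critical_fail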

Costing remediation with realistic effort

Numbers are useful when they translate into actions and budgets. Estimate remediation hours per failed control and multiply by an agreed hourly rate. Add contingency for complex changes like segmentation, endpoint rollout, or SIEM onboarding. Separate one-time project costs from recurring platform costs for approvals. Map each failed control to an owner, duration, and target date. The best outputs are projected spend, total hours, days-to-close, and a ranked list of the largest risk reductions.
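A minimal sketch of that roll-up, assuming each failed control is mapped to an owner, an hour estimate, and an expected risk reduction; the 15% contingency default is illustrative, not prescribed by the model.

```python
def remediation_plan(items, hourly_rate, contingency=0.15):
    """items: list of (control, owner, hours, risk_reduction).

    Returns total hours, one-time cost with contingency, and controls
    ranked by largest risk reduction first."""
    total_hours = sum(i[2] for i in items)
    cost = total_hours * hourly_rate * (1 + contingency)
    ranked = [i[0] for i in sorted(items, key=lambda i: i[3], reverse=True)]
    return total_hours, round(cost, 2), ranked
```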

Using evidence quality to reduce disputes

Evidence quality is where assessments often stall. Require artifacts that match the intent: policy plus enforcement, not policy alone. Use an evidence rating, for example 0–5, then scale earned points by evidence/5 to reward stronger proof. Ask for timestamps, configuration exports, vulnerability reports, and ticket history. Spot-check samples like access logs or incident postmortems. This reduces “paper compliance” and improves scoring integrity.
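The evidence scaling described here can be sketched as follows, using the 0–5 rating from the text; names are illustrative.

```python
def evidence_adjusted_points(weight, passed, evidence_rating):
    """Scale earned points by evidence quality (0-5) to reward stronger proof."""
    if not passed:
        return 0.0
    return weight * (evidence_rating / 5)
```

A passing control with weak evidence (rating 3) earns only 60% of its weight, which penalizes "paper compliance" without treating it as an outright failure.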

Operationalizing results with procurement

Use results to drive contract terms and continuous monitoring, not just a one-time questionnaire. Set thresholds, such as no critical gaps before onboarding and a 90‑day window for medium gaps. Tie gaps to clauses: right-to-audit, breach notice timelines, and minimum safeguards. Re-score quarterly for high-risk vendors and semiannually for others. Feed outcomes into renewals, pricing, and exception approvals to keep risk measurable.

FAQs

What is a supplier compliance gap score?

It summarizes how far a supplier’s security controls are from your requirements. The score combines control weights, pass/fail results, and optional evidence strength into one percentage for easy comparison.

How should I choose control weights?

Increase weights for controls that protect sensitive data and reduce incident impact, such as identity, patching, backups, and monitoring. Keep weights simple, document the rationale, and apply the same scheme to all suppliers.

How do I handle “not applicable” controls?

Exclude them from the available total rather than marking them as failures. Record why they are not applicable, and verify the scope does not change later during onboarding or contract renewal.

What does “critical fail” mean?

A critical fail indicates at least one must-have control is missing or unproven. Even if the overall score looks acceptable, treat critical fails as blockers until remediation is completed or a formal exception is approved.

How accurate are cost and timeline estimates?

They are planning estimates based on your entered hours, rates, and contingency. Improve accuracy by using historical remediation data, validating dependencies with the supplier, and separating one-time work from recurring service fees.

How often should suppliers be reassessed?

High-risk suppliers are commonly reassessed quarterly, while lower-risk suppliers can be reviewed semiannually or annually. Reassess immediately after major incidents, significant scope changes, or when key controls move from partial to implemented.

Practical note: If you use multiple assessor teams, standardize definitions for “partial” and “compensating” and require evidence dates to reduce scoring drift.

Related Calculators

Vendor Risk Score, Third Party Risk, Supplier Security Risk, Vendor Breach Impact, Vendor Risk Rating, Supplier Risk Index, Third Party Vulnerability, Supplier Cyber Risk, Vendor Trust Score, Third Party Maturity

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.