Assess each supplier against your required security baseline. Adjust scoring for data sensitivity and access. Turn findings into an actionable, prioritized improvement roadmap today.
Enter your supplier’s assessment counts and risk context. Counts are auto-aligned to the total if they do not match.
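The auto-alignment behavior can be sketched as a small helper. This is an illustrative assumption, not the calculator's actual code: it assumes counts are rescaled proportionally when they do not sum to the entered total, with any rounding remainder assigned to the noncompliant bucket.

```python
def align_counts(total: int, compliant: int, partial: int, noncompliant: int):
    """Proportionally rescale the three counts so they sum to `total` (assumed behavior)."""
    entered = compliant + partial + noncompliant
    if entered == total or entered == 0:
        return compliant, partial, noncompliant
    scale = total / entered
    c = round(compliant * scale)
    p = round(partial * scale)
    # Assign the remainder to noncompliant so the sum is exact.
    n = total - c - p
    return c, p, n

print(align_counts(100, 60, 30, 20))  # entered sum is 110, rescaled → (55, 27, 18)
```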
Sample vendor snapshots to illustrate inputs and outcomes. Replace with your real assessment results.
| Supplier | Total | Compliant | Partial | Noncompliant | Critical Failed | Target | Typical Outcome |
|---|---|---|---|---|---|---|---|
| Acme Hosting | 120 | 98 | 16 | 6 | 1 | 95% | Small gap, medium follow-up |
| Bright Payroll | 90 | 60 | 18 | 12 | 4 | 92% | High gap, urgent remediation plan |
| Core Analytics | 150 | 120 | 20 | 10 | 0 | 90% | Low gap, monitor evidence cadence |
Third‑party access expands your attack surface beyond your perimeter. A structured gap score helps compare suppliers consistently. Many programs split controls into governance, technical safeguards, and incident readiness. Track two numbers: compliance percentage and critical-control pass rate. Define tiers such as Low (≥90%), Moderate (75–89%), and High (<75%). Prioritize vendors handling production data, privileged access, or code changes, and schedule deeper reviews first.
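The tier definitions above map directly to a small classifier; a minimal sketch using the thresholds stated in the text:

```python
def risk_tier(compliance_pct: float) -> str:
    """Map a compliance percentage to the tiers defined above."""
    if compliance_pct >= 90:
        return "Low"
    if compliance_pct >= 75:
        return "Moderate"
    return "High"

print(risk_tier(92.0))  # Low
print(risk_tier(80.0))  # Moderate
print(risk_tier(66.7))  # High
```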
This calculator converts your checklist into a weighted score so critical items matter more. Assign higher weights to controls that reduce breach likelihood, such as MFA, patch SLAs, secure backups, and log retention. Treat “not applicable” as excluded, not failed, and document the reason. Weighted score = earned points ÷ available points × 100. Add a “critical fail” flag when any must-have control scores zero. Keep a short weight rationale to support audit sampling.
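The formula above can be sketched as follows. The control fields (`weight`, `score`, `critical`, `na`) are illustrative names, assuming each control carries a weight, a pass fraction in [0, 1], and optional critical / not-applicable flags:

```python
def weighted_score(controls):
    """Weighted score = earned points / available points * 100, excluding N/A controls."""
    earned = available = 0.0
    critical_fail = False
    for c in controls:
        if c.get("na"):  # "not applicable" is excluded, not failed
            continue
        available += c["weight"]
        earned += c["weight"] * c["score"]
        if c.get("critical") and c["score"] == 0:
            critical_fail = True  # a must-have control scored zero
    pct = 100 * earned / available if available else 0.0
    return pct, critical_fail

controls = [
    {"weight": 3, "score": 1, "critical": True},   # MFA passed
    {"weight": 2, "score": 0.5},                   # patch SLA partial
    {"weight": 2, "score": 0, "critical": True},   # backups failed
    {"weight": 1, "score": 1, "na": True},         # out of scope, excluded
]
pct, flag = weighted_score(controls)
print(round(pct, 1), flag)  # 57.1 True
```

Note that the excluded control shrinks the available total (7 points, not 8), which is exactly the "excluded, not failed" treatment described above.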
Numbers are useful when they translate into actions and budgets. Estimate remediation hours per failed control and multiply by an agreed hourly rate. Add contingency for complex changes like segmentation, endpoint rollout, or SIEM onboarding. Separate one-time project costs from recurring platform costs for approvals. Map each failed control to an owner, duration, and target date. The best outputs are projected spend, total hours, days-to-close, and a ranked list of the largest risk reductions.
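A minimal sketch of the planning arithmetic described above. The rate, contingency, and per-control hour figures are illustrative assumptions, not benchmarks:

```python
def remediation_plan(failed_controls, hourly_rate=150, contingency=0.2):
    """Return (total hours, one-time cost) from per-control hour estimates."""
    hours = sum(h for _, h in failed_controls)
    one_time_cost = hours * hourly_rate * (1 + contingency)
    return hours, one_time_cost

failed = [
    ("network segmentation", 80),
    ("endpoint rollout", 40),
    ("SIEM onboarding", 60),
]
hours, cost = remediation_plan(failed)
print(hours, cost)  # 180 32400.0
```

Recurring platform costs (licenses, monitoring fees) should be tracked separately from this one-time figure, as the text recommends for approvals.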
Evidence quality is where assessments often stall. Require artifacts that match the intent: policy plus enforcement, not policy alone. Use an evidence rating, for example 0–5, then scale earned points by evidence/5 to reward stronger proof. Ask for timestamps, configuration exports, vulnerability reports, and ticket history. Spot-check samples like access logs or incident postmortems. This reduces “paper compliance” and improves scoring integrity.
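The evidence scaling above reduces to a one-line adjustment; a sketch with illustrative parameter names:

```python
def evidence_adjusted_points(weight: float, score: float, evidence_rating: int) -> float:
    """Scale a control's earned points by its 0-5 evidence rating."""
    return weight * score * (evidence_rating / 5)

# A passed control (score 1, weight 4) backed only by a policy document
# (evidence rating 2) earns 40% of its points instead of full credit.
print(evidence_adjusted_points(4, 1, 2))  # 1.6
```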
Use results to drive contract terms and continuous monitoring, not just a one-time questionnaire. Set thresholds, such as no critical gaps before onboarding and a 90‑day window for medium gaps. Tie gaps to clauses: right-to-audit, breach notice timelines, and minimum safeguards. Re-score quarterly for high-risk vendors and semiannually for others. Feed outcomes into renewals, pricing, and exception approvals to keep risk measurable.
The gap score summarizes how far a supplier’s security controls are from your requirements. It combines control weights, pass/fail results, and optional evidence strength into one percentage for easy comparison.
Increase weights for controls that protect sensitive data and reduce incident impact, such as identity, patching, backups, and monitoring. Keep weights simple, document the rationale, and apply the same scheme to all suppliers.
Exclude them from the available total rather than marking them as failures. Record why they are not applicable, and verify the scope does not change later during onboarding or contract renewal.
A critical fail indicates at least one must-have control is missing or unproven. Even if the overall score looks acceptable, treat critical fails as blockers until remediation is completed or a formal exception is approved.
They are planning estimates based on your entered hours, rates, and contingency. Improve accuracy by using historical remediation data, validating dependencies with the supplier, and separating one-time work from recurring service fees.
High-risk suppliers are commonly reassessed quarterly, while lower-risk suppliers can be reviewed semiannually or annually. Reassess immediately after major incidents, significant scope changes, or when key controls move from partial to implemented.
Practical note: If you use multiple assessor teams, standardize definitions for “partial” and “compensating” and require evidence dates to reduce scoring drift.
Important note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.