## Calculator Inputs

### Example Data Table
| Control ID | Domain | Target State | Current State | Gap Level | Owner |
|---|---|---|---|---|---|
| AC-02 | Access Management | MFA enforced everywhere | Partial coverage on legacy VPN | High | IAM Team |
| CM-06 | Configuration | Standard secure baselines | Servers standardized, endpoints mixed | Medium | Endpoint Ops |
| DE-04 | Monitoring | Centralized alert correlation | Logs collected, correlation limited | Medium | SOC |
| IR-05 | Incident Response | Tested playbooks quarterly | Playbooks exist, tests overdue | High | IR Lead |
| CP-09 | Resilience | Verified recovery objectives | Backups present, restoration unverified | High | Infrastructure |
| AT-02 | Awareness | Role-based training complete | Annual training only | Low | Security Training |
## Formulas Used
- Applicable Controls = Total Controls − Not Applicable Controls
- Effective Implemented Controls = Fully Implemented + 0.50 × Partially Implemented + 0.25 × Planned/In Review
- Status Coverage (%) = Effective Implemented Controls ÷ Applicable Controls × 100
- Maturity Score (%) = Average of policy, technical, monitoring, and response maturity (each scored 0 to 5) ÷ 5 × 100
- Critical Success Rate (%) = 100 − (Critical Failures ÷ Applicable Controls × 100)
- Assurance Score (%) = Average of evidence quality, test pass rate, and critical success rate
- Base Readiness (%) = 0.45 × Status Coverage + 0.30 × Maturity Score + 0.25 × Assurance Score
- Inherent Risk (%) = (Threat Exposure + Business Impact) ÷ 10 × 100
- Risk-Adjusted Coverage (%) = Base Readiness − 0.15 × Inherent Risk
- Gap Percentage (%) = 100 − Risk-Adjusted Coverage
- Priority Index = 0.55 × Gap Percentage + 0.25 × Critical Failure Rate + 0.10 × (100 − Test Pass Rate) + 0.10 × (100 − Evidence Quality), where Critical Failure Rate (%) = 100 − Critical Success Rate
- Residual Risk Score = ((0.60 × Gap Percentage) + (0.40 × Inherent Risk)) ÷ 20, clamped to the range 1 to 5
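The formulas above can be sketched in code. This is a minimal illustration, not the calculator's actual implementation; all function and variable names are assumptions chosen to mirror the formula list.

```python
def readiness_metrics(
    total_controls, not_applicable,
    fully, partial, planned,                  # control counts by status
    evidence_quality, test_pass_rate,         # percentages, 0-100
    policy, technical, monitoring, response,  # maturity scores, 0-5
    threat_exposure, business_impact,         # inherent risk inputs, 1-5
    critical_failures,
):
    """Apply the documented gap-analysis formulas to one set of inputs."""
    applicable = total_controls - not_applicable
    effective = fully + 0.50 * partial + 0.25 * planned
    status_coverage = effective / applicable * 100
    maturity_score = (policy + technical + monitoring + response) / 4 / 5 * 100
    critical_failure_rate = critical_failures / applicable * 100
    critical_success_rate = 100 - critical_failure_rate
    assurance = (evidence_quality + test_pass_rate + critical_success_rate) / 3
    base_readiness = (0.45 * status_coverage + 0.30 * maturity_score
                      + 0.25 * assurance)
    inherent_risk = (threat_exposure + business_impact) / 10 * 100
    risk_adjusted = base_readiness - 0.15 * inherent_risk
    gap = 100 - risk_adjusted
    priority = (0.55 * gap + 0.25 * critical_failure_rate
                + 0.10 * (100 - test_pass_rate)
                + 0.10 * (100 - evidence_quality))
    # Residual risk is scaled to a 1-5 band and clamped at both ends.
    residual = min(5.0, max(1.0, (0.60 * gap + 0.40 * inherent_risk) / 20))
    return {
        "risk_adjusted_coverage": risk_adjusted,
        "gap_percentage": gap,
        "priority_index": priority,
        "residual_risk_score": residual,
    }
```

For example, 100 total controls with 10 not applicable, 40 fully implemented, 30 partial, and 10 planned, combined with moderate maturity and high inherent risk, yields a gap percentage of roughly 49 and a residual risk score near 3.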
## How to Use This Calculator
- Select the framework and define the scope you assessed.
- Enter total controls and exclude any controls that are not applicable.
- Distribute the applicable controls across fully implemented, partially implemented, planned/in review, and not implemented.
- Enter evidence quality and control testing pass rate as percentages.
- Score policy, technical, monitoring, and response maturity on a 0 to 5 scale.
- Rate threat exposure and business impact from 1 to 5 to reflect inherent risk.
- Click Analyze Gaps to generate readiness metrics, the priority index, residual risk score, and the Plotly chart.
- Download results as CSV or PDF for audit preparation, board reporting, or remediation tracking.
## Frequently Asked Questions
1. What does this calculator measure?
It estimates how well your current safeguards align with your target control environment. It blends implementation status, maturity, evidence strength, testing success, and inherent risk into one readiness view.
2. Why are partially implemented controls discounted?
Partial controls lower risk, but they rarely deliver full design intent. Weighting them below fully implemented controls keeps the score realistic and highlights unfinished remediation work.
3. What is the difference between gap percentage and priority index?
Gap percentage shows missing or weakened coverage. Priority index adds urgency by considering critical failures, weak evidence, and weak testing, helping teams decide what to fix first.
4. How should I score maturity from 0 to 5?
Use 0 for nonexistent, 1 for ad hoc, 2 for repeatable, 3 for defined, 4 for managed, and 5 for optimized. Stay consistent across domains.
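The 0-to-5 scale above can be encoded as a small helper to keep scoring consistent across domains. This is an illustrative sketch; the names are assumptions and not part of the calculator itself.

```python
# Labels taken from the maturity scale described in the FAQ.
MATURITY_LEVELS = {
    0: "nonexistent",
    1: "ad hoc",
    2: "repeatable",
    3: "defined",
    4: "managed",
    5: "optimized",
}

def maturity_score_pct(policy, technical, monitoring, response):
    """Average the four 0-5 domain scores and convert to a percentage."""
    scores = (policy, technical, monitoring, response)
    if not all(0 <= s <= 5 for s in scores):
        raise ValueError("each maturity score must be between 0 and 5")
    return sum(scores) / len(scores) / 5 * 100
```

Scoring policy and technical maturity as "defined" (3) and monitoring and response as "repeatable" (2) gives a maturity score of 50%.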
5. Can I use this for any framework?
Yes. The calculator is framework-agnostic. You can map control counts from common standards or internal baselines, as long as your scoring method remains consistent.
6. What should count as a critical failure?
A critical failure is a severe weakness in a key safeguard, test, or process that materially increases exposure. Examples include broken MFA, untested recovery, or missing privileged access reviews.
7. Why does inherent risk reduce the final coverage score?
High-threat or high-impact environments demand stronger controls. The adjustment prevents moderate control quality from appearing safer than it actually is in a more hostile environment.
8. How often should I rerun the analysis?
Run it after major audits, remediation waves, architecture changes, or quarterly governance reviews. Frequent updates show whether your control program is improving over time.