Evaluate response preparedness using weighted controls. Compare people, processes, technology, exercises, and governance. Turn readiness gaps into practical improvement priorities.
Use scores from 0 to 100 for each readiness domain. Adjust weights to reflect business priorities and threat exposure.
The table below compares example domain-level scores and shows how response capability is distributed across areas.
| Area | Example Score | Example Weight | Interpretation |
|---|---|---|---|
| Governance | 82 | 10 | Roles, authority, and escalation are well defined. |
| Detection | 76 | 12 | Monitoring is broad, but tuning can still improve. |
| Exercises | 54 | 8 | Simulations occur infrequently and lessons close slowly. |
| Recovery | 74 | 8 | Restoration plans exist and are partly tested. |
| Metrics | 59 | 6 | Core response KPIs exist, but dashboard maturity is limited. |
Weighted readiness score = Σ(domain score × domain weight) ÷ Σ(weights).
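To make the arithmetic concrete, here is a minimal Python sketch that computes the weighted readiness score from the example rows in the table above; the five domains shown are illustrative, not the full set a real assessment would score.

```python
# Weighted readiness score = sum(domain score * domain weight) / sum(weights),
# using the example rows from the table above.
domains = {
    "Governance": (82, 10),  # (score, weight)
    "Detection": (76, 12),
    "Exercises": (54, 8),
    "Recovery": (74, 8),
    "Metrics": (59, 6),
}

weighted_sum = sum(score * weight for score, weight in domains.values())
total_weight = sum(weight for _, weight in domains.values())
weighted_readiness = weighted_sum / total_weight

print(f"Weighted readiness score: {weighted_readiness:.1f}")  # 70.7
```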
Operational adjustment = coverage bonus + staffing factor − detection penalty − recovery penalty − incident penalty.
Coverage bonus = monitoring coverage % × 0.10.
Staffing factor = min(10, responder staff ÷ critical assets × 40).
Detection penalty = min(25, MTTD hours × 0.8).
Recovery penalty = min(25, MTTR hours × 0.5).
Incident penalty = min(15, incidents per quarter × 1.2).
Final readiness score = weighted readiness score + operational adjustment, limited to a 0–100 range.
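Continuing that example, the sketch below applies the operational adjustment and the final 0–100 clamp. All operational inputs (coverage, staffing, MTTD, MTTR, incident rate) are hypothetical values chosen for illustration.

```python
# Operational adjustment and final readiness score, following the formulas
# above. Every operational input below is a hypothetical example value.
monitoring_coverage_pct = 90.0   # % of environment under monitoring
responder_staff = 5              # dedicated incident responders
critical_assets = 40             # business-critical systems in scope
mttd_hours = 12.0                # mean time to detect
mttr_hours = 24.0                # mean time to recover
incidents_per_quarter = 4

coverage_bonus = monitoring_coverage_pct * 0.10                    # 9.0
staffing_factor = min(10, responder_staff / critical_assets * 40)  # 5.0
detection_penalty = min(25, mttd_hours * 0.8)                      # 9.6
recovery_penalty = min(25, mttr_hours * 0.5)                       # 12.0
incident_penalty = min(15, incidents_per_quarter * 1.2)            # 4.8

operational_adjustment = (coverage_bonus + staffing_factor
                          - detection_penalty - recovery_penalty
                          - incident_penalty)                      # -12.4

weighted_readiness = 70.7  # from the previous sketch
final_score = max(0.0, min(100.0, weighted_readiness + operational_adjustment))
print(f"Final readiness score: {final_score:.1f}")  # 58.3
```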
The calculator estimates how prepared an organization is to detect, contain, investigate, communicate about, and recover from cybersecurity incidents, using weighted control and operational inputs.
Weights let you emphasize what matters most in your environment. A regulated business may weight governance and compliance higher, while a cloud-heavy team may weight detection and recovery more.
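To illustrate how reweighting shifts the headline number, this sketch applies two hypothetical weight profiles to the same example scores; the profile values are assumptions, not recommendations.

```python
# Same example scores, two hypothetical weight profiles.
scores = {"Governance": 82, "Detection": 76, "Exercises": 54,
          "Recovery": 74, "Metrics": 59}

profiles = {
    # Regulated business: governance weighted up (hypothetical values).
    "regulated": {"Governance": 20, "Detection": 10, "Exercises": 8,
                  "Recovery": 8, "Metrics": 6},
    # Cloud-heavy team: detection and recovery weighted up (hypothetical).
    "cloud-heavy": {"Governance": 8, "Detection": 18, "Exercises": 8,
                    "Recovery": 14, "Metrics": 6},
}

for name, weights in profiles.items():
    score = sum(scores[d] * w for d, w in weights.items()) / sum(weights.values())
    print(f"{name}: {score:.1f}")  # regulated: 72.7, cloud-heavy: 71.2
```

With these assumed profiles, the same scores yield roughly 72.7 for the regulated profile and 71.2 for the cloud-heavy one, because each profile rewards the domains it emphasizes.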
Scores of 85 or higher suggest strong maturity. Scores from 70 to 84 show managed readiness. Scores below 55 indicate significant gaps needing near-term remediation.
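A small helper can map a final score to these bands. Note that the text above names no band for scores from 55 to 69, so the "developing readiness" label in this sketch is an assumption:

```python
def readiness_band(score: float) -> str:
    """Map a final readiness score (0-100) to an interpretation band."""
    if score >= 85:
        return "strong maturity"
    if score >= 70:
        return "managed readiness"
    if score >= 55:
        # The text above names no band for 55-69; this label is assumed.
        return "developing readiness"
    return "significant gaps; near-term remediation"

print(readiness_band(58.3))  # "developing readiness", per the earlier example
```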
Use evidence such as policy reviews, control testing, tabletop results, service-level performance, audit findings, and responder feedback. Higher scores should reflect repeatable, tested practices.
Longer detection and response times often indicate weaker operational performance. The calculator subtracts penalties so slow real-world execution lowers the final readiness result.
Yes. The calculator can help structure discussions, summarize priorities, and support trend reporting, but it should complement, not replace, detailed security assessments and formal control testing.
Quarterly reviews are common. Review sooner after major incidents, tooling changes, mergers, new regulations, or significant shifts in infrastructure and threat exposure.
Yes. You can change weights, thresholds, benchmarks, or penalty factors to align the model with internal policy, sector guidance, and business-critical systems.
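One way to support that customization is to lift every tunable factor into a single configuration object. This is a sketch only; the class name and defaults are assumptions that mirror the formulas above.

```python
from dataclasses import dataclass, field

@dataclass
class ReadinessConfig:
    """Tunable model parameters; defaults mirror the formulas above."""
    weights: dict = field(default_factory=lambda: {
        "Governance": 10, "Detection": 12, "Exercises": 8,
        "Recovery": 8, "Metrics": 6})
    coverage_factor: float = 0.10   # bonus per coverage percentage point
    staffing_cap: float = 10.0
    detection_factor: float = 0.8   # penalty per MTTD hour
    detection_cap: float = 25.0
    recovery_factor: float = 0.5    # penalty per MTTR hour
    recovery_cap: float = 25.0
    incident_factor: float = 1.2    # penalty per incident per quarter
    incident_cap: float = 15.0

# Example: a sector policy that penalizes slow recovery more heavily.
strict = ReadinessConfig(recovery_factor=0.8)
```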