How to use this calculator
- Select a model that matches your data and context.
- Enter base parameters for that model.
- Set exposure and control effectiveness carefully.
- Optional: enable simulation to see uncertainty bands.
- Press Calculate to view results above the form.
- Download CSV or PDF to document the scenario.
Example data table
| Scenario | Model | Key inputs | Base probability | Adjusted probability | Expected loss |
|---|---|---|---|---|---|
| Peak traffic outage | Logistic score | score 6, b0 -5.3, b1 0.8 | ≈ 37.8% | ≈ 29.7% | ≈ 11,880 |
| Recurring API errors | Rate-based | λ 0.35, t 6 | ≈ 87.8% | ≈ 79.0% | ≈ 31,600 |
| Fraud signal spike | Bayesian update | p0 0.15, LR 2.5 | ≈ 30.6% | ≈ 23.6% | ≈ 9,440 |
Notes
- Apply control effectiveness on the odds scale so that multiple multipliers combine without distortion.
- Keep exposure near 1 for a neutral baseline.
- If you have historical labels, fit intercept and slope from data.
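The last note above can be made concrete. This is a minimal sketch of fitting the intercept and slope from labeled history via gradient descent on log-loss; the function name, learning rate, and sample data are illustrative assumptions, not the calculator's actual fitting routine (in practice a library such as scikit-learn would typically be used):

```python
import math

def fit_logistic(scores, labels, lr=0.1, epochs=2000):
    """Estimate intercept b0 and slope b1 by gradient descent on log-loss."""
    b0, b1 = 0.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += (p - y) / n                            # gradient w.r.t. intercept
            g1 += (p - y) * x / n                        # gradient w.r.t. slope
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical history: (risk score, did an incident occur?)
scores = [1, 2, 3, 4, 5, 6, 7, 8]
labels = [0, 0, 0, 0, 1, 0, 1, 1]
b0, b1 = fit_logistic(scores, labels)  # slope comes out positive: higher score, higher risk
```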
Inputs that drive probability and loss
This calculator separates likelihood from consequence for clarity. Enter impact (money, hours, or points) and detection reduction to reflect loss avoided by monitoring and response. If impact is 50,000 and detection reduction is 20%, mitigated impact becomes 40,000. Exposure multiplier represents operating intensity: 1.0 is normal, 1.5–3.0 can represent peak demand, broader access, or higher threat activity.
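The impact arithmetic above can be sketched in a few lines; function names are illustrative, and the adjusted probability of 0.297 is taken from the example table:

```python
def mitigated_impact(impact, detection_reduction):
    """Impact remaining after detection/response avoids part of the loss."""
    return impact * (1.0 - detection_reduction)

def expected_loss(adjusted_probability, impact, detection_reduction):
    """Expected loss = adjusted probability x mitigated impact."""
    return adjusted_probability * mitigated_impact(impact, detection_reduction)

# Worked example from the text: 50,000 impact, 20% detection reduction.
# mitigated_impact(50_000, 0.20) gives 40,000;
# expected_loss(0.297, 50_000, 0.20) gives about 11,880.
```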
Choosing a base probability model
Use logistic scoring when you have a score or features. With intercept −5.3 and slope 0.8, score 6 gives a base probability near 37.8%. Use the rate model for repeatable events: p = 1 − e^(−λt). With λ = 0.35 per month and t = 6 months, base probability is about 87.8%. Use Bayesian updating for new evidence. Prior 0.15 with likelihood ratio 2.5 updates to roughly 30.6%.
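The three base models can be written as plain functions. This is a sketch with illustrative names, not the calculator's published internals:

```python
import math

def logistic_probability(score, intercept, slope):
    """p = 1 / (1 + e^-(intercept + slope * score))"""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * score)))

def rate_probability(lam, t):
    """p = 1 - e^(-lambda * t): chance of at least one event over horizon t."""
    return 1.0 - math.exp(-lam * t)

def bayesian_update(prior, likelihood_ratio):
    """Multiply prior odds by the likelihood ratio, convert back to probability."""
    posterior_odds = prior / (1.0 - prior) * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# rate_probability(0.35, 6) is about 0.878;
# bayesian_update(0.15, 2.5) is about 0.306.
```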
Odds-based adjustments for controls and exposure
Adjustments are applied on odds to avoid distortion near 0% or 100%. Convert p to odds O = p/(1−p), multiply by exposure and by (1 − control%). A 30% control effectiveness multiplies odds by 0.70. Exposure 2.0 with 30% controls multiplies odds by 1.40, then converts back to an adjusted probability within valid bounds.
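The odds-scale adjustment described above amounts to a few lines; the function name and signature are illustrative assumptions:

```python
def adjust_probability(p, exposure=1.0, control_effectiveness=0.0):
    """Apply exposure and controls on the odds scale, then convert back to p."""
    odds = p / (1.0 - p)                              # p -> odds
    odds *= exposure * (1.0 - control_effectiveness)  # multiply on odds
    return odds / (1.0 + odds)                        # odds -> p, always in (0, 1)

# Bayesian scenario from the table: 30.6% base with 30% controls
# adjust_probability(0.306, exposure=1.0, control_effectiveness=0.30)
# gives about 0.236.
```

Because the multiplication happens on odds, the result can never escape the 0–100% range, even with large exposure multipliers applied to already-high probabilities.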
Simulation outputs and uncertainty ranges
Optional simulation summarizes uncertainty with mean, median, and a confidence interval. Logistic simulation adds normal noise to the score using your SD. Rate and likelihood-ratio uncertainty use lognormal variability driven by a coefficient of variation (CV). A practical baseline is 5,000 runs with a 90% interval. Wide intervals indicate inputs needing tighter measurement or better expert calibration.
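A minimal Monte Carlo sketch of the logistic-score case follows, assuming normal noise on the score as described; the function name, defaults, and seeding are illustrative, not the calculator's implementation:

```python
import math
import random
import statistics

def simulate_logistic(score, intercept, slope, score_sd,
                      exposure=1.0, controls=0.0,
                      runs=5000, interval=0.90, seed=0):
    """Monte Carlo: perturb the score, adjust odds, summarize the spread."""
    rng = random.Random(seed)
    samples = []
    for _ in range(runs):
        s = rng.gauss(score, score_sd)                    # normal noise on score
        p = 1.0 / (1.0 + math.exp(-(intercept + slope * s)))
        odds = p / (1.0 - p) * exposure * (1.0 - controls)
        samples.append(odds / (1.0 + odds))
    samples.sort()
    lo = samples[int((1.0 - interval) / 2 * runs)]        # lower percentile
    hi = samples[int((1.0 + interval) / 2 * runs)]        # upper percentile
    return statistics.mean(samples), statistics.median(samples), (lo, hi)
```

Rate and likelihood-ratio inputs would be perturbed the same way, but with lognormal draws scaled by the CV instead of normal noise on a score.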
Using results for decisions and documentation
Use adjusted probability for ranking and expected loss for sizing mitigations. Run scenarios by changing one driver at a time, such as control effectiveness from 30% to 50% or exposure from 1.0 to 1.8. CSV export supports tracking and audit trails; the PDF report supports stakeholder review. Record sources for each parameter—logs, experiments, or workshops—then revisit values when systems, controls, or threat conditions change. Risk bands map probability to action: Low under 20%, Moderate 20–49%, High 50–79%, Critical 80% and above. Align these thresholds with appetite statements and incident response playbooks for consistent decisions across teams.
FAQs
Which model should I use?
Choose logistic scoring when you have a risk score or features. Use the rate model for recurring event frequency over a time horizon. Use Bayesian updating when you start with a prior belief and incorporate new evidence via a likelihood ratio.
What does control effectiveness change?
Control effectiveness reduces odds, not raw probability. A 30% value multiplies odds by 0.70, representing prevention strength. This makes combined multipliers behave sensibly even when base probability is very low or very high.
How should I set the exposure multiplier?
Use 1.0 for baseline operations. Increase it when the system is exposed more often or more broadly, such as peak traffic, expanded user access, or heightened threat activity. Keep changes modest unless you have supporting data.
How is expected loss computed?
First, impact is reduced by detection reduction to get mitigated impact. Then expected loss equals adjusted probability multiplied by mitigated impact. It provides an interpretable single number for comparing scenarios and prioritizing mitigations.
What do the simulation percentiles mean?
The interval summarizes uncertainty in your inputs. The lower and upper values bound the adjusted probability at your chosen confidence level. If the interval is wide, focus on improving the most uncertain parameters, such as score SD or rate CV.
Can I export results for reporting?
Yes. CSV captures the numeric outputs and the full input set as JSON for auditability. PDF produces a compact report with probability, band, expected loss, and optional uncertainty summary for quick stakeholder review.