Quickly quantify overrun likelihood from realistic uncertainty inputs. Score drivers across scope, resources, and change. Export results, share assumptions, and track mitigation progress weekly.
| Scenario | Planned cost | Forecast cost | Planned duration | Forecast duration | Uncertainty (cost/schedule) | Driver profile |
|---|---|---|---|---|---|---|
| Stable upgrade | 120,000 | 126,000 | 12 | 12.5 | 8% / 7% | Low volatility, strong experience |
| Mixed dependencies | 250,000 | 285,000 | 20 | 23 | 12% / 10% | Moderate complexity and vendor exposure |
| New build | 900,000 | 1,150,000 | 30 | 38 | 18% / 16% | High volatility, tight resources, new vendors |
Cost variance and schedule variance translate plan drift into comparable percentages. When forecast cost exceeds planned cost by 10%, exposure grows quickly because remaining work usually carries higher uncertainty. Pair the percentage with the z-score, which divides variance by typical one-sigma uncertainty. A z-score near 1 suggests manageable deviation, while values above 2 indicate atypical movement that often precedes change requests, rework, or procurement delays.
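The variance percentage and z-score described above can be sketched as follows; the 12% one-sigma figure and the helper names are illustrative assumptions, not the calculator's internals:

```python
def variance_pct(planned: float, forecast: float) -> float:
    """Plan drift expressed as a percentage of the planned value."""
    return (forecast - planned) / planned * 100.0

def z_score(variance_percentage: float, sigma_pct: float) -> float:
    """Variance in units of typical one-sigma estimating noise."""
    return variance_percentage / sigma_pct

# "Mixed dependencies" scenario from the table above
cv = variance_pct(250_000, 285_000)   # 14.0% cost variance
z = z_score(cv, 12.0)                 # ~1.17: within typical noise
```

A z of roughly 1.17 here falls in the manageable band; the same variance against a 6% sigma would score above 2 and flag atypical movement.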
Uncertainty inputs represent how noisy your estimates are, not how confident you feel. Historical estimating error, supplier lead-time spread, and productivity variation are good evidence sources. Contingency is treated as a risk reducer because it absorbs small shocks before scope tradeoffs begin. If contingency is below the suggested buffer, consider phased funding, tighter change control, or earlier design freeze rather than relying on late overtime.
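One way to make the "suggested buffer" comparison concrete is to size the reserve against k-sigma estimating noise; the multiplier k and the figures below are assumptions for illustration:

```python
def suggested_buffer(planned_cost: float, sigma_pct: float, k: float = 1.5) -> float:
    """Reserve sized to absorb k-sigma cost noise (k = 1.5 is an assumption)."""
    return planned_cost * (sigma_pct / 100.0) * k

buffer = suggested_buffer(250_000, 12.0)     # 45,000 suggested reserve
contingency = 30_000                         # hypothetical current reserve
shortfall = max(0.0, buffer - contingency)   # 15,000 gap -> consider phased funding
```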
Driver ratings summarize qualitative conditions that standard cost models miss. High scope volatility elevates risk because requirements churn forces design iteration, testing repeats, and contract revisions. Complexity increases integration defects and coordination overhead. Vendor risk captures single-source exposure, quality escapes, and shipping variability. Resource tightness reflects multitasking, onboarding churn, and limited specialized roles. Team experience offsets these by improving estimation, sequencing, and recovery actions.
The probability is produced by a logistic mapping of the combined drivers and variance indicators. It is best used for ranking projects and triggering governance thresholds, not predicting a single outcome. A score below 40 typically supports routine monitoring, 40–69 suggests targeted mitigation plans, and 70+ warrants sponsor review. Track the score weekly; a rising trend is more actionable than one isolated reading.
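A minimal sketch of such a logistic mapping is shown below; the weights, offsets, and 1-to-5 rating scale are illustrative assumptions, not the site's actual model:

```python
import math

def overrun_score(volatility: float, complexity: float, vendor: float,
                  resources: float, experience: float,
                  cost_var_pct: float, sched_var_pct: float) -> float:
    """Map driver ratings (1 = low, 5 = high) and variance percentages
    to a 0-100 score via a logistic curve. Experience reduces risk."""
    risk = volatility + complexity + vendor + resources - experience
    x = 0.25 * risk + 0.05 * cost_var_pct + 0.05 * sched_var_pct - 5.0
    return 100.0 / (1.0 + math.exp(-x))

# "New build" scenario: high ratings, ~27.8% cost and ~26.7% schedule variance
score = overrun_score(4, 4, 4, 4, 2, cost_var_pct=27.8, sched_var_pct=26.7)
```

With these assumed weights the new-build scenario lands in the 70+ band (sponsor review), while a stable upgrade with low ratings and small variances stays well below 40.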
Combine the score with the impact estimates to communicate both likelihood and consequence. Expected cost overrun highlights the immediate funding gap implied by the forecast, while expected schedule overrun flags delivery slippage that can create downstream penalties. Export the report to document assumptions, then update inputs after each milestone. If variance drops but drivers stay high, focus on stabilizing scope and suppliers to prevent rebound. For portfolios, compare scores across workstreams to prioritize leadership attention. Align buffers with contract terms and reserve policies. When mitigations complete, reduce volatility ratings to reflect evidence, and keep a clear audit trail for future lessons learned.
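Pairing likelihood with consequence can be sketched as a probability-weighted gap; the helper name and the 0.75 probability are hypothetical:

```python
def expected_overruns(planned_cost: float, forecast_cost: float,
                      planned_dur: float, forecast_dur: float,
                      probability: float) -> tuple[float, float]:
    """Likelihood-weighted cost and schedule overrun impacts."""
    cost_gap = forecast_cost - planned_cost
    sched_gap = forecast_dur - planned_dur
    return probability * cost_gap, probability * sched_gap

# "New build" scenario at an assumed 75% overrun probability
exp_cost, exp_sched = expected_overruns(900_000, 1_150_000, 30, 38, 0.75)
# 0.75 * 250,000 = 187,500 expected cost overrun; 0.75 * 8 = 6.0 periods
```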
**What does a high z-score mean?**

A high z-score means the variance is large relative to your typical estimating noise. Values above 2 suggest the deviation is unusual and may indicate scope change, productivity loss, or supplier disruption.

**Which time units should I use?**

Use any time unit, but keep planned and forecast duration in the same unit. The calculator uses ratios, so weeks, months, or days work as long as you stay consistent.

**How should I choose uncertainty percentages?**

Base uncertainty on evidence: past estimate errors, vendor lead-time spread, productivity variability, and design maturity. Start conservative, then tighten the percentages as requirements stabilize and more data becomes available.

**Does contingency eliminate overrun risk?**

No. Contingency reduces the chance that small shocks become overruns, but major scope changes can still exceed buffers. Treat contingency as a control measure and keep monitoring driver ratings and variances.

**Why does team experience lower the score?**

Experienced teams typically estimate better, sequence work efficiently, and recover faster from surprises. They also spot requirement gaps earlier, which reduces rework and limits late-stage change requests.

**How often should I recalculate?**

Recalculate at least weekly, and after any milestone, change request, or forecast update. Trends matter: a steadily rising score is a stronger trigger for action than a single high reading.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.