Overrun Risk Calculator

Quickly quantify overrun likelihood from realistic uncertainty inputs. Score drivers across scope, resources, and change. Export results, share assumptions, and track mitigation progress weekly.

Inputs

  • Baseline approved budget (BAC).
  • Current estimate at completion (EAC).
  • Typical variability around the cost estimate.
  • Baseline duration (weeks or months).
  • Current predicted duration (same unit as planned).
  • Typical variability around the schedule estimate.
  • Allocated buffer for unknowns and change.
  • Integration depth, novelty, and dependencies.
  • Likelihood of churn in requirements and deliverables.
  • Supplier stability, lead times, and contract risk.
  • Team capacity constraints and competing priorities.
  • Domain familiarity and delivery track record.
  • Typical past overrun for similar work.

Example data table

| Scenario           | Planned cost | Forecast cost | Planned duration | Forecast duration | Uncertainty (cost/schedule) | Driver profile                              |
|--------------------|--------------|---------------|------------------|-------------------|-----------------------------|---------------------------------------------|
| Stable upgrade     | 120,000      | 126,000       | 12               | 12.5              | 8% / 7%                     | Low volatility, strong experience           |
| Mixed dependencies | 250,000      | 285,000       | 20               | 23                | 12% / 10%                   | Moderate complexity and vendor exposure     |
| New build          | 900,000      | 1,150,000     | 30               | 38                | 18% / 16%                   | High volatility, tight resources, new vendors |
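The table's variance percentages can be recomputed directly from the calculator's cost- and schedule-variance formulas. A minimal sketch, using the scenario figures above:

```python
# Sketch: recompute the drift percentages implied by the example table.
scenarios = [
    ("Stable upgrade", 120_000, 126_000, 12, 12.5),
    ("Mixed dependencies", 250_000, 285_000, 20, 23),
    ("New build", 900_000, 1_150_000, 30, 38),
]

def variance_pct(planned: float, forecast: float) -> float:
    """Percentage drift of the forecast from the plan."""
    return (forecast - planned) / planned * 100

for name, pc, fc, pd, fd in scenarios:
    print(f"{name}: cost {variance_pct(pc, fc):.1f}%, "
          f"schedule {variance_pct(pd, fd):.1f}%")
```

Note that variance and uncertainty are separate inputs: the table's 8%/7% column is the one-sigma noise around the estimates, not the drift itself.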

Formula used

This calculator blends variance indicators with driver ratings to estimate an overrun probability.
  • Cost variance (%) = (Forecast cost − Planned cost) ÷ Planned cost × 100
  • Schedule variance (%) = (Forecast duration − Planned duration) ÷ Planned duration × 100
  • z-score = variance ratio ÷ uncertainty (1σ)
  • Risk probability = 1 ÷ (1 + e^(−x)), where x combines z-scores and drivers
  • Risk score = round(Probability × 100)
Driver ratings are normalized from 1–5 to 0–1. Higher contingency and experience reduce the combined risk factor.
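The bullets above can be sketched in Python. The equal 0.5 weighting of the two z-scores and the additive driver term are illustrative assumptions; the page does not publish its exact coefficients:

```python
import math

def overrun_risk(planned_cost, forecast_cost, planned_dur, forecast_dur,
                 cost_sigma_pct, sched_sigma_pct, driver_factor=0.0):
    """Blend variance z-scores and a combined driver factor through a logistic.

    The 0.5 weights and the additive driver_factor term are assumptions
    for illustration, not the calculator's published coefficients.
    """
    cost_var = (forecast_cost - planned_cost) / planned_cost * 100
    sched_var = (forecast_dur - planned_dur) / planned_dur * 100
    z_cost = cost_var / cost_sigma_pct      # deviations of typical cost noise
    z_sched = sched_var / sched_sigma_pct   # deviations of typical schedule noise
    x = 0.5 * z_cost + 0.5 * z_sched + driver_factor
    probability = 1 / (1 + math.exp(-x))    # logistic mapping to (0, 1)
    return round(probability * 100)         # risk score on a 0-100 scale
```

With the "Mixed dependencies" scenario from the table and a neutral driver factor, `overrun_risk(250_000, 285_000, 20, 23, 12, 10)` returns a score in the sponsor-review range, consistent with z-scores above 1 on both axes.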

How to use this calculator

  1. Enter baseline planned cost and planned duration from your approved plan.
  2. Enter current forecast cost and forecast duration from your latest estimate.
  3. Set uncertainty as typical one-sigma variability for similar work.
  4. Rate drivers (1–5) based on evidence, not optimism.
  5. Click Calculate to view the risk score above the form.
  6. Download CSV or PDF to share assumptions and results.

Cost and schedule variance signals

Cost variance and schedule variance translate plan drift into comparable percentages. When forecast cost exceeds planned cost by 10%, exposure grows quickly because remaining work usually carries higher uncertainty. Pair the percentage with the z-score, which divides variance by typical one-sigma uncertainty. A z-score near 1 suggests manageable deviation, while values above 2 indicate atypical movement that often precedes change requests, rework, or procurement delays.
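The z-score interpretation above can be expressed as a small helper; the band labels are paraphrases of the thresholds in the text:

```python
def z_score(variance_pct: float, sigma_pct: float) -> float:
    """How many typical (one-sigma) deviations the observed drift represents."""
    return variance_pct / sigma_pct

def classify(z: float) -> str:
    # Thresholds follow the text: near 1 is manageable, above 2 is atypical.
    if z <= 1:
        return "manageable deviation"
    if z <= 2:
        return "elevated, watch closely"
    return "atypical movement"

print(classify(z_score(10, 8)))  # 10% cost drift against 8% typical noise
```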

Uncertainty and contingency discipline

Uncertainty inputs represent how noisy your estimates are, not how confident you feel. Historical estimating error, supplier lead-time spread, and productivity variation are good evidence sources. Contingency is treated as a risk reducer because it absorbs small shocks before scope tradeoffs begin. If contingency is below the suggested buffer, consider phased funding, tighter change control, or earlier design freeze rather than relying on late overtime.
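A simple way to operationalize the contingency check is to compare held contingency against the suggested buffer. This is a sketch with hypothetical percentages, not the calculator's internal rule:

```python
def contingency_gap(contingency_pct: float, suggested_pct: float) -> float:
    """Shortfall against the suggested buffer; zero when contingency suffices."""
    return max(0.0, suggested_pct - contingency_pct)

# Hypothetical project holding 5% contingency against a 10% suggestion:
gap = contingency_gap(5.0, 10.0)
if gap > 0:
    print(f"Contingency short by {gap:.1f} points; consider phased funding "
          "or tighter change control.")
```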

Driver ratings and root causes

Driver ratings summarize qualitative conditions that standard cost models miss. High scope volatility elevates risk because requirements churn forces design iteration, testing repeats, and contract revisions. Complexity increases integration defects and coordination overhead. Vendor risk captures single-source exposure, quality escapes, and shipping variability. Resource tightness reflects multitasking, onboarding churn, and limited specialized roles. Team experience offsets these by improving estimation, sequencing, and recovery actions.
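The normalization described earlier (1–5 ratings mapped to 0–1, with experience acting as an offset) can be sketched as follows; the equal weighting of the risk-raising drivers is an assumption for illustration:

```python
def normalize(rating: int) -> float:
    """Map a 1-5 driver rating onto 0-1, as the calculator describes."""
    return (rating - 1) / 4

def driver_factor(volatility, complexity, vendor, resources, experience):
    """Average the normalized risk-raising drivers, then subtract experience.

    Equal weights and a simple subtraction for experience are assumptions;
    the page only states that experience reduces the combined risk factor.
    """
    risk_raising = [volatility, complexity, vendor, resources]
    raise_term = sum(normalize(r) for r in risk_raising) / len(risk_raising)
    return raise_term - normalize(experience)
```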

Interpreting probability and risk score

The probability is produced by a logistic mapping of the combined drivers and variance indicators. It is best used for ranking projects and triggering governance thresholds, not predicting a single outcome. A score below 40 typically supports routine monitoring, 40–69 suggests targeted mitigation plans, and 70+ warrants sponsor review. Track the score weekly; a rising trend is more actionable than one isolated reading.
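The governance bands named in the paragraph translate directly into a threshold check:

```python
def governance_band(score: int) -> str:
    """Map the 0-100 risk score onto the bands described in the text."""
    if score < 40:
        return "routine monitoring"
    if score < 70:
        return "targeted mitigation plans"
    return "sponsor review"
```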

Using results for decisions and reporting

Combine the score with the impact estimates to communicate both likelihood and consequence. Expected cost overrun highlights the immediate funding gap implied by the forecast, while expected schedule overrun flags delivery slippage that can create downstream penalties. Export the report to document assumptions, then update inputs after each milestone. If variance drops but drivers stay high, focus on stabilizing scope and suppliers to prevent rebound. For portfolios, compare scores across workstreams to prioritize leadership attention. Align buffers with contract terms and reserve policies. When mitigations complete, reduce volatility ratings to reflect evidence, and keep a clear audit trail for future lessons learned.
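One common way to pair likelihood with consequence is a probability-weighted overrun. The page does not define its exact impact formula, so treat this expected-value sketch as illustrative:

```python
def expected_overrun(probability: float, planned: float, forecast: float) -> float:
    """Probability-weighted exposure: risk likelihood times the forecast gap."""
    return probability * max(0.0, forecast - planned)

# A 0.6 overrun probability against a 35,000 forecast funding gap:
print(expected_overrun(0.6, 250_000, 285_000))
```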

FAQs

What does a high z-score mean?

A high z-score means the variance is large relative to your typical estimating noise. Values above 2 suggest the deviation is unusual and may indicate scope change, productivity loss, or supplier disruption.

Should I use weeks or months for duration?

Use any time unit, but keep planned and forecast duration in the same unit. The calculator uses ratios, so weeks, months, or days work as long as you stay consistent.

How do I choose uncertainty percentages?

Base uncertainty on evidence: past estimate errors, vendor lead-time spread, productivity variability, and design maturity. Start conservative, then tighten the percentages as requirements stabilize and more data becomes available.
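One evidence-based way to set the uncertainty input is the sample standard deviation of past estimate errors. The error values below are hypothetical:

```python
import statistics

# Hypothetical past estimate errors (%) for similar completed projects:
past_errors_pct = [4.0, -2.0, 9.0, 6.0, 12.0, -1.0, 7.0]

# One-sigma variability of historical estimating error:
sigma = statistics.stdev(past_errors_pct)
print(f"Suggested uncertainty input: ~{sigma:.0f}%")
```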

Does contingency remove risk?

No. Contingency reduces the chance that small shocks become overruns, but major scope changes can still exceed buffers. Treat contingency as a control measure and keep monitoring driver ratings and variances.

Why is team experience a risk reducer?

Experienced teams typically estimate better, sequence work efficiently, and recover faster from surprises. They also spot requirement gaps earlier, which reduces rework and limits late-stage change requests.

How often should I recalculate?

Recalculate at least weekly, and after any milestone, change request, or forecast update. Trends matter: a steadily rising score is a stronger trigger for action than a single high reading.
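The trend emphasis above can be checked mechanically: flag when the most recent weekly scores are strictly rising. The weekly figures are hypothetical:

```python
def rising_trend(scores, window=3):
    """True when the last `window` weekly scores are strictly increasing."""
    recent = scores[-window:]
    return len(recent) == window and all(a < b for a, b in zip(recent, recent[1:]))

weekly = [42, 41, 45, 49, 55]  # hypothetical weekly risk scores
print(rising_trend(weekly))    # sustained rise is a stronger trigger than one reading
```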


Important Note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.