Turn point forecasts into actionable probability distributions. Explore intervals, quantiles, and event probabilities; measure calibration; and make better-informed decisions under uncertainty.
Use this sample to verify your setup and expected outputs.
| Distribution | Mean / μ(log) | SD / σ(log) | Threshold | Expected P(event) | Typical 90% Interval |
|---|---|---|---|---|---|
| Normal | 100 | 15 | 120 | ≈ 9.12% | [75.33, 124.67] |
| Lognormal | 4.60 | 0.25 | 120 | ≈ 22.7% | ≈ [65.9, 150.1] (positive-only) |
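As a quick setup check, the table's values can be reproduced with SciPy. A minimal sketch; note that SciPy's lognormal uses `s` for σ(log) and `scale = exp(μ(log))`:

```python
import numpy as np
from scipy.stats import lognorm, norm

# Normal forecast: mean 100, SD 15, threshold 120
p_event = norm.sf(120, loc=100, scale=15)        # P(X > 120) ≈ 0.0912
lo, hi = norm.interval(0.90, loc=100, scale=15)  # central 90% interval ≈ [75.33, 124.67]

# Lognormal forecast: mu(log) = 4.60, sigma(log) = 0.25
# SciPy convention: s = sigma(log), scale = exp(mu(log))
p_event_ln = lognorm.sf(120, s=0.25, scale=np.exp(4.60))
lo_ln, hi_ln = lognorm.interval(0.90, s=0.25, scale=np.exp(4.60))

print(f"Normal:    P≈{p_event:.2%}, 90% interval ≈ [{lo:.2f}, {hi:.2f}]")
print(f"Lognormal: P≈{p_event_ln:.2%}, 90% interval ≈ [{lo_ln:.1f}, {hi_ln:.1f}]")
```

If your own setup produces materially different numbers for these inputs, check units and whether parameters were entered on the log scale.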
Single-number forecasts hide risk. A distribution communicates expected value and uncertainty, so teams can price safety buffers, set alert thresholds, and quantify downside exposure. A calibrated 90% interval should contain outcomes about nine times out of ten, which is more actionable than a vague “high confidence” label.
The mean is the central tendency of your point forecast. The standard deviation controls spread and should be estimated from recent residuals by horizon, not intuition alone. If the target is strictly positive and skewed, a lognormal assumption can better match demand, latency, or cost behavior. Use consistent units and avoid mixing log-space parameters with real-scale expectations.
Each confidence level maps to two quantiles: Q(α/2) and Q(1−α/2), where α = 1 − confidence. Interval width is a quick proxy for uncertainty; narrower is preferable only if coverage remains reliable. Compare widths across candidate models using the same quantile set. Median and interquartile range summarize typical dispersion, while 5th and 95th percentiles reveal tail risk relevant to service levels.
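That quantile mapping can be sketched directly for a normal forecast (function name is illustrative, not the tool's API):

```python
from scipy.stats import norm

def central_interval(mean, sd, confidence):
    """Map a confidence level to the quantile pair Q(alpha/2), Q(1 - alpha/2)."""
    alpha = 1.0 - confidence
    return (norm.ppf(alpha / 2, loc=mean, scale=sd),
            norm.ppf(1 - alpha / 2, loc=mean, scale=sd))

lo, hi = central_interval(100, 15, 0.90)
width = hi - lo  # proxy for uncertainty; compare models on the same quantile set

# Median/IQR for typical dispersion, 5th/95th for tail risk
q05, q50, q95 = (norm.ppf(t, loc=100, scale=15) for t in (0.05, 0.50, 0.95))
```

The same α-to-quantile mapping applies to any distribution; only the `ppf` changes.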
When you provide an observed value, the tool returns proper scoring rules that reward honest uncertainty. Negative log-likelihood strongly penalizes overconfident densities that miss outcomes. Pinball loss evaluates quantile forecasts and exposes asymmetric errors, such as consistent underprediction in upper tails. Brier score evaluates threshold events, enabling principled alert tuning and decision calibration. For normal forecasts, CRPS gives a single, scale-aware accuracy number.
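The four scores can be sketched for a Normal(μ, σ) forecast and one observation. This is a minimal illustration of the standard definitions, not the tool's exact implementation:

```python
import numpy as np
from scipy.stats import norm

def nll(y, mu, sd):
    """Negative log-likelihood: punishes narrow densities that miss y."""
    return -norm.logpdf(y, loc=mu, scale=sd)

def pinball(y, q, tau):
    """Pinball loss for a predicted tau-quantile q."""
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

def brier(p_event, occurred):
    """Brier score for a threshold-event probability (occurred is True/False)."""
    return (p_event - float(occurred)) ** 2

def crps_normal(y, mu, sd):
    """Closed-form CRPS for a Normal forecast; lower is better."""
    z = (y - mu) / sd
    return sd * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
```

For a perfectly centered observation (y = μ), CRPS reduces to σ·(2/√(2π) − 1/√π) ≈ 0.234σ, so it scales with the units of the target.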
Use event probability to estimate the chance demand exceeds a staffing limit or costs exceed a budget cap. Use the median for typical planning, then choose upper quantiles to size capacity and contingency. Export intervals and quantiles to reports, and monitor score trends weekly: rising scores, widening intervals, or shifted probabilities can signal drift, prompting retraining, feature refresh, or a new uncertainty model. For backtesting, store observed values alongside forecast parameters and recompute scores on a rolling window. If 90% intervals capture only 75% of outcomes over time, inflate SD or recalibrate; if they capture 99%, you are likely too conservative and can tighten uncertainty to improve sharpness without sacrificing coverage.
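A rolling coverage check along these lines can flag miscalibration; the normal assumption, column layout, and window size are illustrative choices:

```python
import numpy as np
from scipy.stats import norm

def rolling_coverage(mu, sd, observed, confidence=0.90, window=30):
    """Fraction of observations inside the central interval, per trailing window."""
    z = norm.ppf(0.5 + confidence / 2)
    hit = ((observed >= mu - z * sd) & (observed <= mu + z * sd)).astype(float)
    return np.convolve(hit, np.ones(window) / window, mode="valid")

# Coverage drifting toward 0.75 -> inflate SD; toward 0.99 -> tighten it.
```

Run this over stored (mean, SD, observed) rows and alert when the trailing coverage leaves a band around the nominal level.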
Use recent forecast errors for the same horizon. Compute residuals (actual minus mean forecast), then use their standard deviation. Update regularly, and consider separate SD values for weekdays, seasons, or segments if error behavior changes.
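A minimal sketch of that estimate (array names are illustrative):

```python
import numpy as np

def residual_sd(actuals, mean_forecasts):
    """Sample SD of recent residuals at one horizon; use as the forecast SD."""
    residuals = np.asarray(actuals, dtype=float) - np.asarray(mean_forecasts, dtype=float)
    return residuals.std(ddof=1)

# Segment if error behavior differs, e.g.:
# sd_weekday = residual_sd(actual[weekday_mask], forecast[weekday_mask])
```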
Choose lognormal when the target cannot be negative and shows right skew, such as demand, durations, or costs. Enter μ and σ on the log scale; the tool then produces positive-only quantiles and intervals.
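On the log scale the quantile mapping is simply Q(p) = exp(μ + σ·z_p), which is positive for any p. A sketch:

```python
import numpy as np
from scipy.stats import norm

def lognormal_quantile(p, mu_log, sigma_log):
    """Positive-only quantile from log-scale parameters mu and sigma."""
    return np.exp(mu_log + sigma_log * norm.ppf(p))

q05, q50, q95 = (lognormal_quantile(p, 4.60, 0.25) for p in (0.05, 0.50, 0.95))
```

Note the median is exp(μ), not the mean, because the exponential preserves quantiles but not expectations.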
Intervals widen when SD is large or confidence is high. Check units, horizon alignment, and outliers in residuals. If your model is calibrated and still wide, the process may truly be volatile, and operational buffers should reflect that.
Lower is better. It penalizes forecasts that assign low density to what actually happened, especially when the distribution is narrow. Compare scores across models on the same dataset; avoid comparing across different targets or units.
Pinball loss evaluates quantiles. If upper-tail losses are consistently high, the model underestimates extreme outcomes. Adjust features, recalibrate, or increase uncertainty. Use multiple τ values to diagnose where the distribution misses.
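A per-τ breakdown along these lines shows where the distribution misses (the τ grid is an illustrative choice):

```python
import numpy as np

def mean_pinball(y, q, tau):
    """Average pinball loss for predicted tau-quantiles q against outcomes y."""
    d = np.asarray(y, dtype=float) - np.asarray(q, dtype=float)
    return float(np.mean(np.where(d >= 0, tau * d, (tau - 1) * d)))

# Evaluate several tau levels; a spike at tau = 0.9 or 0.95 suggests the
# upper tail is underestimated.
taus = (0.05, 0.25, 0.50, 0.75, 0.95)
```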
Treat it as a risk estimate for crossing a threshold, like exceeding capacity. Combine it with impact to form expected loss. Track Brier score to ensure probabilities are calibrated; poor calibration can cause too many false alarms or missed events.
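A sketch of combining probability with impact, and of tracking calibration over a history of events; the cost figure is a made-up assumption:

```python
def expected_loss(p_exceed, cost_if_exceeded):
    """Expected loss from crossing the threshold: probability times impact."""
    return p_exceed * cost_if_exceeded

def brier_score(probs, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes; lower is better."""
    return sum((p - float(o)) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# e.g. P(demand > capacity) = 0.22, illustrative cost of an exceedance = $5000
loss = expected_loss(0.22, 5000.0)
```

Alert when `loss` exceeds the cost of mitigation, and watch `brier_score` over time to confirm the probabilities feeding it remain calibrated.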