| Period | Observed value | Note |
|---|---|---|
| T1 | 120 | Start |
| T2 | 128 | History |
| T3 | 133 | History |
| T4 | 129 | History |
| T5 | 142 | History |
| T6 | 150 | History |
| T7 | 147 | History |
| T8 | 155 | History |
| T9 | 160 | History |
| T10 | 158 | History |
| T11 | 170 | History |
| T12 | 176 | Latest |
- Paste your series in the values box (oldest to newest).
- Choose a method: use Naive for a baseline, Holt for trend, and Holt‑Winters for seasonality.
- Set horizon to match your planning window (days, weeks, months).
- Tune parameters: raise alpha so the level tracks fast shifts; lower phi to damp the long-range trend.
- Review metrics (MAE/RMSE/MAPE) to compare approaches.
- Export results using CSV or PDF for sharing.
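The steps above can be sketched with the sample series from the table. This is a minimal illustration using the Naive method (repeat the latest value) and a one-step MAE, not the calculator's internal code:

```python
# Sample series from the table above (T1..T12), oldest to newest.
series = [120, 128, 133, 129, 142, 150, 147, 155, 160, 158, 170, 176]

horizon = 6  # planning window in steps

# Naive forecast: repeat the latest observation for every future step.
forecast = [series[-1]] * horizon

# One-step residuals for the baseline: each prediction is the prior value.
residuals = [actual - prev for prev, actual in zip(series, series[1:])]
mae = sum(abs(r) for r in residuals) / len(residuals)

print(forecast)  # [176, 176, 176, 176, 176, 176]
print(round(mae, 2))
```

Any tuned method should beat this baseline MAE on the same residuals before you trust its forecasts.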
Horizon design and sampling frequency
Multi-step forecasting starts by matching the horizon to how decisions are made. If you plan weekly staffing, a 6–12 step weekly horizon is usually more actionable than daily noise. For monthly revenue planning, 3–6 steps can capture near-term momentum while limiting compounding uncertainty. Keep the series ordered oldest to newest, and avoid mixing granularities; even a single missing period can distort trend and seasonal estimates.
Method selection and baseline discipline
Use Naive or Seasonal Naive as a reality check before tuning smoothing. If a sophisticated method cannot beat the baseline MAE or RMSE, the added complexity rarely pays off in production. Moving Average works well for short horizons when volatility is high and trends are weak. Trend smoothing is a strong default when you expect persistent growth or decline, while additive seasonality is best when cycles repeat with roughly constant amplitude.
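This baseline check can be sketched as a holdout comparison. The function names below are illustrative, not the calculator's API, and the toy series is assumed for the example:

```python
def naive_forecast(series, h):
    """Repeat the last observation h times."""
    return [series[-1]] * h

def seasonal_naive_forecast(series, h, m):
    """Repeat the last full season of length m."""
    last_season = series[-m:]
    return [last_season[i % m] for i in range(h)]

def mae(actuals, preds):
    return sum(abs(a - p) for a, p in zip(actuals, preds)) / len(actuals)

# Hold out the last cycle and ask: does any candidate beat the baselines?
series = [10, 14, 9, 12, 11, 15, 10, 13]
train, test = series[:-4], series[-4:]
baseline = naive_forecast(train, 4)
seasonal = seasonal_naive_forecast(train, 4, m=4)
print(mae(test, baseline), mae(test, seasonal))  # 1.75 1.0
```

Here the seasonal baseline wins, so a tuned seasonal method would need an MAE below 1.0 to justify its extra complexity.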
Parameter tuning with stability in mind
Alpha updates the level; values around 0.10–0.40 often balance responsiveness and stability. Beta controls trend adaptation; smaller values reduce overreaction to transient spikes. Phi damps long‑range trend; 0.80–0.95 commonly prevents runaway forecasts when the horizon is long. For seasonal smoothing, set season length to your true cycle (7 daily, 12 monthly, 24 hourly) and keep gamma moderate so seasonality changes slowly.
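A damped-trend update along these lines can be sketched as follows. This is a generic Holt-style recursion with damping, assuming a common initialization (first value as level, first difference as trend); the calculator's exact formulation may differ:

```python
def holt_damped(series, alpha=0.3, beta=0.1, phi=0.9, h=6):
    """Damped-trend exponential smoothing: a minimal sketch.

    alpha updates the level, beta the trend, and phi < 1 flattens the
    projected trend so long-horizon forecasts do not run away.
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    # h-step forecast: level + (phi + phi^2 + ... + phi^step) * trend
    forecasts, damp_sum = [], 0.0
    for step in range(1, h + 1):
        damp_sum += phi ** step
        forecasts.append(level + damp_sum * trend)
    return forecasts

fc = holt_damped([120, 128, 133, 129, 142, 150, 147, 155, 160, 158, 170, 176])
```

With phi = 1 this reduces to plain Holt smoothing; lowering phi toward 0.80 shrinks each additional step's trend contribution, which is why the long-range forecast flattens.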
Quality signals you can compare quickly
This calculator reports MAE, RMSE, and MAPE from one‑step residuals to help you compare methods on the same data. MAE is robust to outliers, RMSE penalizes large misses, and MAPE is interpretable for scale‑free reporting when values are non‑zero. When metrics disagree, favor the one aligned to your cost function: for service levels, large errors may be disproportionately expensive, making RMSE the better guide.
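The three metrics can be computed from the same paired residuals, as in this sketch (the function name and sample numbers are illustrative):

```python
import math

def one_step_metrics(actuals, preds):
    """MAE, RMSE, and MAPE over paired one-step forecasts and actuals."""
    errors = [a - p for a, p in zip(actuals, preds)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # MAPE skips zero actuals, where the percentage is undefined.
    mape = 100 * sum(abs(e / a) for e, a in zip(errors, actuals) if a != 0) / n
    return mae, rmse, mape

mae, rmse, mape = one_step_metrics([100, 110, 95], [98, 104, 100])
```

Note that RMSE is always at least as large as MAE; a growing gap between them signals a few large misses, which matters when big errors are disproportionately costly.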
Intervals and decision thresholds
Forecast intervals use a practical error‑growth rule where uncertainty scales with √h, so bands widen as you look further ahead. Treat the lower and upper bounds as planning rails, not guarantees. If negative values are impossible, clipping at zero keeps scenarios realistic. A useful operational practice is to define triggers: for example, if the upper bound exceeds capacity by 10%, pre‑approve an expansion plan; if the lower bound drops below demand coverage, tighten spend.
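The √h error-growth rule and zero clipping can be sketched as below. The function name, z-value, and sigma are assumptions for illustration; sigma would come from the one-step residual standard deviation:

```python
import math

def forecast_intervals(point_forecasts, sigma, z=1.96, clip_zero=True):
    """Interval half-width grows with sqrt(h): a sketch of the rule above.

    sigma is the one-step residual standard deviation; z = 1.96 gives an
    approximate 95% band under a normal-error assumption.
    """
    bands = []
    for h, f in enumerate(point_forecasts, start=1):
        half = z * sigma * math.sqrt(h)
        lower = f - half
        if clip_zero:
            lower = max(0.0, lower)  # negative values impossible
        bands.append((lower, f, f + half))
    return bands

bands = forecast_intervals([180, 184, 188], sigma=5.0)
# Half-widths grow roughly as 9.8, 13.9, 17.0 for steps 1, 2, 3.
```

A capacity trigger then becomes a one-line check against the upper bound, e.g. `bands[h][2] > 1.10 * capacity`.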
FAQ 1: How many data points should I enter?
Enter at least 5 values, but aim for 20+ points when possible. For seasonal methods, provide at least two full cycles (for example, 24 points for a 12‑month season) to stabilize the seasonal pattern.
FAQ 2: When should I use additive seasonality?
Choose additive seasonality when peaks and troughs repeat with similar absolute size across time. If seasonal swings grow with the level, a multiplicative approach may fit better, but this tool focuses on stable‑amplitude cycles.
FAQ 3: How do I pick the season length value?
Set season length to the number of observations per repeating cycle: 7 for daily weekly patterns, 12 for monthly yearly patterns, 24 for hourly daily patterns. If unsure, test a few candidates and compare MAE or RMSE.
FAQ 4: Why do prediction intervals get wider at longer horizons?
Each step adds uncertainty from residual errors. A common approximation assumes error variance grows linearly with the horizon, so the standard error scales with √h. This is why the bounds expand as the step increases.
FAQ 5: My MAPE looks strange; what does that mean?
MAPE can be unstable when actual values are near zero. If your series includes zeros or tiny numbers, rely more on MAE and RMSE, or rescale the data. Compare methods using the same metric consistently.
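A tiny example makes the instability concrete; the numbers are assumed for illustration:

```python
# MAPE blow-up near zero: one tiny actual dominates the average.
actuals = [0.5, 100, 102]
preds = [2.0, 98, 104]
ape = [abs(a - p) / abs(a) * 100 for a, p in zip(actuals, preds)]
mape = sum(ape) / len(ape)
# The 0.5 actual alone contributes a 300% term, even though the
# absolute miss there is only 1.5, so MAPE lands above 100%.
```

MAE for the same data is about 1.8, which reflects the forecast quality far more fairly here.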
FAQ 6: Why are the download buttons disabled sometimes?
Downloads require a computed result in the current session. Run the forecast once, then use CSV or PDF to export the same table and settings. Refreshing or opening a new tab may clear the saved result.