Forecast Accuracy Tool

Turn forecasts into clear accuracy insights for teams. Compare forecasts against actuals, detect bias, and improve decisions. Download tables, charts, and reports in one click.

Enter Data
Paste pairs as lines: Actual,Forecast
Tip: Use comma-separated values, or switch delimiter below.
Pick the character separating Actual and Forecast.
Use 1 for non-seasonal data, 7 for weekly patterns, 12 for monthly, etc.
Each line must contain two numbers: actual and forecast.
Example Data
Period  Actual  Forecast
1       120     110
2       135     140
3       128     125
4       142     138
5       150     155
Copy these rows into the input to see how metrics behave.
Formula Used
Core definitions
  • Error eₜ = Aₜ − Fₜ
  • Absolute Error = |eₜ|
  • Squared Error = eₜ²
Accuracy metrics
  • MAE = (1/n) Σ|eₜ|
  • MSE = (1/n) Σ eₜ²
  • RMSE = √MSE
  • ME = (1/n) Σ eₜ (bias)
Percentage metrics
  • MAPE = (1/m) Σ |eₜ/Aₜ| × 100, over the m rows where Aₜ ≠ 0
  • sMAPE = (1/k) Σ 200|eₜ|/(|Aₜ|+|Fₜ|), over the k rows where |Aₜ|+|Fₜ| ≠ 0
  • WAPE = Σ|eₜ| / Σ|Aₜ| × 100
  • MPE = (1/m) Σ (eₜ/Aₜ) × 100, over the same m rows
Bias monitoring
  • CFE = Σ eₜ
  • Tracking Signal = CFE / MAD, where MAD (mean absolute deviation) equals MAE here
  • MASE = MAE / mean(|Aₜ − Aₜ₋ₚ|) for chosen period p
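The definitions above can be sketched directly in Python. This is a minimal, self-contained illustration using the example data from the table (variable names are illustrative, not the tool's actual implementation):

```python
import math

# Example pairs from the table above: (actual, forecast)
pairs = [(120, 110), (135, 140), (128, 125), (142, 138), (150, 155)]

actuals = [a for a, f in pairs]
errors = [a - f for a, f in pairs]           # e_t = A_t - F_t
n = len(errors)

mae = sum(abs(e) for e in errors) / n        # Mean Absolute Error
mse = sum(e * e for e in errors) / n         # Mean Squared Error
rmse = math.sqrt(mse)                        # Root Mean Squared Error
me = sum(errors) / n                         # Mean Error (bias)

# Percentage metrics skip rows where the actual is zero
nonzero = [(e, a) for e, a in zip(errors, actuals) if a != 0]
mape = 100 * sum(abs(e / a) for e, a in nonzero) / len(nonzero)
mpe = 100 * sum(e / a for e, a in nonzero) / len(nonzero)
wape = 100 * sum(abs(e) for e in errors) / sum(abs(a) for a in actuals)

cfe = sum(errors)                            # Cumulative Forecast Error
tracking_signal = cfe / mae                  # MAD taken equal to MAE here

# MASE against the seasonal naive benchmark, period p = 1
p = 1
naive = [abs(actuals[t] - actuals[t - p]) for t in range(p, n)]
mase = mae / (sum(naive) / len(naive))

print(f"MAE={mae:.2f} RMSE={rmse:.2f} ME={me:.2f} "
      f"MAPE={mape:.2f}% WAPE={wape:.2f}% MASE={mase:.2f}")
```

Each line maps one-to-one onto a formula above, so you can cross-check the tool's output against this sketch.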
How to Use This Calculator
  1. Choose a delimiter that matches your pasted data.
  2. Set a seasonal period for MASE if your series repeats.
  3. Paste one pair per line: actual first, forecast second.
  4. Press Submit to see metrics above the form.
  5. Export CSV or PDF for sharing, audits, and reporting.
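Steps 1–3 amount to splitting each pasted line on the chosen delimiter and skipping rows that do not yield two numbers. A hedged sketch of that parsing logic (the function name and error handling are assumptions, not the tool's actual code):

```python
def parse_pairs(text, delimiter=","):
    """Parse 'Actual,Forecast' lines; collect line numbers of bad rows."""
    pairs, skipped = [], []
    for lineno, line in enumerate(text.strip().splitlines(), start=1):
        parts = line.split(delimiter)
        try:
            # Unpacking fails unless exactly two numeric fields are present
            a, f = (float(p.strip()) for p in parts)
            pairs.append((a, f))
        except ValueError:
            skipped.append(lineno)  # reported back as an error message
    return pairs, skipped

pairs, skipped = parse_pairs("120,110\n135,140\noops\n128,125")
```

Here the third line is non-numeric, so it lands in `skipped` while the valid rows are kept.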

Why accuracy metrics matter in forecasting workflows

Forecasting is only useful when teams can trust the size, direction, and stability of errors over time. This tool turns paired actual and forecast values into repeatable evidence for planning, inventory, staffing, and budgeting. By summarizing errors across many periods, you can separate random noise from systematic bias and decide whether to adjust models, data inputs, or business assumptions.

Interpreting MAE, RMSE, and percentage measures

MAE reports the typical miss in original units, which helps operational teams translate accuracy into cost or capacity. RMSE increases when a few periods have very large mistakes, so it highlights risk and volatility. MAPE is intuitive as a percent, but it ignores rows where the actual value is zero; sMAPE and WAPE provide alternatives that remain usable across different scales and mixed magnitudes.
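The MAE-versus-RMSE distinction is easy to see numerically. In this small sketch, two error series have similar MAE, but the one with a single large miss has a much higher RMSE (the data is invented for illustration):

```python
import math

def mae(errs):
    return sum(abs(e) for e in errs) / len(errs)

def rmse(errs):
    return math.sqrt(sum(e * e for e in errs) / len(errs))

steady = [4, -4, 4, -4]    # consistent small misses
spiky = [1, -1, 1, -15]    # mostly tiny misses, one big one

# MAE is close for both series, but RMSE flags the outlier-prone one
print(mae(steady), rmse(steady))
print(mae(spiky), rmse(spiky))
```

When RMSE diverges from MAE like this, a few periods are driving most of the squared error, which is exactly the volatility signal described above.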

Bias diagnostics using ME, CFE, and tracking signal

Accuracy alone can hide directional problems. Mean Error (ME) indicates whether forecasts run high or low on average, while CFE accumulates those errors to show drift. Tracking Signal divides CFE by MAD, giving a standardized indicator of sustained bias. Large positive values typically suggest under-forecasting, and large negative values suggest over-forecasting, prompting review of assumptions and recent demand shifts.
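Drift detection via the tracking signal can be sketched as follows. The two error series are invented: one cancels out (unbiased), one runs consistently positive (under-forecasting):

```python
def tracking_signal(errors):
    """CFE / MAD, with MAD taken as the mean absolute error."""
    cfe = sum(errors)
    mad = sum(abs(e) for e in errors) / len(errors)
    return cfe / mad

unbiased = [5, -4, 3, -5, 2, -1]   # errors cancel; no drift
drifting = [5, 4, 6, 3, 5, 4]      # persistently positive: actuals above forecasts
```

The unbiased series yields a tracking signal of zero, while the drifting series produces a large positive value, the pattern that flags sustained under-forecasting.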

Comparing performance with MASE and seasonal period

MASE scales MAE against a simple seasonal naive benchmark, using a user-chosen period p. When MASE is below one, your approach beats the naive baseline; above one, the naive baseline outperformed your forecasts. Setting p to 7 for daily data with weekly seasonality, or 12 for monthly data, makes comparisons fair across products, regions, and time horizons.
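A minimal sketch of the MASE calculation, applied to the example data above with p = 1 (the function is illustrative, not the tool's implementation):

```python
def mase(actuals, forecasts, p=1):
    """MAE scaled by the seasonal naive benchmark with period p."""
    n = len(actuals)
    errors = [a - f for a, f in zip(actuals, forecasts)]
    mae = sum(abs(e) for e in errors) / n
    # Seasonal naive: forecast each period with the value p periods earlier
    naive = [abs(actuals[t] - actuals[t - p]) for t in range(p, n)]
    return mae / (sum(naive) / len(naive))

actuals = [120, 135, 128, 142, 150]
forecasts = [110, 140, 125, 138, 155]
score = mase(actuals, forecasts, p=1)
```

For this series the score comes out well below one, meaning the forecasts beat the naive baseline.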

Turning results into actions and continuous improvement

Use the row-level table to spot outliers, promotions, stockouts, or one-time events that distort averages. Combine MAE or WAPE with business thresholds to define acceptable error bands per SKU or segment. If RMSE rises while MAE stays steady, prioritize reducing extreme misses. Export CSV for audits and collaboration, and share PDF summaries in review meetings to document decisions and track progress. Recalculate after each model update, and keep the same data window to ensure comparisons remain valid and stable.

FAQs

What input format does the tool accept?

Enter one pair per line as Actual and Forecast. Choose a delimiter such as comma, semicolon, pipe, or tab. Non‑numeric rows are skipped with an error message.

Why is my MAPE shown as N/A?

MAPE is undefined when actual values are zero because it divides by the actual. If many rows have zero actuals, rely on sMAPE or WAPE instead.
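One way to sketch this N/A behavior, assuming the tool simply filters out zero-actual rows before averaging (function names are hypothetical):

```python
def mape_or_none(actuals, forecasts):
    """MAPE over rows with nonzero actuals; None when no such row exists."""
    rows = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    if not rows:
        return None  # rendered as N/A in the tool
    return 100 * sum(abs((a - f) / a) for a, f in rows) / len(rows)

def wape(actuals, forecasts):
    """Weighted alternative that tolerates individual zero actuals."""
    total_err = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return 100 * total_err / sum(abs(a) for a in actuals)
```

With actuals `[0, 0, 10]`, MAPE averages only the last row, while WAPE still uses every row's error, which is why it stays usable on intermittent demand.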

Which metric should I use for operational planning?

Use MAE for typical unit error, and WAPE for a weighted percent across many items. If extreme misses are costly, add RMSE to highlight tail risk.

How do I interpret tracking signal values?

Tracking Signal is CFE divided by MAD. Values far from zero indicate sustained bias. Positive usually means forecasts are too low, negative means too high.

What seasonal period should I set for MASE?

Set p to the length of your repeating cycle: 7 for weekly patterns in daily data, 12 for monthly patterns, or 1 if you do not expect seasonality.

Do exports include my full dataset?

Yes. CSV includes the metrics plus every row-level error. PDF includes the metrics and the first portion of rows for readability.

Built for practical forecasting reviews and continuous improvement.

Related Calculators

Moving Average Calculator
Time Series Forecast Tool
Stationarity Test Tool
Holt Winters Tool
Seasonal Index Calculator
Differencing Calculator
Anomaly Detection Tool
Fourier Transform Tool
Spectral Density Tool
Change Point Detector

Important Note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.