Calculator Inputs
Use commas, spaces, tabs, or newlines between values. Labels are optional. The seasonality setting powers the MASE benchmark.
Example Data Table
This sample demonstrates how actual demand and predicted demand align over multiple periods.
| Period | Actual | Forecast | Error | Absolute Error |
|---|---|---|---|---|
| P1 | 120 | 118 | -2 | 2 |
| P2 | 128 | 130 | 2 | 2 |
| P3 | 133 | 131 | -2 | 2 |
| P4 | 140 | 142 | 2 | 2 |
| P5 | 146 | 145 | -1 | 1 |
| P6 | 150 | 149 | -1 | 1 |
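The table's Error and Absolute Error columns can be reproduced in a few lines of Python (the lists mirror the sample rows; variable names are illustrative):

```python
actual   = [120, 128, 133, 140, 146, 150]
forecast = [118, 130, 131, 142, 145, 149]

errors = [f - a for a, f in zip(actual, forecast)]  # Error = Forecast - Actual
abs_errors = [abs(e) for e in errors]               # Absolute Error

print(errors)      # [-2, 2, -2, 2, -1, -1]
print(abs_errors)  # [2, 2, 2, 2, 1, 1]
```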
Formula Used
Forecast accuracy analysis compares predicted values with observed outcomes. These measures help machine learning teams evaluate the magnitude, direction, and stability of prediction error.
Error = Forecast - Actual
Absolute Error = |Forecast - Actual|
Squared Error = (Forecast - Actual)²
ME = Σ(Forecast - Actual) / n
MAE = Σ|Forecast - Actual| / n
MSE = Σ(Forecast - Actual)² / n
RMSE = √MSE
MAPE = [Σ(|Forecast - Actual| / |Actual|) × 100] / valid periods
sMAPE = [Σ(200 × |Forecast - Actual| / (|Actual| + |Forecast|))] / valid periods
WAPE = [Σ|Forecast - Actual| / Σ|Actual|] × 100
Bias % = [Σ(Forecast - Actual) / Σ|Actual|] × 100
Tracking Signal = Σ(Forecast - Actual) / MAE
R² = 1 - [Σ(Actual - Forecast)² / Σ(Actual - Mean Actual)²]
MASE = MAE / Mean Absolute Seasonal Naive Error
Forecast Accuracy % = max(0, 100 - MAPE)
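Applied to the sample table, the core formulas above can be sketched in plain Python, with no libraries beyond the standard `math` module (variable names are illustrative, not the calculator's internals):

```python
import math

actual   = [120, 128, 133, 140, 146, 150]
forecast = [118, 130, 131, 142, 145, 149]
n = len(actual)

errors = [f - a for a, f in zip(actual, forecast)]

me   = sum(errors) / n                  # Mean Error (ME)
mae  = sum(abs(e) for e in errors) / n  # Mean Absolute Error (MAE)
mse  = sum(e * e for e in errors) / n   # Mean Squared Error (MSE)
rmse = math.sqrt(mse)                   # Root Mean Squared Error (RMSE)

# Percentage metrics skip periods where Actual == 0 ("valid periods").
valid = [(a, e) for a, e in zip(actual, errors) if a != 0]
mape  = sum(abs(e) / abs(a) for a, e in valid) * 100 / len(valid)
smape = sum(200 * abs(e) / (abs(a) + abs(f))
            for a, f, e in zip(actual, forecast, errors)) / n
wape  = sum(abs(e) for e in errors) / sum(abs(a) for a in actual) * 100
bias  = sum(errors) / sum(abs(a) for a in actual) * 100

mean_a = sum(actual) / n
r2 = 1 - sum(e * e for e in errors) / sum((a - mean_a) ** 2 for a in actual)

print(f"ME={me:.3f} MAE={mae:.3f} RMSE={rmse:.3f}")
print(f"MAPE={mape:.2f}% sMAPE={smape:.2f}% WAPE={wape:.2f}% Bias={bias:.2f}%")
print(f"R2={r2:.4f}")
```

On this data MAE is about 1.67, RMSE about 1.73, and bias slightly negative, matching the near-balanced errors in the table.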
Lower values of MAE, RMSE, MAPE, sMAPE, WAPE, MASE, and Theil’s U1 generally indicate stronger forecasting performance. A bias close to zero suggests balanced over- and under-prediction.
How to Use This Calculator
- Enter the actual series in the first box.
- Enter matching forecast values in the second box.
- Optionally add labels such as days, weeks, months, or batch names.
- Set a seasonality period if you want MASE against a seasonal naive benchmark.
- Choose your tolerance percent to measure how often errors stay within an acceptable range.
- Set decimal precision for cleaner reporting.
- Press Calculate Accuracy to display summary cards, detailed tables, and the graph.
- Use the CSV or PDF buttons to export the final report.
Frequently Asked Questions
1. What does forecast accuracy mean?
Forecast accuracy shows how closely predictions match real outcomes. It measures average error, percentage error, bias, and trend alignment so you can judge model reliability.
2. Which metric is best for machine learning forecasts?
No single metric fits every case. MAE is intuitive, RMSE penalizes larger misses, MAPE explains percentage error, and MASE compares against a naive baseline.
3. Why can MAPE return N/A?
MAPE uses actual values in the denominator. If an actual value equals zero, the percentage error becomes undefined for that row, so the calculator skips it.
4. What does bias tell me?
Bias shows whether forecasts systematically overestimate or underestimate. Positive bias means forecasts trend high. Negative bias means they trend low over time.
5. When should I use MASE?
Use MASE when you want scale-free comparison across products, regions, or models. It is especially useful when actual values differ widely or include low volumes.
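A minimal sketch of the MASE computation, assuming a seasonal naive benchmark that predicts the value from `season` periods earlier (`season=1` is the plain naive forecast):

```python
def mase(actual, forecast, season=1):
    """MAE of the forecast divided by the in-sample MAE of a seasonal
    naive forecast (predict the value from `season` periods earlier)."""
    n = len(actual)
    mae = sum(abs(f - a) for a, f in zip(actual, forecast)) / n
    naive_errors = [abs(actual[t] - actual[t - season]) for t in range(season, n)]
    naive_mae = sum(naive_errors) / len(naive_errors)
    return mae / naive_mae

actual   = [120, 128, 133, 140, 146, 150]
forecast = [118, 130, 131, 142, 145, 149]
print(round(mase(actual, forecast), 3))  # 0.278 — well below 1, beats the naive benchmark
```

Values below 1 mean the forecast outperforms the naive benchmark; values above 1 mean the naive benchmark would have done better.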
6. What does directional accuracy measure?
Directional accuracy checks whether the forecast correctly predicts the movement direction between periods. It is helpful when change direction matters more than exact value size.
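One common definition compares the sign of the forecast's period-to-period change with the sign of the actual change; a hedged sketch:

```python
def directional_accuracy(actual, forecast):
    """Percent of consecutive-period moves whose direction (sign of the
    change, not its size) the forecast predicts correctly."""
    sign = lambda x: (x > 0) - (x < 0)
    hits = sum(1 for t in range(1, len(actual))
               if sign(actual[t] - actual[t - 1]) == sign(forecast[t] - forecast[t - 1]))
    return hits / (len(actual) - 1) * 100

actual   = [120, 128, 133, 140, 146, 150]
forecast = [118, 130, 131, 142, 145, 149]
print(directional_accuracy(actual, forecast))  # 100.0 — both series rise every period
```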
7. Why include tolerance percentage?
Tolerance percentage shows how often prediction error stays inside an acceptable operational band. It is practical for service planning, inventory, and alert thresholds.
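Assuming the tolerance band is defined relative to each period's actual value, the hit rate can be sketched like this:

```python
def within_tolerance(actual, forecast, tolerance_pct):
    """Percent of periods where |error| is within tolerance_pct% of the actual."""
    hits = sum(1 for a, f in zip(actual, forecast)
               if a != 0 and abs(f - a) <= abs(a) * tolerance_pct / 100)
    return hits / len(actual) * 100

actual   = [120, 128, 133, 140, 146, 150]
forecast = [118, 130, 131, 142, 145, 149]
print(within_tolerance(actual, forecast, 2))  # 100.0 — every error fits a 2% band
print(round(within_tolerance(actual, forecast, 1), 1))  # 33.3 — only P5 and P6 fit 1%
```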
8. Can I use this for demand, traffic, or energy forecasting?
Yes. The calculator works for any matched actual and predicted series, including demand planning, server traffic, sales, energy load, and model validation tasks.