Example data table
This sample includes a zero Actual value to illustrate handling options.
| # | Actual | Forecast | APE (%) |
|---|---|---|---|
| 1 | 100 | 90 | 10.00 |
| 2 | 120 | 110 | 8.33 |
| 3 | 130 | 140 | 7.69 |
| 4 | 0 | 10 | — (Actual is zero) |
| 5 | 150 | 160 | 6.67 |
Formulas used
Absolute Percentage Error (APE) for each row:
APE_i = |Actual_i − Forecast_i| / |Actual_i| × 100
Mean Absolute Percentage Error (MAPE) across n valid rows:
MAPE = (1/n) × Σ APE_i
When Actual is near zero, the denominator becomes unstable, so this calculator provides exclusion, epsilon replacement, or invalid marking.
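The two formulas above can be sketched in a few lines of Python. This is a minimal illustration using the exclusion policy and the sample table; the function name `mape` is ours, not part of the calculator.

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, excluding rows where Actual is zero."""
    apes = [abs(a - f) / abs(a) * 100
            for a, f in zip(actuals, forecasts)
            if a != 0]  # exclusion policy: drop zero-actual rows
    return sum(apes) / len(apes)

# Sample data from the table above; row 4 (Actual = 0) is excluded.
actuals = [100, 120, 130, 0, 150]
forecasts = [90, 110, 140, 10, 160]
print(round(mape(actuals, forecasts), 2))
```

With row 4 excluded, the average runs over the four valid APE values from the table.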
How to use this calculator
- Pick an input mode: two lists, or one pair per line.
- Paste numeric values for Actual and Forecast.
- Select a policy for zero or near-zero Actual values.
- Choose decimal places and whether to show the row table.
- Click Calculate to view results above the form.
- Use Download CSV or Download PDF to export.
Why MAPE matters in evaluation
Mean Absolute Percentage Error (MAPE) summarizes average forecast deviation as a percentage of actuals. Because it is scale-free, teams can compare products, regions, or time horizons without converting units. It is especially useful when business reviews ask, “How wrong are we, on average?” However, MAPE can overemphasize low-volume periods, so it should be read alongside absolute-error metrics and segment-level breakdowns. Pair metrics with confidence intervals and error distributions so leadership sees uncertainty, not just a single average.
Understanding the computation pipeline
This calculator computes Absolute Percentage Error (APE) for each row, then averages APE across valid pairs. It also reports median APE to reduce the influence of extreme outliers, plus MAE and RMSE for magnitude-sensitive tracking. sMAPE is included as a bounded alternative, and WAPE provides a volume-weighted view that aligns well with operational cost. Ensure actual and forecast timestamps are aligned and missing values are handled consistently.
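The metric suite described above can be sketched as follows. This is an illustrative implementation using the standard textbook definitions (the calculator's internals may differ); the function and key names are ours.

```python
import math
import statistics

def metrics(actuals, forecasts):
    """Compute MAPE, median APE, MAE, RMSE, sMAPE, and WAPE for paired series."""
    apes = [abs(a - f) / abs(a) * 100
            for a, f in zip(actuals, forecasts)
            if a != 0]  # APE is undefined for zero actuals; exclude those rows
    errs = [f - a for a, f in zip(actuals, forecasts)]
    n = len(errs)
    return {
        "MAPE": sum(apes) / len(apes),
        "MedAPE": statistics.median(apes),          # robust to extreme rows
        "MAE": sum(abs(e) for e in errs) / n,
        "RMSE": math.sqrt(sum(e * e for e in errs) / n),
        # sMAPE: denominator uses both magnitudes, bounding each term
        "sMAPE": sum(abs(f - a) / ((abs(a) + abs(f)) / 2) * 100
                     for a, f in zip(actuals, forecasts)) / n,
        # WAPE: total absolute error over total actual volume
        "WAPE": sum(abs(e) for e in errs) / sum(abs(a) for a in actuals) * 100,
    }
```

On the sample table, every row misses by exactly 10 units, so MAE and RMSE coincide at 10.0 and WAPE is 10.0% (50 units of error over 500 units of volume).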
Handling zeros and near-zero actuals
MAPE divides by the actual value, so zeros and tiny actuals can create undefined or misleading percentages. The tool offers three policies: exclude near-zero rows, replace the denominator with an epsilon threshold, or mark those rows invalid. Excluding is conservative for audits, while epsilon supports sparse or intermittent demand. Use a documented epsilon and keep it constant across experiments, environments, and model versions.
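The three policies can be sketched per row like this. The policy names mirror the options described above, but the function signature and the default epsilon of 1.0 are our assumptions, chosen only to make the example concrete.

```python
def ape_with_policy(actual, forecast, policy="exclude", eps=1.0):
    """Return APE (%) for one row, or None when the policy drops/flags the row.
    eps is an assumed scale-dependent threshold; keep it fixed across experiments."""
    if abs(actual) > eps:
        return abs(actual - forecast) / abs(actual) * 100
    if policy in ("exclude", "invalid"):
        return None  # row is skipped (exclude) or flagged for review (invalid)
    if policy == "epsilon":
        return abs(actual - forecast) / eps * 100  # stabilized denominator
    raise ValueError(f"unknown policy: {policy}")
```

For row 4 of the sample table (Actual 0, Forecast 10), "exclude" drops the row, while "epsilon" with eps=1.0 reports 1000%, which shows why epsilon must be chosen to match the measurement scale.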
Interpreting outputs for model decisions
Lower MAPE indicates improved relative accuracy, but it is not a full risk picture. If MAPE improves while RMSE worsens, the model may be optimizing small values and failing on peaks. Compare mean error and MPE to detect systematic bias, such as consistent over-forecasting. Use the row-level table to spot regime shifts, seasonality misses, and data quality issues, then validate with holdout periods.
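The bias check described above can be sketched as a pair of signed metrics. This assumes the forecast-minus-actual sign convention, under which positive Mean Error indicates over-forecasting; the function name is ours.

```python
def bias_metrics(actuals, forecasts):
    """Mean Error and Mean Percentage Error, signed as forecast minus actual:
    positive values mean the forecast is, on average, above the actual."""
    errs = [f - a for a, f in zip(actuals, forecasts)]
    mean_error = sum(errs) / len(errs)
    pct = [(f - a) / a * 100
           for a, f in zip(actuals, forecasts)
           if a != 0]  # percentage bias over rows with nonzero actuals
    mpe = sum(pct) / len(pct)
    return mean_error, mpe
```

Note that the two can disagree: on the sample table, Mean Error is positive (the zero-actual row is over-forecast by 10 units) while MPE is slightly negative, because MPE skips that row and weights small actuals more heavily.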
Operational use and reporting
Paste values as two lists or as line-by-line pairs, select a zero-handling approach, and calculate to display results above the form. Export CSV for spreadsheets, monitoring dashboards, and reproducible experiments. Export PDF for approvals and non-technical stakeholders. For consistent reporting, evaluate on the same horizon, use the same filters, and record excluded counts to explain changes over time. Set alert thresholds on WAPE for business impact.
FAQs
1) When should I avoid using MAPE?
Avoid MAPE when actuals are frequently zero or extremely small. In those cases, percentage errors can explode and mislead comparisons. Consider WAPE, sMAPE, MAE, or RMSE, and report excluded counts transparently.
2) What does epsilon mode change?
Epsilon mode replaces tiny actual denominators with a fixed threshold. That prevents division-by-zero and stabilizes percentages for sparse series. Choose an epsilon aligned to your measurement scale, and keep it consistent across all model evaluations.
3) Why include sMAPE and WAPE?
sMAPE bounds the percentage using both actual and forecast magnitudes, reducing extreme values. WAPE weights errors by total actual volume, making it closer to business impact when high-volume periods matter more.
4) Can I use negative actual values?
Yes, but interpret percentage metrics carefully. This tool takes the absolute value of the error in APE, so the sign of an error does not flip APE, while bias metrics like MPE retain directionality. If negatives are meaningful, compare MAE and RMSE alongside MAPE.
5) How many rows do I need for stable results?
More is better. For noisy series, aim for dozens of points or more and evaluate across multiple windows. Small samples can swing MAPE dramatically, especially if a few rows are near zero or outliers dominate.
6) How do I tell if my model is biased?
Check Mean Error and MPE. Persistent positive Mean Error often indicates over-forecasting, while negative suggests under-forecasting. Confirm by reviewing the row-level table and plotting residuals by time, segment, or value range.