Audit predictions with clear metrics and residual checks. Test model quality, bias, and stability quickly. Turn raw outputs into reliable insights for better decisions.
| Observation | Actual | Predicted | Residual |
|---|---|---|---|
| 1 | 120 | 118 | 2 |
| 2 | 135 | 138 | -3 |
| 3 | 142 | 140 | 2 |
| 4 | 158 | 155 | 3 |
| 5 | 165 | 168 | -3 |
| 6 | 180 | 176 | 4 |
You can paste these values into the calculator to test the workflow quickly.
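If you prefer to check the arithmetic yourself, a minimal Python sketch computes the residual column from the table above:

```python
# Residuals for the sample table: residual = actual - predicted.
actual = [120, 135, 142, 158, 165, 180]
predicted = [118, 138, 140, 155, 168, 176]

residuals = [a - p for a, p in zip(actual, predicted)]
print(residuals)  # matches the Residual column: [2, -3, 2, 3, -3, 4]
```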
Residual (eᵢ) = Actualᵢ − Predictedᵢ
MAE = Σ|eᵢ| / n
MSE = Σ(eᵢ²) / n
RMSE = √MSE
MAPE = [Σ(|eᵢ| / |Actualᵢ|) × 100] / n₀, where n₀ is the number of observations with a nonzero actual value (the sum runs over those observations only)
sMAPE = [Σ(2 × |eᵢ| / (|Actualᵢ| + |Predictedᵢ|)) × 100] / n
R² = 1 − (SSE / SST)
Adjusted R² = 1 − [(1 − R²) × (n − 1) / (n − p − 1)]
Mean Error = Σ(Actualᵢ − Predictedᵢ) / n
Residual Std. Error = √[SSE / (n − p − 1)]
Durbin-Watson = Σ(eᵢ − eᵢ₋₁)² / Σeᵢ²
Here, n is the number of observations, p is the number of predictors, SSE is the sum of squared errors, and SST is the total sum of squares.
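The formulas above can be sketched in plain Python using the sample data from the table. The predictor count p = 1 is a hypothetical assumption here, needed only for the adjusted R² and residual standard error:

```python
import math

actual = [120, 135, 142, 158, 165, 180]
predicted = [118, 138, 140, 155, 168, 176]
n = len(actual)
p = 1  # assumed number of predictors (hypothetical for this sample)

e = [a - pr for a, pr in zip(actual, predicted)]  # residuals

mae = sum(abs(r) for r in e) / n
mse = sum(r * r for r in e) / n
rmse = math.sqrt(mse)

# MAPE: sum and count only observations with nonzero actual values.
nonzero = [(r, a) for r, a in zip(e, actual) if a != 0]
mape = sum(abs(r) / abs(a) * 100 for r, a in nonzero) / len(nonzero)
smape = sum(2 * abs(r) / (abs(a) + abs(pr)) * 100
            for r, a, pr in zip(e, actual, predicted)) / n

mean_actual = sum(actual) / n
sse = sum(r * r for r in e)                          # sum of squared errors
sst = sum((a - mean_actual) ** 2 for a in actual)    # total sum of squares
r2 = 1 - sse / sst
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

mean_error = sum(e) / n
rse = math.sqrt(sse / (n - p - 1))                   # residual std. error
dw = sum((e[i] - e[i - 1]) ** 2 for i in range(1, n)) / sse  # Durbin-Watson

print(f"MAE={mae:.3f} RMSE={rmse:.3f} R²={r2:.4f} DW={dw:.3f}")
```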
The calculator measures how close predicted values are to actual values. It reports error metrics, fit scores, residual diagnostics, and trend visuals for deeper evaluation.
MAE shows the average absolute miss, while RMSE penalizes larger mistakes more heavily. Using both helps you judge typical error and sensitivity to outliers.
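A quick sketch makes the outlier sensitivity concrete. The second residual set below is hypothetical, with one large miss swapped in:

```python
import math

def mae(errors):
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

small = [2, -3, 2, 3, -3, 4]          # residuals from the sample table
with_outlier = [2, -3, 2, 3, -3, 20]  # hypothetical: one large miss

# MAE grows roughly in proportion to the miss; RMSE jumps much more
# because the outlier is squared before averaging.
print(mae(small), rmse(small))
print(mae(with_outlier), rmse(with_outlier))
```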
Use adjusted R² when comparing models with different numbers of predictors. It penalizes extra variables that do not improve model quality enough.
MAPE divides by actual values. If actual values include zeros, those rows cannot be used safely for MAPE. The tool skips zero actuals automatically.
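A minimal sketch of that skip rule (the data values here are made up for illustration):

```python
def mape(actual, predicted):
    # Skip rows where the actual value is zero, since |e| / |actual|
    # would divide by zero; average over the remaining rows only.
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    if not pairs:
        raise ValueError("no nonzero actual values; MAPE is undefined")
    return sum(abs(a - p) / abs(a) * 100 for a, p in pairs) / len(pairs)

# Only the two nonzero-actual rows contribute: (10% + 5%) / 2 = 7.5%
print(mape([0, 100, 200], [5, 110, 190]))  # 7.5
```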
A negative mean error means predictions tend to be higher than actual values overall. That suggests the model is biased toward overprediction.
Durbin-Watson checks whether residuals are correlated in sequence. Values near 2 usually suggest low autocorrelation, while extremes can signal pattern problems.
Yes. Run the calculator once for each model using the same actual values. Then compare MAE, RMSE, R², bias, and residual behavior.
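The comparison workflow can be sketched like this; model_b's predictions are hypothetical values invented for the example:

```python
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

actual = [120, 135, 142, 158, 165, 180]
model_a = [118, 138, 140, 155, 168, 176]  # predictions from the table
model_b = [121, 134, 144, 157, 166, 179]  # hypothetical second model

# Same actual values, one score per model -> directly comparable.
print("Model A RMSE:", rmse(actual, model_a))
print("Model B RMSE:", rmse(actual, model_b))
```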
No. A high R² can still hide bias, large outliers, or unstable residual patterns. Always review error metrics and diagnostic behavior together.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.