Enter Model Data
Paste values separated by commas, spaces, semicolons, or new lines. The calculator supports optional sample weights and custom metric scoring weights.
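Mixed delimiters like these can be normalized with a single split. A minimal Python sketch (the `parse_values` helper is hypothetical, not the calculator's actual code):

```python
import re

def parse_values(text):
    """Split pasted text on commas, semicolons, spaces, or new lines."""
    tokens = re.split(r"[,;\s]+", text.strip())
    return [float(t) for t in tokens if t]

# Commas, semicolons, spaces, and new lines all parse the same way:
parse_values("100, 112; 120\n130 142")  # [100.0, 112.0, 120.0, 130.0, 142.0]
```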
Example Data Table
This sample shows how actual values, predictions, and errors line up before the score is calculated.
| # | Actual | Predicted | Residual | Absolute Error |
|---|---|---|---|---|
| 1 | 100 | 98 | 2 | 2 |
| 2 | 112 | 110 | 2 | 2 |
| 3 | 120 | 123 | -3 | 3 |
| 4 | 130 | 129 | 1 | 1 |
| 5 | 142 | 140 | 2 | 2 |
| 6 | 155 | 157 | -2 | 2 |
| 7 | 168 | 166 | 2 | 2 |
| 8 | 180 | 183 | -3 | 3 |
Formulas Used
Residual: ei = yi - ŷi
Weighted Mean: ȳw = Σ(wiyi) / Σwi
Weighted SSE: Σ[wi(yi - ŷi)²]
Weighted SST: Σ[wi(yi - ȳw)²]
R²: 1 - SSE / SST
Adjusted R²: 1 - (1 - R²)(n - 1) / (n - p - 1)
MAE: Σ(wi|ei|) / Σwi
MSE: Σ(wiei²) / Σwi
RMSE: √MSE
MAPE: 100 × Σ[wi|ei / yi|] / Σwi, excluding rows where the actual value is zero.
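The formulas above can be sketched directly in Python. This is an illustration only (the `weighted_metrics` helper is hypothetical, not the page's source); weights default to uniform, and undefined metrics return `None`:

```python
import math

def weighted_metrics(y, y_hat, w=None, p=1):
    """Compute weighted R², adjusted R², MAE, MSE, RMSE, and MAPE."""
    n = len(y)
    w = w or [1.0] * n                                   # uniform weights by default
    sw = sum(w)
    e = [yi - fi for yi, fi in zip(y, y_hat)]            # residuals ei = yi - ŷi
    y_bar = sum(wi * yi for wi, yi in zip(w, y)) / sw    # weighted mean ȳw
    sse = sum(wi * ei**2 for wi, ei in zip(w, e))        # weighted SSE
    sst = sum(wi * (yi - y_bar)**2 for wi, yi in zip(w, y))  # weighted SST
    r2 = 1 - sse / sst
    # Adjusted R² is only valid when n - p - 1 is positive
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1) if n - p - 1 > 0 else None
    mae = sum(wi * abs(ei) for wi, ei in zip(w, e)) / sw
    mse = sse / sw
    rmse = math.sqrt(mse)
    # MAPE excludes rows where the actual value is zero
    pct = [(wi, abs(ei / yi)) for wi, ei, yi in zip(w, e, y) if yi != 0]
    mape = 100 * sum(wi * r for wi, r in pct) / sum(wi for wi, _ in pct) if pct else None
    return {"r2": r2, "adj_r2": adj_r2, "mae": mae, "mse": mse, "rmse": rmse, "mape": mape}
```

Running this on the eight rows of the example table gives MAE = 2.125 and MSE = 4.875, matching the absolute errors shown there.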
Composite Fit Score:
First, each metric becomes a normalized sub-score on your chosen scale.
R² and adjusted R² are clipped between 0 and 1, then multiplied by the scale.
RMSE and MAE are converted into efficiency scores using the spread and magnitude of actual values.
The final score is the weighted average of all available sub-scores, using your custom scoring weights.
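One possible reading of those steps in Python. The clipping and weighted averaging follow the description above, but the exact efficiency conversions for RMSE and MAE are not published, so the spread- and mean-based mappings below are illustrative assumptions:

```python
def composite_score(metrics, weights, scale=100, actual_spread=1.0, actual_mean=1.0):
    """Blend normalized sub-scores into one composite number.

    The RMSE/MAE efficiency conversions are hypothetical stand-ins for
    the page's unpublished mapping from error size to a 0..1 score.
    """
    subs = {}
    if metrics.get("r2") is not None:
        subs["r2"] = min(max(metrics["r2"], 0.0), 1.0) * scale      # clip to [0, 1]
    if metrics.get("adj_r2") is not None:
        subs["adj_r2"] = min(max(metrics["adj_r2"], 0.0), 1.0) * scale
    if metrics.get("rmse") is not None and actual_spread > 0:
        subs["rmse"] = max(0.0, 1.0 - metrics["rmse"] / actual_spread) * scale
    if metrics.get("mae") is not None and actual_mean > 0:
        subs["mae"] = max(0.0, 1.0 - metrics["mae"] / actual_mean) * scale
    # Weighted average over whichever sub-scores are available
    total_w = sum(weights.get(k, 0.0) for k in subs)
    return sum(subs[k] * weights.get(k, 0.0) for k in subs) / total_w if total_w else None
```

A perfect fit (R² = 1, RMSE = 0) scores the full scale regardless of how the metric weights are split.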
How to Use This Calculator
- Paste the actual observed values into the first box.
- Paste the matching predicted values into the second box.
- Add optional sample weights when some rows should matter more.
- Enter the predictor count if you want a meaningful adjusted R².
- Set the score scale and MAPE cap for your preferred scoring system.
- Tune metric weights to emphasize fit, stability, or error control.
- Click the calculate button to see the score, graph, and full breakdown.
- Use the export buttons to save the summary and row-level results.
Frequently Asked Questions
1) What does the regression fit score represent?
It is a combined performance score. The page merges fit strength (R², adjusted R²) and error behavior (MAE, RMSE, MAPE) into one normalized number, making model comparison faster and easier.
2) Why not rely on R² alone?
R² shows explained variance, but it can hide large prediction errors. MAE, RMSE, and MAPE reveal how far predictions miss in practical terms.
3) Why can MAPE show as unavailable?
MAPE divides by actual values. When actual values equal zero, percentage error is undefined, so those rows are excluded from MAPE calculations.
4) When does adjusted R² become unavailable?
Adjusted R² needs enough observations beyond the predictor count. If n minus p minus 1 is zero or negative, the adjusted form is not valid.
5) When should I use sample weights?
Use them when some observations are more important, more reliable, or more frequent than others. Weighted scoring can better reflect business or modeling priorities.
6) Why are both MAE and RMSE included?
MAE gives the average miss size. RMSE penalizes large misses more heavily, so it is useful when outliers matter more.
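A quick numerical illustration of the difference, using a hypothetical set of residuals with one outlier:

```python
import math

errors = [1, 1, 1, 1, 10]              # four small misses, one large one
mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e**2 for e in errors) / len(errors))
# MAE = 2.8, RMSE ≈ 4.56: squaring lets the single outlier dominate RMSE
```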
7) Can I compare different models with this page?
Yes. Use the same dataset, score scale, MAPE cap, and metric weights across models. That keeps the comparison fair and consistent.
8) What score range is usually considered good?
Higher is better, but acceptable ranges depend on noise, scale, and use case. As a broad guide, 75 plus is strong and 90 plus is excellent.