Regression Score Online Calculator

Turn model predictions into clear scores for fast decisions. Upload a CSV or type lists in seconds. Download tables, share PDFs, and track improvements over time.

Calculator


Example data table

Use this example to understand how scores change with error size.
#   Actual   Predicted   Error   |Error|
1   3.0      2.8          0.2     0.2
2   4.0      4.2         -0.2     0.2
3   5.0      4.9          0.1     0.1
4   6.0      6.4         -0.4     0.4
5   7.0      6.8          0.2     0.2

Formula used

R² score
R² = 1 − (Σ(y − ŷ)²) / (Σ(y − ȳ)²)
Measures variance explained by predictions.
MAE and MSE
MAE = (1/n) Σ|y − ŷ|
MSE = (1/n) Σ(y − ŷ)²
Error magnitudes, with MSE emphasizing outliers.
RMSE and percent errors
RMSE = √MSE
MAPE = (100/n) Σ |(y − ŷ) / y|
sMAPE = (100/n) Σ 2|y − ŷ|/(|y|+|ŷ|)
Note: if all actual values are identical, the variance denominator is zero, so R² is undefined by the formula. In that case the calculator reports R² = 1 when predictions match exactly and R² = 0 otherwise.
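The formulas above can be sketched directly from paired lists. This is a minimal illustration, not the calculator's own source; the function name and epsilon handling are assumptions, and the final lines run it on the example table from earlier in the page.

```python
import math

def regression_scores(y_true, y_pred, eps=1e-8):
    """Compute R², MAE, MSE, RMSE, MAPE, and sMAPE from paired lists."""
    n = len(y_true)
    errors = [a - p for a, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mean_y = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((a - mean_y) ** 2 for a in y_true)
    # Degenerate case: all actual values identical -> zero variance.
    if ss_tot == 0:
        r2 = 1.0 if all(e == 0 for e in errors) else 0.0
    else:
        r2 = 1 - ss_res / ss_tot
    # Percent errors; eps is a floor to avoid dividing by zero.
    mape = 100 / n * sum(abs(e) / max(abs(a), eps)
                         for e, a in zip(errors, y_true))
    smape = 100 / n * sum(2 * abs(e) / max(abs(a) + abs(p), eps)
                          for e, a, p in zip(errors, y_true, y_pred))
    return {"r2": r2, "mae": mae, "mse": mse, "rmse": rmse,
            "mape": mape, "smape": smape}

# The example table above: R² ≈ 0.971, MAE = 0.22, RMSE ≈ 0.241
scores = regression_scores([3.0, 4.0, 5.0, 6.0, 7.0],
                           [2.8, 4.2, 4.9, 6.4, 6.8])
print(round(scores["r2"], 3), round(scores["mae"], 2), round(scores["rmse"], 3))
```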

How to use this calculator

  1. Choose Manual lists or CSV upload.
  2. Provide paired actual and predicted values.
  3. Set decimal places and MAPE handling for your data.
  4. Press Calculate to see metrics above the form.
  5. Use the download buttons to export CSV or PDF reports.

Model evaluation in practical pipelines

Regression scoring turns paired observations into decisions. A good workflow records R², RMSE, MAE, MSE, median absolute error, and percent errors for each model run. When you benchmark weekly forecasts, track how RMSE changes as data volume grows: a 0.50 RMSE shift on a target around 5.0 is a 10% change relative to scale, which is usually meaningful.

Interpreting R² beyond a single number

R² compares your model to a simple baseline that predicts the mean of the actual values. Values near 1.00 indicate strong explanatory power, while values near 0.00 suggest little improvement over the baseline. Negative R² can happen when predictions are worse than the mean. Pair R² with an error metric, because a high R² can still hide a bias that shifts every prediction by the same amount. If all actual values are constant, variance is zero, so the calculator reports R² as 1 for perfect matches, otherwise 0.
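A tiny worked example makes the negative case concrete. The pairs below are hypothetical, chosen so the model misses by more than the mean baseline would:

```python
# A model that is worse than always predicting the mean.
y_true = [1.0, 2.0, 3.0]   # mean baseline predicts 2.0 for every row
y_pred = [3.0, 1.0, 4.0]   # residuals: -2, 1, -1

ss_res = sum((a - p) ** 2 for a, p in zip(y_true, y_pred))  # 6.0
ss_tot = sum((a - 2.0) ** 2 for a in y_true)                # 2.0
r2 = 1 - ss_res / ss_tot
print(r2)  # → -2.0, i.e. three times the baseline's squared error
```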

Choosing error units: MAE, MSE, RMSE

MAE reports typical absolute deviation in the same units as the target, which is easy to communicate to stakeholders. MSE and RMSE punish large misses more strongly; RMSE returns to target units while preserving the outlier penalty. Median absolute error highlights central performance when occasional spikes exist, and comparing it with MAE reveals tail risk. Use MAE for stable service expectations and RMSE when rare high errors have high cost.

Percent metrics: MAPE and sMAPE for scale shifts

MAPE is intuitive, but it fails when actual values are zero or near zero. This calculator lets you exclude zero-actual rows or apply an epsilon denominator to keep the metric finite. sMAPE is symmetric and bounded, helping comparisons across varying scales, and it remains defined when predictions and actuals are both small. Review MAPE alongside sMAPE when the target distribution is skewed.
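The two zero-handling options can be sketched as one function with a mode switch. This is an illustrative sketch, not the page's implementation; the mode names and epsilon default are assumptions.

```python
def mape(y_true, y_pred, mode="exclude", eps=1e-8):
    """MAPE with the zero-handling options described above:
    mode='exclude' skips rows where actual == 0;
    mode='epsilon' divides by a small floor instead."""
    pairs = list(zip(y_true, y_pred))
    if mode == "exclude":
        pairs = [(a, p) for a, p in pairs if a != 0]
    if not pairs:
        return float("nan")
    return 100 / len(pairs) * sum(
        abs(a - p) / max(abs(a), eps) for a, p in pairs)

y_true = [0.0, 2.0, 4.0]
y_pred = [0.5, 2.2, 3.8]
print(round(mape(y_true, y_pred, mode="exclude"), 1))  # → 7.5
print(mape(y_true, y_pred, mode="epsilon") > 1e6)      # → True
```

Note how the epsilon mode keeps the metric finite but lets the zero-actual row dominate the average, which is why excluding those rows is often the clearer choice.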

From raw pairs to shareable reports

You can paste lists or upload CSV data with automatic delimiter detection and optional column selection. The results table shows row errors, absolute errors, and squared errors, which helps diagnose bias and outliers quickly. Correlation (Pearson r) is included to summarize association, but it should not replace error checks. Export CSV for further analysis, or generate a PDF for audits and stakeholder updates, keeping evaluation reproducible.
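Delimiter auto-detection of the kind described can be sketched with the standard library's `csv.Sniffer`; the page's own parser may work differently, and the sample data here is invented.

```python
import csv
import io

raw = "actual;predicted\n3.0;2.8\n4.0;4.2\n"

# Sniff the delimiter from the raw text, then parse with it.
dialect = csv.Sniffer().sniff(raw)
reader = csv.DictReader(io.StringIO(raw), dialect=dialect)
rows = [(float(r["actual"]), float(r["predicted"])) for r in reader]
print(dialect.delimiter, rows)  # detects ';' and yields numeric pairs
```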

FAQs

Which scores does this calculator compute?

It calculates R² plus MAE, MSE, RMSE, median absolute error, MAPE, sMAPE, and correlation from paired actual and predicted values. A row table shows errors, and you can export the summary and rows to CSV or PDF.

Why can the R² score be negative?

R² compares your model against predicting the mean of actual values. If your predictions produce larger residual error than that baseline, the numerator outweighs explained variance and R² becomes negative, indicating worse-than-baseline performance.

Should I focus on MAE or RMSE?

Use MAE for an easy-to-explain typical absolute miss in target units. Use RMSE when large errors are especially costly, because squaring penalizes outliers more. Report both when you need a balanced view of average and tail risk.

How does MAPE handle zero actual values?

MAPE divides by the actual value, so zeros cause issues. Choose “Exclude zero actuals” to skip those rows, or choose “Use epsilon” to divide by a small floor value. sMAPE stays defined when values are small.

Can I upload CSVs with different column names?

Yes. The CSV mode attempts to auto-detect common headers like actual, y_true, predicted, or y_pred. You can also provide a header name or a zero-based column index for each field, and select the delimiter if needed.
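Header auto-detection with an index fallback might look like the sketch below. The alias sets and function name are hypothetical; the real tool's alias list may differ.

```python
# Hypothetical alias sets for matching common column headers.
ACTUAL_ALIASES = {"actual", "y_true", "true", "target"}
PRED_ALIASES = {"predicted", "y_pred", "pred", "forecast"}

def find_column(headers, aliases, fallback_index=None):
    """Return the index of the first header matching an alias,
    else the user-supplied zero-based fallback index."""
    lowered = [h.strip().lower() for h in headers]
    for i, h in enumerate(lowered):
        if h in aliases:
            return i
    return fallback_index

headers = ["id", "y_true", "y_pred"]
print(find_column(headers, ACTUAL_ALIASES))  # → 1
print(find_column(headers, PRED_ALIASES))    # → 2
print(find_column(headers, {"weight"}, fallback_index=0))  # → 0
```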

Is my data stored anywhere?

The script computes metrics for the current page request and does not write values to a database or file. Uploaded CSVs are read from a temporary upload location during processing, and the tool only generates downloads in your browser.

Related Calculators

Model Fit Score, Regression R Squared, Adjusted Model Fit, Explained Variance Score, Regression Fit Index, Model Accuracy Score, Regression Performance Score, R Squared Online, Adjusted R2 Calculator, Model Fit Calculator

Important Note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.