Tune your regression models with complexity-corrected fit scores. See how fit responds to the number of predictors, sample size, and noise. Download reports, share tables, and make decisions with confidence.
| Scenario | n | p | R² | RSS | TSS | Adjusted R² | RMSE |
|---|---|---|---|---|---|---|---|
| Baseline linear model | 120 | 6 | 0.8419 | 215.4 | 1362.7 | 0.8335 | 1.3398 |
| Compact model | 120 | 3 | 0.8207 | 244.3 | 1362.7 | 0.8161 | 1.4268 |
| Over-parameterized model | 120 | 18 | 0.8690 | 178.5 | 1362.7 | 0.8457 | 1.2196 |
R² never decreases when you add predictors, even weak ones. Adjusted R² corrects this by scaling the unexplained variance by the available degrees of freedom: adjusted R² = 1 − (1 − R²)(n − 1)/(n − p − 1). In this calculator, n is the number of observations and p is the number of predictors, excluding the intercept. When p grows relative to n, the penalty becomes stronger, helping you avoid models that look good only because they are complex.
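The penalty can be sketched in a few lines. This is a minimal illustration, not the calculator's own code; it computes adjusted R² directly from RSS and TSS (since RSS/TSS = 1 − R²), using the baseline scenario's inputs:

```python
def adjusted_r2(rss: float, tss: float, n: int, p: int) -> float:
    """Adjusted R² = 1 - (RSS/TSS) * (n - 1) / (n - p - 1)."""
    return 1 - (rss / tss) * (n - 1) / (n - p - 1)

# Baseline scenario: n = 120 observations, p = 6 predictors
print(round(adjusted_r2(215.4, 1362.7, 120, 6), 4))
```

Swapping in p = 18 with the same n shows how the (n − 1)/(n − p − 1) factor inflates the unexplained variance as predictors pile up.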
Residual Sum of Squares (RSS) measures total squared error in the target’s units. RMSE is √(RSS/n), giving an average error magnitude that is easier to interpret. Because RMSE depends on the outcome scale, compare RMSE only across models predicting the same target with the same preprocessing. A lower RMSE indicates tighter residuals, but also check residual plots for structure.
AIC and BIC add complexity penalties to a likelihood-style fit term based on n ln(RSS/n). AIC adds 2k, while BIC adds k ln(n), where k counts estimated parameters; BIC penalizes complexity more than AIC once n exceeds about 8, and the gap widens as n grows. Use these criteria to rank candidate models fitted on the same data and response. Small differences can be noise; larger gaps suggest one model generalizes better with fewer unnecessary predictors.
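A minimal sketch of both criteria, up to an additive constant. One assumption to flag: k is taken as p + 1 here (coefficients plus intercept); conventions differ on whether the error variance is also counted, but the ranking of models on the same data is unaffected by a constant shift in k:

```python
import math

def aic_bic(rss: float, n: int, k: int) -> tuple[float, float]:
    """Gaussian-likelihood AIC and BIC up to an additive constant.

    k = number of estimated parameters; assumed here to be p + 1
    (predictors plus intercept) -- conventions vary.
    """
    fit = n * math.log(rss / n)
    return fit + 2 * k, fit + k * math.log(n)

aic_base, bic_base = aic_bic(215.4, 120, 6 + 1)   # baseline model
aic_over, bic_over = aic_bic(178.5, 120, 18 + 1)  # over-parameterized model
```

With the table's numbers, both criteria prefer the baseline model despite the over-parameterized model's higher R²: its lower RSS does not buy enough fit to pay for twelve extra predictors.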
The overall F-statistic summarizes whether the model explains variance better than a null model with only an intercept. It depends on R², p, and the residual degrees of freedom: F = (R²/p) / ((1 − R²)/(n − p − 1)). A very large F typically indicates that at least one predictor contributes meaningfully, but it does not identify which one. Pair this with coefficient tests or cross-validation when selecting features.
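The computation is a one-liner once R² is known; this sketch derives R² from the baseline scenario's RSS and TSS rather than hard-coding a rounded value:

```python
# Baseline scenario: RSS = 215.4, TSS = 1362.7, n = 120, p = 6
r2 = 1 - 215.4 / 1362.7
f = (r2 / 6) / ((1 - r2) / (120 - 6 - 1))  # F = (R²/p) / ((1-R²)/(n-p-1))
print(round(f, 1))
```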
Start with a baseline model, record adjusted R² and RMSE, then add predictors in small batches. If adjusted R² rises and RMSE falls, the added features likely help. If adjusted R² drops, simplify or regularize. When comparing several candidates, rank by BIC for compact production models and by AIC when you prefer sensitivity to small fit gains. Always validate on held-out data. Document every run so stakeholders can review your choices later.
Provide n, p, and either R² or both RSS and TSS. With only R² you still get adjusted R². Add RSS to unlock RMSE, AIC, and BIC.
Adjusted R² applies a penalty for each predictor. If a new feature improves fit only slightly, the penalty can outweigh the gain, producing a lower adjusted R² than the raw R².
Yes, R² can be negative. If RSS exceeds TSS, the model fits worse than simply predicting the mean, yielding negative R². The calculator reports the value and still computes adjusted R² when the degrees of freedom allow.
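A tiny illustration with hypothetical numbers (the RSS here is made up to exceed the table's TSS):

```python
# Hypothetical model whose residuals are worse than predicting the mean
rss, tss = 1500.0, 1362.7  # RSS > TSS, so R² goes negative
r2 = 1 - rss / tss
print(round(r2, 4))
```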
Use AIC and BIC only for models fitted to the same dataset and target. Lower is better. BIC penalizes complexity more strongly, often favoring simpler models, especially when n is large.
No, a large F-statistic alone does not mean the model is good. It indicates the model improves over an intercept-only baseline, not that assumptions hold or predictions generalize. Check residual diagnostics and validate with cross-validation or a test set.
RMSE requires RSS. If you only enter R² without TSS, RSS cannot be derived. Enter RSS directly, or enter TSS alongside R² so the calculator can infer RSS.
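The inference mentioned above is a single rearrangement of the R² definition; a sketch with the baseline scenario's values:

```python
import math

# Baseline scenario: R² ≈ 0.8419, TSS = 1362.7, n = 120
r2, tss, n = 0.8419, 1362.7, 120
rss = (1 - r2) * tss        # RSS = (1 - R²) * TSS
rmse = math.sqrt(rss / n)   # now RMSE is available too
```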
Important Note: All calculators on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.