Know how well your model generalizes, fast. Adjust R-squared for the number of variables and observations accurately. Get clean summaries, downloadable files, and transparent calculations instantly.
| Scenario | n | p | R² | Adjusted R² | Interpretation |
|---|---|---|---|---|---|
| Baseline model | 50 | 2 | 0.82 | 0.812 | Strong, limited complexity |
| Add one predictor | 50 | 3 | 0.83 | 0.819 | Improves after penalty |
| Add many predictors | 50 | 10 | 0.86 | 0.823 | Gain is smaller than expected |
| Small sample risk | 18 | 6 | 0.80 | 0.691 | Penalty grows with low n |
Adjusted R² refines R² by penalizing extra predictors, making it more stable for comparing models trained on the same target. It uses sample size (n) and predictor count (p), so identical R² values can imply very different reliability. With n=50 and p=2, R²=0.82 becomes AdjR²≈0.812. If you keep n fixed and increase p to 10, the adjusted score rises only to ≈0.823 even though R² climbs to 0.86: the penalty absorbs most of the apparent gain.
In least squares, R² never decreases when you add variables, even if they are weak. The adjustment rescales the unexplained fraction (1−R²) by (n−1)/(n−p−1). The penalty is mild when p is small relative to n, but it accelerates as p approaches n. This is why a small apparent gain, such as 0.82→0.83, can be meaningful with low p, yet unimpressive with many predictors.
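The formula above is a one-liner to implement. A minimal sketch (the function name is illustrative, not part of the calculator) that also applies the degrees-of-freedom guard discussed next:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1).

    Returns None when n <= p + 1: with no residual degrees of
    freedom left, the adjustment is undefined.
    """
    if n <= p + 1:
        return None
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Scenarios from the table above:
print(round(adjusted_r2(0.82, 50, 2), 3))   # 0.812
print(round(adjusted_r2(0.86, 50, 10), 3))  # 0.824
print(round(adjusted_r2(0.80, 18, 6), 3))   # 0.691
```

Note how the p=10 case keeps almost none of the jump from 0.82 to 0.86 in raw R², which is the penalty at work.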
Residual degrees of freedom are n−p−1. If n ≤ p+1, adjusted R² is undefined because the model has no remaining degrees of freedom to estimate error. Small samples amplify variance and can create optimistic R² values that collapse after adjustment. Example: n=18, p=6, R²=0.80 yields AdjR²≈0.691, signaling limited evidence and high model flexibility.
When you enter SSE and SST, the calculator derives R² = 1 − SSE/SST and then computes adjusted R². In data mode, SSE is computed from actual and predicted pairs, then RMSE = √(SSE/n) and MAE = (1/n)·Σ|y−ŷ| are added. RMSE highlights large misses, while MAE is more robust to outliers. Tracking both helps distinguish “few big errors” from “many small errors.”
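The data-mode quantities follow directly from the definitions in the paragraph above. A self-contained sketch (the helper name and sample values are illustrative, not the calculator's internals):

```python
import math

def regression_metrics(actual, predicted):
    """Compute SSE, SST, R^2, RMSE, and MAE from paired observations."""
    n = len(actual)
    mean_y = sum(actual) / n
    # SSE: squared distance from predictions; SST: squared distance from the mean
    sse = sum((y - yhat) ** 2 for y, yhat in zip(actual, predicted))
    sst = sum((y - mean_y) ** 2 for y in actual)
    r2 = 1 - sse / sst
    rmse = math.sqrt(sse / n)                                      # penalizes large misses
    mae = sum(abs(y - yhat) for y, yhat in zip(actual, predicted)) / n  # robust to outliers
    return {"SSE": sse, "SST": sst, "R2": r2, "RMSE": rmse, "MAE": mae}

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.3, 6.9, 9.4]
print(regression_metrics(y_true, y_pred))
```

Running both RMSE and MAE on the same pairs makes the "few big errors" versus "many small errors" distinction concrete: a widening gap between them signals a few large misses.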
Use adjusted R² to screen feature additions, not as the only validation signal. Prefer comparing models on the same evaluation split, and confirm improvements with cross-validation or a holdout set. Negative adjusted values are possible when the model underperforms a mean baseline. AIC and BIC appear when SSE is positive; they compare fit while penalizing parameter count (k), helping rank models under similar data assumptions.
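One common way to compute AIC and BIC from SSE is the Gaussian least-squares approximation; the calculator's exact constants may differ, so treat this sketch as illustrative:

```python
import math

def aic_bic(sse, n, k):
    """AIC and BIC under the Gaussian least-squares approximation:
    AIC = n * ln(SSE / n) + 2k,  BIC = n * ln(SSE / n) + k * ln(n).
    Requires SSE > 0; k counts estimated parameters (including the intercept).
    """
    if sse <= 0:
        raise ValueError("SSE must be positive to evaluate the log-likelihood")
    base = n * math.log(sse / n)
    return base + 2 * k, base + k * math.log(n)
```

Lower values win when ranking candidate models on the same data; BIC's ln(n) multiplier penalizes extra parameters more heavily than AIC once n exceeds about 7.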
Adjusted R² estimates explained variance while penalizing added predictors. The score is most useful for comparing models fitted on the same dataset and target, because it accounts for both sample size and feature count.
Yes, adjusted R² can be negative. If the model fits worse than predicting the mean, R² can fall below zero and adjusted R² drops further. Negative values are a warning sign to revisit features, leakage, or the evaluation setup.
Enter the number of independent variables, excluding the intercept. For one-hot encoded categories, count the created columns. For interactions or polynomial terms, count each added term as a predictor.
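Counting predictors can be tallied mechanically. A sketch with a hypothetical helper, assuming the drop-one-reference-level convention for one-hot encoding (if your encoder keeps all levels, count every created column instead):

```python
def predictor_count(numeric_features, categorical_levels):
    """Count predictors (p), excluding the intercept.

    categorical_levels maps each categorical feature to its number of
    levels; one-hot encoding with a dropped reference level creates
    (levels - 1) columns per feature.
    """
    p = numeric_features
    for levels in categorical_levels.values():
        p += levels - 1
    return p

# 2 numeric features plus a 4-level category -> 2 + 3 = 5 predictors
print(predictor_count(2, {"region": 4}))  # 5
```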
Use data mode when you have actual and predicted values. It computes SSE, SST, R², adjusted R², RMSE, and MAE in one run, and lets you validate calculations with a preview table.
AIC and BIC require a positive SSE and a valid parameter count to evaluate the log-likelihood approximation. If SSE is missing, zero, or not computable from the inputs, the criteria are withheld to avoid misleading results.
Keep the same target, dataset split, and evaluation process. Compare adjusted R² alongside RMSE or MAE. If models differ greatly in complexity, also review AIC/BIC and confirm with cross-validation.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of the results. Please consult other sources as well.