Analyze candidate models with robust selection criteria and diagnostics. Balance accuracy and complexity with confidence. See rankings, formulas, examples, exports, and graphs in seconds.
Enter the current model's details first. Then optionally add comparison models, one model per line in CSV format: Name,RSS,k.
-2 log(L) = n × [ln(2π) + 1 + ln(RSS / n)]
This form assumes normally distributed residuals and maximum likelihood estimation based on the residual sum of squares.
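Under those assumptions, the -2 log-likelihood depends on the data only through RSS and n, so it can be computed directly. A minimal sketch (the function name is illustrative, not part of the calculator):

```python
import math

def neg2_log_likelihood(rss: float, n: int) -> float:
    """-2 log(L) for a Gaussian model fitted by maximum likelihood.

    Assumes normally distributed residuals, so the likelihood is a
    function of the residual sum of squares (RSS) and sample size n.
    """
    return n * (math.log(2 * math.pi) + 1 + math.log(rss / n))
```

For example, with RSS = 84.6 and n = 120 (the Linear Model in the table below), this returns roughly 298.6, which is the base value the criteria penalize.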
AIC = -2 log(L) + 2k
AICc = AIC + [2k(k + 1)] / (n - k - 1)
BIC = -2 log(L) + k ln(n)
HQIC = -2 log(L) + 2k ln(ln(n))
GIC = -2 log(L) + λk
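The five criteria above share the same -2 log(L) term and differ only in the penalty. A sketch computing all of them at once, assuming Gaussian maximum likelihood (function and dictionary key names are illustrative):

```python
import math

def information_criteria(rss: float, n: int, k: int, lam: float = 2.0) -> dict:
    """AIC, AICc, BIC, HQIC, and GIC from RSS under Gaussian ML.

    k is the penalized parameter count; lam is the GIC penalty
    weight (lam = 2 reproduces AIC).
    """
    neg2ll = n * (math.log(2 * math.pi) + 1 + math.log(rss / n))
    aic = neg2ll + 2 * k
    # The AICc correction is defined only when n - k - 1 > 0
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1) if n - k - 1 > 0 else float("nan")
    bic = neg2ll + k * math.log(n)
    hqic = neg2ll + 2 * k * math.log(math.log(n))
    gic = neg2ll + lam * k
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "HQIC": hqic, "GIC": gic}
```

With n = 120 and k = 6, the BIC penalty k ln(n) ≈ 28.7 exceeds the AIC penalty 2k = 12, which is why BIC favors smaller models at this sample size.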
MSE = RSS / n
RMSE = √MSE
R² = 1 - RSS / TSS
Adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1)
Mallows' Cp = RSS / σ²full - (n - 2k)
Use p as the predictor count for adjusted R². Use k as the penalty parameter count for information criteria.
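The fit measures above follow directly from RSS, TSS, and the counts p and k. A sketch, assuming the distinction just described (names are illustrative):

```python
import math

def fit_measures(rss: float, tss: float, n: int, p: int, k: int,
                 sigma2_full: float) -> dict:
    """MSE, RMSE, R-squared, adjusted R-squared, and Mallows' Cp.

    p = predictor count (used by adjusted R-squared);
    k = penalized parameter count (used by Cp);
    sigma2_full = error variance estimate from the full model.
    """
    mse = rss / n
    r2 = 1 - rss / tss
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    cp = rss / sigma2_full - (n - 2 * k)
    return {"MSE": mse, "RMSE": math.sqrt(mse), "R2": r2,
            "AdjR2": adj_r2, "Cp": cp}
```

For the Linear Model row below (RSS = 84.6, TSS = 215.0, n = 120, p = 5, k = 6, σ²full = 0.78), R² ≈ 0.607, adjusted R² ≈ 0.589, and Cp ≈ 0.46.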
| Model | n | p | k | RSS | TSS | σ² full | Typical use |
|---|---|---|---|---|---|---|---|
| Linear Model | 120 | 5 | 6 | 84.6 | 215.0 | 0.78 | Baseline fit |
| Reduced Model | 120 | 3 | 4 | 92.3 | 215.0 | 0.78 | Lower complexity |
| Spline Model | 120 | 7 | 8 | 79.8 | 215.0 | 0.78 | Flexible nonlinear fit |
| Regularized Model | 120 | 4 | 5 | 81.4 | 215.0 | 0.78 | Balanced option |
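The table's four candidates can be ranked with any of the criteria. A sketch ranking them by AICc, assuming the Gaussian -2 log(L) form given earlier (the models and values come straight from the table; the helper name is illustrative):

```python
import math

# Candidate models from the table: (name, RSS, k), all with n = 120
models = [("Linear Model", 84.6, 6), ("Reduced Model", 92.3, 4),
          ("Spline Model", 79.8, 8), ("Regularized Model", 81.4, 5)]
n = 120

def aicc(rss: float, k: int) -> float:
    """AICc from RSS under Gaussian maximum likelihood."""
    neg2ll = n * (math.log(2 * math.pi) + 1 + math.log(rss / n))
    return neg2ll + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

ranked = sorted(models, key=lambda m: aicc(m[1], m[2]))
for name, rss, k in ranked:
    print(f"{name}: AICc = {aicc(rss, k):.2f}")
```

On these numbers the Regularized Model ranks first: its RSS is close to the Spline Model's, but it pays a smaller complexity penalty.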
This calculator helps you compare competing mathematical or statistical models using information criteria, fit measures, and optional diagnostics. Lower criterion values usually indicate a better tradeoff between fit and complexity.
Use AIC when predictive performance matters most. Use BIC when you want a stronger penalty for complexity, especially with larger samples or when you prefer a more conservative model choice.
AICc needs the denominator n - k - 1 to stay positive. If your sample is too small relative to the number of parameters, the correction becomes undefined.
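A small guard makes that failure mode explicit. A sketch (the function name is illustrative):

```python
def aicc_correction(n: int, k: int) -> float:
    """Small-sample AICc correction term; requires n - k - 1 > 0."""
    if n - k - 1 <= 0:
        raise ValueError(f"AICc undefined: need n > k + 1 (got n={n}, k={k})")
    return (2 * k * (k + 1)) / (n - k - 1)
```

For n = 120 and k = 6 the correction is small (about 0.74), but for n = 7 and k = 6 the denominator is zero and AICc is undefined.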
Use p as the predictor count for adjusted R². Use k as the penalty parameter count in AIC, BIC, HQIC, and GIC formulas. They may match, but not always.
No. A lower RSS improves fit, but overly flexible models may overfit. Information criteria add complexity penalties, which help identify a more balanced model.
The weight is a normalized evidence score derived from the selected criterion difference. Larger weights suggest stronger relative support among the compared models.
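One common choice for such a score is the Akaike weight, w_i = exp(-Δ_i / 2) / Σ_j exp(-Δ_j / 2), where Δ_i is each model's criterion value minus the minimum. A sketch, assuming this is the weighting in use (the function name is illustrative):

```python
import math

def akaike_weights(criterion_values: list) -> list:
    """Normalized evidence weights from criterion values (AIC, AICc, or BIC).

    The best model (smallest value) gets the largest weight, and the
    weights sum to 1 across the compared models.
    """
    best = min(criterion_values)
    rel = [math.exp(-(v - best) / 2) for v in criterion_values]
    total = sum(rel)
    return [r / total for r in rel]
```

Because the weights depend only on differences, adding a constant to every criterion value leaves them unchanged.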
Yes, as long as the models are fitted to the same response data and sample size. Comparing unrelated datasets or differently defined likelihoods is not recommended.
GIC lets you control the penalty strength using a custom lambda. It is useful when you want a tailored balance between fit and simplicity beyond the standard named criteria.
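Because the named criteria differ only in penalty weight, GIC subsumes two of them: λ = 2 recovers AIC and λ = ln(n) recovers BIC. A sketch (the function name is illustrative):

```python
import math

def gic(rss: float, n: int, k: int, lam: float) -> float:
    """Generalized information criterion: -2 log(L) + lambda * k."""
    neg2ll = n * (math.log(2 * math.pi) + 1 + math.log(rss / n))
    return neg2ll + lam * k

n, rss, k = 120, 84.6, 6
aic_like = gic(rss, n, k, 2)            # lambda = 2 reproduces AIC
bic_like = gic(rss, n, k, math.log(n))  # lambda = ln(n) reproduces BIC
```

Intermediate λ values give penalties between AIC's and BIC's, and larger λ values select even simpler models.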
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.