Model Selection Calculator

Analyze candidate models with robust selection criteria and diagnostics. Balance accuracy and complexity with confidence. See rankings, formulas, examples, exports, and graphs in seconds.

Calculator inputs

Enter the current model details first. Then optionally add comparison models, one per line in CSV style:

Model Name, RSS, k

Formula used

Gaussian log-likelihood from RSS

-2 log(L) = n × [ln(2π) + 1 + ln(RSS / n)]

This form assumes normally distributed residuals and maximum likelihood estimation based on the residual sum of squares.
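As a minimal Python sketch of this formula (the function name is illustrative, not part of the calculator):

```python
import math

def neg2_log_likelihood(rss: float, n: int) -> float:
    """-2 log(L) for Gaussian residuals with the ML variance estimate RSS / n."""
    return n * (math.log(2 * math.pi) + 1 + math.log(rss / n))

# Example: the Linear Model row from the table below (RSS = 84.6, n = 120).
print(neg2_log_likelihood(84.6, 120))
```

Note that for fixed n this value is a monotone function of RSS, so on its own it always favors the model with the smallest RSS; the penalties below are what trade fit against complexity.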

Information criteria

AIC = -2 log(L) + 2k

AICc = AIC + [2k(k + 1)] / (n - k - 1)

BIC = -2 log(L) + k ln(n)

HQIC = -2 log(L) + 2k ln(ln(n))

GIC = -2 log(L) + λk
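The five criteria can be sketched together in Python; the default lambda of 2 is my assumption (with lam = 2, GIC reduces to AIC), and AICc is reported as NaN when its correction is undefined:

```python
import math

def criteria(rss: float, n: int, k: int, lam: float = 2.0) -> dict:
    """AIC, AICc, BIC, HQIC, and GIC from RSS under the Gaussian assumption above.
    lam is the GIC penalty weight (a calculator input; default 2.0 is illustrative)."""
    m2ll = n * (math.log(2 * math.pi) + 1 + math.log(rss / n))
    aic = m2ll + 2 * k
    # AICc needs n - k - 1 > 0; otherwise the correction is undefined (shown as N/A).
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1) if n - k - 1 > 0 else float("nan")
    return {
        "AIC": aic,
        "AICc": aicc,
        "BIC": m2ll + k * math.log(n),
        "HQIC": m2ll + 2 * k * math.log(math.log(n)),
        "GIC": m2ll + lam * k,
    }
```

For n = 120 and k = 6, the BIC penalty k ln(n) ≈ 28.7 is much larger than the AIC penalty 2k = 12, which is why BIC tends toward simpler models.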

Additional diagnostics

MSE = RSS / n

RMSE = √MSE

R² = 1 - RSS / TSS

Adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1)

Mallows' Cp = RSS / σ²_full - (n - 2k)

Use p as the predictor count for adjusted R². Use k as the penalty parameter count for information criteria.

How to use this calculator

  1. Enter the current model name and sample size.
  2. Provide predictor count p and penalty parameter count k.
  3. Enter RSS for the current model.
  4. Add TSS if you want R² and adjusted R².
  5. Add full model error variance if you want Mallows' Cp.
  6. Choose the selection criterion you want to minimize.
  7. Set the GIC lambda if you plan to inspect GIC.
  8. Optionally add competing models using one line per model.
  9. Click the calculate button to see rankings above the form.
  10. Use the export buttons to download CSV or PDF reports.

Example data table

| Model | n | p | k | RSS | TSS | σ²_full | Typical use |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Linear Model | 120 | 5 | 6 | 84.6 | 215.0 | 0.78 | Baseline fit |
| Reduced Model | 120 | 3 | 4 | 92.3 | 215.0 | 0.78 | Lower complexity |
| Spline Model | 120 | 7 | 8 | 79.8 | 215.0 | 0.78 | Flexible nonlinear fit |
| Regularized Model | 120 | 4 | 5 | 81.4 | 215.0 | 0.78 | Balanced option |
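Feeding the example rows through the AIC formula above and sorting sketches the ranking step of the workflow (a shared n of 120 and AIC as the criterion are assumed from the table; this is an illustration, not the calculator's internals):

```python
import math

# Example table rows as (name, RSS, k); n = 120 for every model.
models = [
    ("Linear Model", 84.6, 6),
    ("Reduced Model", 92.3, 4),
    ("Spline Model", 79.8, 8),
    ("Regularized Model", 81.4, 5),
]
n = 120

def aic(rss: float, k: int) -> float:
    """AIC from RSS via the Gaussian -2 log(L) formula above."""
    return n * (math.log(2 * math.pi) + 1 + math.log(rss / n)) + 2 * k

# Lower AIC is better, so rank ascending.
ranked = sorted(models, key=lambda m: aic(m[1], m[2]))
for name, rss, k in ranked:
    print(f"{name}: AIC = {aic(rss, k):.2f}")
```

On this data the Regularized Model ranks first: the Spline Model has the lowest RSS, but its extra parameters cost more than the fit they buy, which is exactly the tradeoff the criteria are designed to expose.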

FAQs

1) What does this calculator help me choose?

It helps you compare competing mathematical or statistical models using information criteria, fit measures, and optional diagnostics. Lower criterion values usually indicate a better tradeoff between fit and complexity.

2) When should I use AIC instead of BIC?

Use AIC when predictive performance matters most. Use BIC when you want a stronger penalty for complexity, especially with larger samples and more conservative model choice.

3) Why can AICc show N/A?

AICc needs the denominator n - k - 1 to stay positive. If your sample is too small relative to the number of parameters, the correction becomes undefined.

4) What is the difference between p and k?

Use p as the predictor count for adjusted R². Use k as the penalty parameter count in AIC, BIC, HQIC, and GIC formulas. They may match, but not always.

5) Do lower RSS values always mean the best model?

No. A lower RSS improves fit, but overly flexible models may overfit. Information criteria add complexity penalties, which helps identify a more balanced model.

6) What does the model weight mean?

The weight is a normalized evidence score derived from the selected criterion difference. Larger weights suggest stronger relative support among the compared models.
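One common form of such a weight is the Akaike-style weight w_i = exp(-Δ_i / 2) / Σ exp(-Δ_j / 2), where Δ_i is each model's criterion value minus the minimum. A sketch under that assumption (the calculator's exact formula may differ):

```python
import math

def weights(criterion_values: list[float]) -> list[float]:
    """Akaike-style evidence weights from a list of criterion values (lower = better)."""
    best = min(criterion_values)
    raw = [math.exp(-(v - best) / 2) for v in criterion_values]
    total = sum(raw)
    return [r / total for r in raw]
```

The weights sum to 1, the best model gets the largest weight, and models tied on the criterion share weight equally.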

7) Can I compare models with different structures?

Yes, as long as the models are fitted to the same response data and sample size. Comparing unrelated datasets or differently defined likelihoods is not recommended.

8) What is GIC useful for?

GIC lets you control the penalty strength using a custom lambda. It is useful when you want a tailored balance between fit and simplicity beyond the standard named criteria.

Related Calculators

Linear Regression Calculator
Multiple Regression Calculator
Logistic Regression Calculator
Simple Regression Calculator
Power Regression Calculator
Logarithmic Regression Calculator
R Squared Calculator
Adjusted R Squared Calculator
Slope Intercept Calculator
Correlation Coefficient Calculator

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.