Calculator Inputs
Example Data Table
| Model | Sample Size (n) | Parameters (k) | RSS | AIC | BIC |
|---|---|---|---|---|---|
| Linear A | 120 | 6 | 248.750 | 99.475 | 116.200 |
| Quadratic B | 120 | 8 | 230.410 | 94.284 | 116.584 |
| Interaction C | 120 | 10 | 227.960 | 97.001 | 124.876 |
In this example, Quadratic B has the lowest AIC and would be preferred on that criterion, while Linear A has the lowest BIC, which illustrates BIC's stronger preference for simpler models.
Formula Used
AIC: AIC = n × ln(RSS / n) + 2k
AICc: AICc = AIC + [2k(k + 1) / (n - k - 1)]
BIC: BIC = n × ln(RSS / n) + k × ln(n)
Log Likelihood: ln(L) = -(n / 2) × [ln(2π) + 1 + ln(RSS / n)]
MSE: MSE = RSS / n
RMSE: RMSE = √MSE
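The formulas above can be collected into a short function. This is a minimal sketch in Python (the name `regression_ic` is ours, not part of the calculator), following the same conventions: k counts every estimated parameter, and AICc is left undefined when n - k - 1 is zero or negative.

```python
import math

def regression_ic(n: int, k: int, rss: float) -> dict:
    """Information criteria for a least-squares regression fit.

    n   -- sample size used to fit the regression
    k   -- total number of estimated parameters
    rss -- residual sum of squares
    """
    mse = rss / n
    log_lik = -(n / 2) * (math.log(2 * math.pi) + 1 + math.log(mse))
    aic = n * math.log(mse) + 2 * k
    # AICc is undefined when n - k - 1 <= 0 (see FAQ 7)
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1) if n - k - 1 > 0 else float("nan")
    bic = n * math.log(mse) + k * math.log(n)
    return {
        "AIC": aic,
        "AICc": aicc,
        "BIC": bic,
        "logLik": log_lik,
        "MSE": mse,
        "RMSE": math.sqrt(mse),
    }

# Linear A from the example table: n = 120, k = 6, RSS = 248.750
print(regression_ic(120, 6, 248.750))
```

Note that this AIC omits the additive constant n(ln 2π + 1); that is a common convention and harmless, since the constant cancels when models are compared on the same data.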
How to Use This Calculator
- Enter a descriptive model name for easier reporting.
- Provide the sample size used to fit the regression.
- Enter the total number of estimated parameters, including intercept and variance terms when appropriate.
- Input the residual sum of squares from your fitted model.
- Optionally enter R² and adjusted R² for extra context.
- Click the calculate button to display AIC, AICc, BIC, log likelihood, MSE, and RMSE above the form.
- Use the CSV and PDF buttons to save the calculated outputs.
- Compare information criteria only across models fitted to the same response data, preferring the model with the lower values.
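The comparison step above can be sketched as follows, using the n, k, and RSS values from the example table. The dictionary layout and helper function are illustrative, not part of the calculator itself.

```python
import math

# Candidate models fitted to the same 120 observations (from the example table)
candidates = {
    "Linear A": {"n": 120, "k": 6, "rss": 248.750},
    "Quadratic B": {"n": 120, "k": 8, "rss": 230.410},
    "Interaction C": {"n": 120, "k": 10, "rss": 227.960},
}

def aic(n, k, rss):
    # AIC = n * ln(RSS / n) + 2k, as in the Formula Used section
    return n * math.log(rss / n) + 2 * k

# Rank candidates from best (lowest AIC) to worst
ranked = sorted(candidates, key=lambda name: aic(**candidates[name]))
print(ranked[0])  # the preferred model on the AIC criterion
```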
FAQs
1. What does AIC measure in regression?
AIC measures the tradeoff between model fit and complexity. It rewards lower residual error but penalizes extra parameters, helping compare competing regression models on the same dataset.
2. Is a lower AIC always better?
A lower AIC is better only when comparing models estimated on the same response variable and identical observations. It is a relative criterion, not an absolute quality score.
3. When should I use AICc instead of AIC?
Use AICc when the sample size is not much larger than the number of parameters; a common rule of thumb is n/k < 40. It adds a stronger small-sample correction and reduces overfitting risk.
4. How is BIC different from AIC?
BIC penalizes model complexity more strongly than AIC, especially as sample size grows. It often favors simpler models when several candidates fit similarly well.
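The difference in penalties can be seen directly: AIC adds 2 per parameter while BIC adds ln(n) per parameter, so BIC penalizes more heavily whenever n exceeds e² ≈ 7.4, and the gap widens as n grows. A small illustration (the sample sizes are arbitrary):

```python
import math

# Per-parameter complexity penalty: 2 for AIC, ln(n) for BIC
for n in (8, 120, 1000):
    print(n, 2, round(math.log(n), 2))  # BIC penalty: 2.08, 4.79, 6.91
```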
5. Can I compare models with different datasets?
No. Information criteria should be compared only across models fitted to the same response data and sample observations. Different datasets break the comparison logic.
6. What counts as a parameter in k?
Count every estimated coefficient, including the intercept. In many likelihood-based treatments, the variance term is also included, so stay consistent across model comparisons.
7. Why is AICc sometimes undefined?
AICc becomes undefined when n - k - 1 is zero or negative. This means the sample is too small relative to the number of estimated parameters.
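An implementation therefore needs to guard the denominator before computing the correction term. A minimal sketch (the function name `aicc_correction` is hypothetical):

```python
def aicc_correction(n: int, k: int) -> float:
    """Small-sample correction added to AIC; undefined when n - k - 1 <= 0."""
    denom = n - k - 1
    if denom <= 0:
        raise ValueError("AICc undefined: sample too small for k parameters")
    return 2 * k * (k + 1) / denom

print(aicc_correction(120, 8))  # about 1.30: n is much larger than k
```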
8. Does AIC tell me if the model is statistically significant?
No. AIC compares relative model quality, not coefficient significance. Use hypothesis tests, confidence intervals, and residual diagnostics to judge statistical reliability and assumptions.