Calculator Inputs
Example Data Table
Use these sample values to test evidence calculations and compare approximation methods.
| Model | Log-Likelihood | Log Prior | Parameters (k) | Covariance Determinant (det Σ) | Sample Size (n) | Alternative Log Evidence |
|---|---|---|---|---|---|---|
| Model A | -120.5 | -2.8 | 4 | 0.0036 | 250 | -118.9 |
| Model B | -118.1 | -3.6 | 6 | 0.0012 | 250 | -121.4 |
| Model C | -125.0 | -1.9 | 3 | 0.0095 | 180 | -126.8 |
Formula Used
The calculator supports two common approximations for Bayesian model evidence. The Laplace approximation expands the posterior around its mode and uses local curvature information.
Laplace approximation:
log p(D|M) ≈ log p(D|θ̂,M) + log p(θ̂|M) + (k/2) log(2π) + (1/2) log|Σ|
Here, θ̂ is the best-fit parameter vector, k is the number of parameters, and Σ is the posterior covariance matrix near the mode.
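The Laplace formula above can be sketched in a few lines of Python, evaluated on Model A's row from the sample table (the function name `laplace_log_evidence` is illustrative, not part of the calculator):

```python
import math

def laplace_log_evidence(log_lik, log_prior, k, cov_det):
    """Laplace approximation:
    log p(D|M) ≈ log p(D|θ̂,M) + log p(θ̂|M) + (k/2) log(2π) + (1/2) log|Σ|."""
    return (log_lik + log_prior
            + 0.5 * k * math.log(2 * math.pi)
            + 0.5 * math.log(cov_det))

# Model A from the sample table: logL=-120.5, log prior=-2.8, k=4, det Σ=0.0036
print(round(laplace_log_evidence(-120.5, -2.8, 4, 0.0036), 2))  # -122.44
```

Note that the approximation needs only the determinant of Σ, not the full matrix, which is why the calculator asks for a single covariance value.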
BIC-style approximation:
log p(D|M) ≈ log p(D|θ̂,M) − (k/2) log(n)
This second form is faster and useful when only sample size and parameter count are available, though it ignores explicit prior density and covariance structure.
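A matching sketch of the BIC-style form, again using Model A's values (function name illustrative):

```python
import math

def bic_log_evidence(log_lik, k, n):
    """BIC-style approximation: log p(D|M) ≈ log p(D|θ̂,M) − (k/2) log(n)."""
    return log_lik - 0.5 * k * math.log(n)

# Model A: the complexity penalty grows with both k and n
print(round(bic_log_evidence(-120.5, 4, 250), 2))  # -131.54
```

Comparing this with the Laplace result for the same model shows how much the two approximations can disagree when the prior and covariance terms are dropped.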
Bayes factor:
BF = exp(log evidence_selected − log evidence_alternative)
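A small sketch of the Bayes factor and the resulting posterior model probability, assuming equal prior odds for the two models (that equal-odds assumption is mine, not stated by the calculator):

```python
import math

def bayes_factor(log_ev_selected, log_ev_alternative):
    """BF = exp(difference of log evidences)."""
    return math.exp(log_ev_selected - log_ev_alternative)

def posterior_model_prob(bf):
    """Posterior probability of the selected model, assuming equal prior odds."""
    return bf / (1.0 + bf)

# Model A's Laplace log evidence (-122.44) vs its tabulated alternative (-118.9)
bf = bayes_factor(-122.44, -118.9)
print(round(bf, 3), round(posterior_model_prob(bf), 3))
```

A BF below one, as here, means the alternative model is favored over the selected one.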
How to Use This Calculator
- Enter the fitted model name for reporting clarity.
- Provide the log-likelihood evaluated at the posterior mode or best-fit estimate.
- Enter the log prior density at that same parameter point.
- Supply the number of free parameters in the model.
- For Laplace mode, enter the determinant of the posterior covariance matrix.
- Enter the sample size so the BIC approximation can also be computed.
- Optionally enter another model’s log evidence to obtain a Bayes factor and posterior model probability.
- Click Calculate Evidence to show the result directly beneath the page header.
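The input steps above can be sketched end-to-end as one hypothetical function; field and function names are illustrative and may differ from the calculator's actual implementation, and the posterior probability assumes equal prior model odds:

```python
import math

def calculate_evidence(name, log_lik, log_prior, k, cov_det, n, alt_log_ev=None):
    """Compute both evidence approximations from the form inputs,
    then optionally a Bayes factor against an alternative model."""
    laplace = (log_lik + log_prior
               + 0.5 * k * math.log(2 * math.pi)
               + 0.5 * math.log(cov_det))
    bic = log_lik - 0.5 * k * math.log(n)
    result = {"model": name, "laplace": laplace, "bic": bic}
    if alt_log_ev is not None:
        bf = math.exp(laplace - alt_log_ev)  # BF uses the Laplace estimate here
        result["bayes_factor"] = bf
        result["posterior_prob"] = bf / (1.0 + bf)  # equal prior model odds
    return result

# Model B from the sample table
res = calculate_evidence("Model B", -118.1, -3.6, 6, 0.0012, 250, alt_log_ev=-121.4)
print({key: round(v, 3) for key, v in res.items() if key != "model"})
```

For Model B the Bayes factor comes out above one, so the selected model is favored over its tabulated alternative.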
Why Model Evidence Matters
Bayesian model evidence balances fit quality and model complexity within one probabilistic quantity. Unlike simple goodness-of-fit metrics, it rewards predictive adequacy while penalizing unsupported flexibility. This makes it valuable when comparing competing structures, priors, or feature sets under uncertainty.
Strong evidence for one model does not guarantee practical usefulness, but it does provide a principled ranking criterion. In real projects, analysts often inspect evidence alongside posterior predictive checks, residual structure, calibration, and domain constraints before choosing a final model.
FAQs
1. What is Bayesian model evidence?
It is the probability of observed data under a model, integrated over all parameter values using the prior distribution.
2. Why use log evidence instead of raw evidence?
Evidence values can become extremely small. Log values improve numerical stability and make model comparisons easier to interpret.
3. When should I use the Laplace approximation?
Use it when you know the local covariance near the posterior mode and want a richer approximation than BIC alone.
4. What does the covariance determinant represent?
It summarizes local posterior spread across parameters. Larger values imply a broader parameter region around the fitted mode.
5. What does the Bayes factor tell me?
It compares relative evidence between two models. Values above one favor the selected model, while smaller values favor the alternative.
6. Is BIC the same as exact Bayesian evidence?
No. BIC is a large-sample approximation and may differ materially when priors are informative or sample sizes are modest.
7. Can I use this for non-normal posteriors?
You can explore it, but the Laplace approximation works best when the posterior is reasonably smooth and locally Gaussian.