Calculator
Enter priors and either log marginal likelihoods or raw marginal likelihoods. The calculator normalizes priors automatically and ranks all submitted models.
Formulas used
Posterior model probability
P(Mᵢ | D) = [P(D | Mᵢ) × P(Mᵢ)] / Σⱼ [P(D | Mⱼ) × P(Mⱼ)]
Stable log form
The calculator computes the log posterior score = log(normalized prior) + log evidence
and applies the log-sum-exp trick to avoid underflow.
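The stable log form can be sketched in a few lines of Python (the function name `posterior_probs` is illustrative, not part of the calculator itself):

```python
import math

def posterior_probs(priors, log_evidences):
    """Posterior model probabilities from priors and log marginal likelihoods."""
    total_prior = sum(priors)
    # Log posterior score = log(normalized prior) + log evidence
    scores = [math.log(p / total_prior) + le
              for p, le in zip(priors, log_evidences)]
    # Log-sum-exp trick: subtract the max score before exponentiating
    # so tiny marginal likelihoods do not underflow to zero
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]
```

With the default example data, `posterior_probs([0.40, 0.30, 0.20, 0.10], [-120.50, -121.10, -122.40, -124.00])` returns roughly `[0.669, 0.276, 0.050, 0.005]`.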
Bayes factor versus best evidence model
BFᵢ,best = exp(log evidenceᵢ − max log evidence)
helps compare evidence strength independently of priors.
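A minimal sketch of this formula (the helper name is illustrative):

```python
import math

def bayes_factors_vs_best(log_evidences):
    # BF_i,best = exp(log evidence_i - max log evidence)
    # Equals 1.0 for the best-evidence model; priors play no role here
    best = max(log_evidences)
    return [math.exp(le - best) for le in log_evidences]
```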
Lift versus prior
Lift = posterior / normalized prior
shows whether the data increased or decreased support for a model.
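A short sketch of the lift calculation (hypothetical helper, not the calculator's own code):

```python
def lift_vs_prior(posteriors, priors):
    """Lift = posterior / normalized prior; above 1 means the data added support."""
    total = sum(priors)
    return [post / (p / total) for post, p in zip(posteriors, priors)]
```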
How to use this calculator
1. Choose evidence mode
Select log marginal likelihood when model evidence is reported on a log scale. Select raw marginal likelihood when you already have positive evidence values.
2. Enter each candidate model
Provide a model name, prior probability, and evidence value. Priors do not need to sum to one because the calculator normalizes them automatically.
3. Calculate and review results
Submit the form to see the result panel above the calculator. Review rankings, posterior percentages, odds, lift values, and the Plotly chart.
4. Export your output
Use the CSV button for spreadsheet work or the PDF button for reports, documentation, and model-comparison summaries.
Example data table
This example matches the default values loaded into the calculator.
| Model | Prior | Log Marginal Likelihood | Posterior Probability |
|---|---|---|---|
| Model A | 0.40 | -120.50 | 66.94% |
| Model B | 0.30 | -121.10 | 27.55% |
| Model C | 0.20 | -122.40 | 5.01% |
| Model D | 0.10 | -124.00 | 0.51% |
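Under the formulas described above, the table can be reproduced with a short script (a verification sketch, not the calculator's implementation):

```python
import math

priors = [0.40, 0.30, 0.20, 0.10]                    # Models A-D
log_evidence = [-120.50, -121.10, -122.40, -124.00]

# log(normalized prior) + log evidence, then log-sum-exp normalization
scores = [math.log(p / sum(priors)) + le for p, le in zip(priors, log_evidence)]
m = max(scores)
weights = [math.exp(s - m) for s in scores]
posteriors = [w / sum(weights) for w in weights]

for name, post in zip("ABCD", posteriors):
    print(f"Model {name}: {post:.2%}")
# Model A: 66.94%, Model B: 27.55%, Model C: 5.01%, Model D: 0.51%
```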
FAQs
1. What does posterior model probability measure?
It measures each model’s probability after combining prior belief with the evidence from the observed data. Higher values indicate stronger overall support relative to the other models in the same comparison set.
2. Do my prior probabilities need to sum to one?
No. The calculator normalizes all positive prior inputs before computing posterior probabilities, so only the relative prior weights matter.
3. Why is log evidence often better?
Log evidence is numerically stable when marginal likelihoods are tiny. It reduces underflow risk and is standard in Bayesian model comparison workflows.
4. Can the best evidence model differ from the best posterior model?
Yes. Stronger priors can change the final posterior ranking, especially when evidence differences are small. The summary panel shows both winners.
5. What does lift versus prior mean?
Lift compares posterior weight with normalized prior weight. A lift above one means the data increased support for that model relative to its prior standing.
6. How many models can I compare?
You can add model rows dynamically, so there is no hard limit. For readable output, keep the comparison set concise and well justified.
7. What does a very small posterior mean?
It means the model receives little combined support once priors and evidence are normalized against stronger alternatives in the same comparison set.
8. How should I read the dominance ratio?
The dominance ratio is the top posterior divided by the second-best posterior. Larger values suggest a clearer lead for the winning model.
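As a sketch (illustrative helper name):

```python
def dominance_ratio(posteriors):
    # Top posterior divided by the second-best posterior; larger = clearer lead
    top, second = sorted(posteriors, reverse=True)[:2]
    return top / second
```

For the default example, the ratio is about 0.6694 / 0.2755 ≈ 2.43.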