Calculator
Example Data Table
Use these sample inputs to test the calculator quickly.
| Scenario | Model | Observed values | Suggested prior | Simulations |
|---|---|---|---|---|
| Continuous measurement | Normal | 2.1, 1.9, 2.3, 2.0, 2.4, 1.8, 2.2, 2.5 | μ₀=0, κ₀=1, α₀=2, β₀=2 | 2000 |
| Event counts | Poisson | 3, 4, 2, 5, 3, 1, 6 | a₀=1, b₀=1 | 3000 |
| Success counts | Binomial | 7, 5, 6, 8, 4 | α₀=1, β₀=1, n=10 | 3000 |
Formula Used
Posterior predictive checks compare observed data to replicated data generated from the posterior.
- Posterior predictive: y_rep ~ p(y | θ), with θ ~ p(θ | y_obs).
- Statistic p-value: p = P(T(y_rep) ≥ T(y_obs) | y_obs) (the tail depends on your choice).
- Discrepancy: D = Σ (y − E[y|θ])² / Var[y|θ], then p = P(D_rep ≥ D_obs).
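The statistic p-value above can be sketched end to end for the Poisson case. This is a minimal illustration, not the calculator's actual implementation; the variable names (`y_obs`, `n_sims`) and the choice of T = sample mean are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

y_obs = np.array([3, 4, 2, 5, 3, 1, 6])  # observed event counts (sample row above)
a0, b0 = 1.0, 1.0                        # Gamma prior, shape/rate parameterization
n_sims = 3000

# Conjugate update: Gamma(a0, b0) prior -> Gamma(a0 + sum(y), b0 + n) posterior
a_post = a0 + y_obs.sum()
b_post = b0 + len(y_obs)

# Draw rates theta ~ p(theta | y_obs), then replicated datasets y_rep ~ p(y | theta)
lam = rng.gamma(a_post, 1.0 / b_post, size=n_sims)
y_rep = rng.poisson(lam[:, None], size=(n_sims, len(y_obs)))

# Statistic p-value with T = mean: p = P(T(y_rep) >= T(y_obs) | y_obs)
t_obs = y_obs.mean()
t_rep = y_rep.mean(axis=1)
p_value = (t_rep >= t_obs).mean()
print(p_value)
```

A p-value near 0.5 here would indicate the observed mean is typical of the fitted model; changing `>=` to `<=` gives the lower tail.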
This tool uses conjugate updates: Normal–Inverse-Gamma for Normal data, Gamma for Poisson rates, and Beta for Binomial probabilities.
How to Use This Calculator
- Select the model that matches your observed data type.
- Enter observed values as a comma-separated list.
- Set prior parameters; keep them weak if uncertain.
- Choose statistics and a p-value tail option.
- Run the check and review p-values and intervals.
- Export CSV for records or share the PDF report.
FAQs
1) What is a posterior predictive check?
It compares your observed data to data simulated from the fitted Bayesian model. If many simulated summaries look unlike the observed summaries, model mismatch is likely.
2) How should I interpret the p-values?
Values near 0.5 mean the observed statistic is typical under the model. Very small or very large values can indicate misfit for that statistic, especially if repeated across several checks.
3) Why check multiple statistics?
Different statistics detect different failures. Mean checks location, standard deviation checks spread, and extremes check tails. Using several helps you see where the model fits well or struggles.
4) What does the discrepancy check add?
It conditions each comparison on sampled parameters, using expected value and variance under those parameters. This can highlight dispersion issues and provides a complementary p-value.
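For the Poisson model the discrepancy is easy to make concrete, since E[y|λ] = Var[y|λ] = λ. The sketch below conditions D on each sampled rate, as described above; the prior and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

y_obs = np.array([3, 4, 2, 5, 3, 1, 6])
n_sims = 3000

# Posterior draws of the rate under a Gamma(1, 1) prior (shape/rate)
lam = rng.gamma(1 + y_obs.sum(), 1.0 / (1 + len(y_obs)), size=n_sims)

# D = sum((y - E[y|theta])^2 / Var[y|theta]), evaluated per posterior draw
d_obs = ((y_obs[None, :] - lam[:, None]) ** 2 / lam[:, None]).sum(axis=1)

y_rep = rng.poisson(lam[:, None], size=(n_sims, len(y_obs)))
d_rep = ((y_rep - lam[:, None]) ** 2 / lam[:, None]).sum(axis=1)

# Discrepancy p-value: p = P(D_rep >= D_obs)
p_disc = (d_rep >= d_obs).mean()
```

A p-value far from 0.5 in this check often points to over- or under-dispersion relative to the Poisson assumption.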
5) What priors should I use?
If you lack strong knowledge, choose weakly informative priors that still rule out impossible values. Then test sensitivity by rerunning with broader or narrower priors and comparing results.
6) Can I use variable trials in the binomial model?
Yes. Choose the pairs format and enter entries like 7/10 or 3/8. The calculator will update the posterior using total successes and total trials.
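The pairs format described above amounts to summing successes and trials before the Beta update. This parser is a hypothetical sketch of that step; the function name and the comma-separated input convention are assumptions.

```python
def parse_pairs(text):
    """Parse entries like '7/10, 3/8' into (total successes, total trials)."""
    total_s, total_n = 0, 0
    for entry in text.split(","):
        s, n = entry.strip().split("/")
        total_s += int(s)
        total_n += int(n)
    return total_s, total_n

s, n = parse_pairs("7/10, 3/8")
# Beta(1, 1) prior updated with totals:
alpha_post = 1 + s        # successes
beta_post = 1 + (n - s)   # failures
```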
7) Why do results change slightly between runs?
The check uses random simulation. Increasing simulations reduces randomness. Set a seed if you want repeatable outputs for the same inputs.
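Seeding as mentioned above can be sketched with NumPy's generator API; the seed value and posterior parameters here are arbitrary placeholders.

```python
import numpy as np

# Two generators created with the same seed produce identical draws,
# so the whole check is repeatable for the same inputs.
rng1 = np.random.default_rng(123)
rng2 = np.random.default_rng(123)

draws1 = rng1.gamma(25.0, 1.0 / 8.0, size=5)
draws2 = rng2.gamma(25.0, 1.0 / 8.0, size=5)
```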
8) Does a good check prove the model is correct?
No. Passing checks only means the model can reproduce selected features of the data. Add domain reasoning, alternative models, and additional checks before relying on conclusions.