Conjugate Prior Calculator

Choose a model, enter prior parameters, and add observations to see posterior updates immediately. Download a report to keep a record of your work.

Calculator
Pick a conjugate pair, enter prior parameters, and describe observed data.
Tip: Add a seed for repeatable results.

Select the likelihood and its conjugate prior.
Common choices: 0.90, 0.95, 0.99.
More samples give smoother intervals and plots.
Use for reproducible sampling-based intervals.
Raw list uses comma/space-separated values.

Beta–Binomial inputs
alpha controls prior successes (pseudo-count).
beta controls prior failures (pseudo-count).
Number of Bernoulli trials observed.
Count of successes in the n trials.
Gamma–Poisson inputs
Higher alpha implies stronger prior evidence.
Rate beta is inverse scale (1/theta).
Sum of observed counts.
Time, area, or number of intervals.
Normal–Normal inputs
Belief about mu before seeing data.
Uncertainty of mu prior to data.
Assumed known observational noise.
Data fields depend on entry mode
Raw list uses a single input; summary uses n and xbar.
Normal–Inverse-Gamma inputs
kappa0 acts like a prior sample size for mu.
Data fields depend on entry mode
Raw list computes n, xbar, and SSD automatically.

Reset
Example data table
Conjugate pair | Example inputs | Posterior snapshot
Beta–Binomial | alpha=2, beta=2, n=20, k=13 | alpha'=15, beta'=9, mean=0.625
Gamma–Poisson | shape=3, rate=1, sum y=18, t=10 | shape'=21, rate'=11, mean≈1.909
Normal–Normal | mu0=0, tau0=2, sigma=1, n=5, xbar=0.8 | mu'≈0.762, tau'≈0.436
Normal–Inv-Gamma | mu0=0, k0=1, a0=2, b0=2, n=6 | posterior updates to mu, kappa, alpha, beta
Use these as sanity checks before applying to your own problem.
Formula used
Beta–Binomial
Prior: p ~ Beta(alpha, beta), data: k successes out of n.
alpha' = alpha + k
beta' = beta + (n - k)
E[p | data] = alpha'/(alpha' + beta')
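The Beta–Binomial update above can be sketched in a few lines of Python (the function name is illustrative, not the calculator's actual code):

```python
# Minimal sketch of the Beta-Binomial conjugate update.
def beta_binomial_update(alpha, beta, n, k):
    """Return posterior (alpha', beta') after k successes in n trials."""
    return alpha + k, beta + (n - k)

a_post, b_post = beta_binomial_update(2, 2, n=20, k=13)
posterior_mean = a_post / (a_post + b_post)
# a_post = 15, b_post = 9, posterior_mean = 0.625 (matches the example table)
```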
Gamma–Poisson
Prior: lambda ~ Gamma(alpha, beta) with rate beta, data: total count sum y over exposure t.
alpha' = alpha + sum y
beta' = beta + t
E[lambda | data] = alpha'/beta'
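Similarly, the Gamma–Poisson update reduces to two additions and one division; a sketch under the rate parameterization:

```python
# Minimal sketch of the Gamma-Poisson conjugate update (rate parameterization).
def gamma_poisson_update(alpha, beta, sum_y, t):
    """Return posterior (alpha', beta') after total count sum_y over exposure t."""
    return alpha + sum_y, beta + t

a_post, b_post = gamma_poisson_update(3, 1, sum_y=18, t=10)
posterior_mean = a_post / b_post
# a_post = 21, b_post = 11, posterior_mean = 21/11, about 1.909
```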
Normal–Normal (known sigma)
Prior: mu ~ N(mu0, tau0^2), likelihood: x ~ N(mu, sigma^2).
tau'^2 = 1/(1/tau0^2 + n/sigma^2)
mu' = tau'^2(mu0/tau0^2 + n xbar/sigma^2)
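The precision-weighted form of the Normal–Normal update can be sketched as follows (function name illustrative):

```python
# Minimal sketch of the Normal-Normal (known sigma) conjugate update.
def normal_normal_update(mu0, tau0, sigma, n, xbar):
    """Return posterior (mu', tau') for the mean with known noise sigma."""
    prec = 1 / tau0**2 + n / sigma**2          # posterior precision
    tau_post = prec ** -0.5                    # posterior standard deviation
    mu_post = (mu0 / tau0**2 + n * xbar / sigma**2) / prec
    return mu_post, tau_post

mu_post, tau_post = normal_normal_update(0.0, 2.0, 1.0, n=5, xbar=0.8)
# precisions are 0.25 and 5.00, so mu_post = 4/5.25, about 0.762,
# and tau_post = 5.25**-0.5, about 0.436
```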
Normal–Inverse-Gamma
mu|sigma^2 ~ N(mu0, sigma^2/kappa0), sigma^2 ~ InvGamma(alpha0, beta0).
kappa' = kappa0 + n
mu' = (kappa0 mu0 + n xbar)/kappa'
alpha' = alpha0 + n/2
beta' = beta0 + 0.5 SSD + kappa0 n (xbar-mu0)^2/(2kappa')
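The four Normal–Inverse-Gamma updates can be sketched together; the xbar and ssd values below are made-up illustrative summaries, not from the example table:

```python
# Hedged sketch of the Normal-Inverse-Gamma conjugate update.
def nig_update(mu0, kappa0, alpha0, beta0, n, xbar, ssd):
    """Return posterior (mu', kappa', alpha', beta')."""
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ssd + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

# xbar=0.5 and ssd=3.0 are hypothetical data summaries for illustration
mu_n, kappa_n, alpha_n, beta_n = nig_update(0.0, 1.0, 2.0, 2.0,
                                            n=6, xbar=0.5, ssd=3.0)
# alpha_n = 2 + 6/2 = 5, so E[sigma^2 | data] = beta_n / (alpha_n - 1) exists
```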
This calculator estimates credible intervals by sampling from the posterior and taking empirical quantiles; with enough samples this closely approximates the exact interval for every supported conjugate family.
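The sampling-then-quantile approach can be sketched with the standard library alone, assuming a Beta posterior (the calculator's internals may differ):

```python
# Sketch of a sampling-based central credible interval for a Beta posterior.
import random
import statistics

random.seed(42)  # fixed seed -> reproducible bounds across reruns
draws = sorted(random.betavariate(15, 9) for _ in range(40_000))

level = 0.95
lo = draws[int((1 - level) / 2 * len(draws))]       # 2.5% empirical quantile
hi = draws[int((1 + level) / 2 * len(draws)) - 1]   # 97.5% empirical quantile
# (lo, hi) approximates the central 95% interval for p given the data
```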
How to use this calculator
  1. Select the conjugate pair that matches your likelihood.
  2. Enter prior parameters that encode your belief before data.
  3. Provide observed data using raw or summary entry mode.
  4. Choose a credible level and number of samples.
  5. Submit to view posterior parameters and summaries.
  6. Download CSV or PDF to keep a record.
For fastest results, start with default examples and adjust one parameter at a time.

Why conjugate updating is fast

This tool uses closed-form parameter updates, so computation is O(1) after you summarize the data. For Beta–Binomial you only need n and k; for Gamma–Poisson you need the total count and exposure; for the Normal families you need n, the mean, and sometimes the SSD. Sampling is used only for interval estimates and plots. For most models, a handful of arithmetic operations produce the posterior parameters.

Interpreting prior strength with pseudo-counts

In Beta–Binomial, alpha+beta behaves like prior trials. For example alpha=2 and beta=2 acts like 4 prior observations centered at 0.50. After n=20 and k=13, the posterior alpha'=15 and beta'=9 totals 24, so the data weight dominates while the prior still stabilizes small samples. If you set alpha=20 and beta=20, your prior adds 40 trials and shrinks extreme k/n values toward 0.50.
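The shrinkage effect of a stronger prior is easy to check numerically; a small sketch comparing the two priors mentioned above:

```python
# Posterior mean of p under weak vs strong Beta priors for the same data.
def posterior_mean(alpha, beta, n, k):
    return (alpha + k) / (alpha + beta + n)

weak = posterior_mean(2, 2, n=20, k=13)      # 15/24 = 0.625
strong = posterior_mean(20, 20, n=20, k=13)  # 33/60 = 0.550
# the 40 pseudo-trials of the strong prior pull the estimate toward 0.50
```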

Rate learning in count processes

For Poisson counts, the posterior mean is (alpha+sum y)/(beta+t). With alpha=3, beta=1, sum y=18, t=10, the posterior mean is 21/11≈1.909 per unit exposure. Doubling the exposure to t=20 with no new events drops the mean to 21/21=1.0, and the posterior variance, alpha'/(beta+t)^2, shrinks accordingly. If sum y rises to 28 at the same t=10, the mean becomes 31/11≈2.818, showing sensitivity to event totals.
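The sensitivity of the posterior rate to exposure and event totals can be checked directly:

```python
# How the Gamma-Poisson posterior mean responds to exposure vs event totals.
def poisson_posterior_mean(alpha, beta, sum_y, t):
    return (alpha + sum_y) / (beta + t)

base = poisson_posterior_mean(3, 1, sum_y=18, t=10)           # 21/11, ~1.909
more_exposure = poisson_posterior_mean(3, 1, sum_y=18, t=20)  # 21/21 = 1.0
more_events = poisson_posterior_mean(3, 1, sum_y=28, t=10)    # 31/11, ~2.818
```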

Known-noise mean estimation

With Normal–Normal, the posterior precision adds: 1/tau0^2 + n/sigma^2. If tau0=2, sigma=1, and n=5, then the precisions are 0.25 and 5.00, giving tau'≈0.436. The posterior mean is a weighted average of mu0 and xbar, with weights proportional to these precisions. When n doubles from 5 to 10, tau' falls to about 0.312, narrowing uncertainty by roughly 28%.

Learning variance with Normal–Inverse-Gamma

When sigma^2 is unknown, alpha and beta track uncertainty. Each new point increases alpha by 0.5, and beta absorbs both within-sample spread (0.5·SSD) and mean shift (kappa0·n·(xbar−mu0)^2/(2·kappa')). This separates noise learning from mean learning in a numerically stable way. With alpha0=2 and n=6, alpha' becomes 5, so the posterior mean of sigma^2 exists and equals beta'/(alpha'−1).

Credible intervals and sampling quality

Intervals are computed from posterior draws using your credible level, such as 0.95 for the central 95%. With 40,000 samples, the Monte Carlo standard error for a tail probability is about sqrt(p(1−p)/N)≈0.0008 at p=0.025. Increase samples for quantile stability and smoother Plotly density. A fixed seed makes charts and bounds match across reruns.
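The standard-error estimate above follows from the binomial variance of an indicator; a quick sketch:

```python
# Monte Carlo standard error of a tail-probability estimate from N draws.
import math

def mc_tail_error(p, n_samples):
    """Binomial standard error sqrt(p(1-p)/N) for an indicator mean."""
    return math.sqrt(p * (1 - p) / n_samples)

err = mc_tail_error(0.025, 40_000)   # about 0.00078
err_more = mc_tail_error(0.025, 160_000)
# quadrupling the samples halves the error, since it scales as 1/sqrt(N)
```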

FAQs

1) What is a conjugate prior?

A conjugate prior is chosen so the posterior stays in the same distribution family after observing data, giving simple parameter updates and fast inference.

2) Why are my credible intervals slightly different each run?

Intervals are estimated from random posterior samples. Change the seed for a new draw, or set a fixed seed to reproduce the same bounds and Plotly shape.

3) Should I enter raw data or summary data?

Use raw data when you have a short list and want automatic summaries. Use summary mode for large datasets when you already know n, mean, and SSD.

4) What does “rate” mean in the Gamma model?

This calculator uses the rate parameterization: mean = alpha/beta. If you work with scale theta, convert using beta = 1/theta.

5) How do I interpret alpha and beta in Beta–Binomial?

Think of alpha-1 as prior successes and beta-1 as prior failures when both exceed 1. Their sum reflects prior strength, and their ratio sets the prior mean.

6) Can I use this for decision-making thresholds?

Yes. Use the posterior mean or the credible interval to compare against a target value, and export results so the chosen prior assumptions remain auditable.

Related Calculators

Important Note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.