Bernoulli Distribution Calculator

Model a two-outcome experiment with probability p. Explore probability, moments, and estimation. Export clean reports for study and sharing.

Calculator Inputs

Use a value from 0 to 1.
Enter p between 0 and 1.
Choose which probability to evaluate.
Select 0 or 1.
Sample mode uses the 0/1 list below.
Used for p̂, 95% Wilson CI, and log-likelihood.

Example Data Table

Scenario            p     P(X=1)  P(X=0)  E[X]  Var[X]
Fair coin heads     0.50  0.50    0.50    0.50  0.25
Quality pass rate   0.92  0.92    0.08    0.92  0.0736
Login success       0.70  0.70    0.30    0.70  0.21
Each row models one trial with two outcomes.

Formula Used

Probability mass function
P(X=x) = p^x (1−p)^(1−x),  x∈{0,1}
So P(X=1)=p and P(X=0)=1−p.
Moments and shape
E[X]=p,  Var[X]=p(1−p),  SD=√(p(1−p))
Skew=(1−2p)/SD,  ExcessKurt=(1−6p(1−p))/Var
Sample estimation (optional)
p̂ = k/n,  log L(p) = k ln p + (n−k) ln(1−p)
Wilson interval gives a stable 95% confidence range for p.
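The formulas above can be sketched in a few lines of Python. This is a minimal illustration of the PMF and moments, not the calculator's actual implementation; the function names are chosen here for clarity.

```python
import math

def bernoulli_pmf(x, p):
    """P(X=x) = p^x * (1-p)^(1-x), defined only for x in {0, 1}."""
    if x not in (0, 1):
        raise ValueError("x must be 0 or 1")
    return p**x * (1 - p)**(1 - x)

def bernoulli_moments(p):
    """Return (mean, variance, sd): E[X]=p, Var[X]=p(1-p), SD=sqrt(Var)."""
    var = p * (1 - p)
    return p, var, math.sqrt(var)
```

For example, `bernoulli_pmf(1, 0.7)` returns 0.7 and `bernoulli_moments(0.5)` returns (0.5, 0.25, 0.5), matching the fair-coin row in the example table.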

How to Use This Calculator

  1. Enter p, the probability of success in one trial.
  2. Select x as 0 or 1 to evaluate P(X=x).
  3. Optionally paste a 0/1 sample to estimate p̂.
  4. Press Submit to view results above this form.
  5. Use the CSV or PDF buttons to export the report.

Professional Notes

Binary trials and measurable risk

A Bernoulli model represents a single trial with two outcomes, coded as 1 and 0. In operations, the parameter p represents the underlying rate of success, pass, or presence. When p moves from 0.50 to 0.90, the expected value increases by 0.40, while variance drops from 0.25 to 0.09, indicating more predictable performance.
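The variance drop described above is easy to verify directly; this short check is illustrative only:

```python
def bernoulli_var(p):
    """Var[X] = p(1-p) for a Bernoulli(p) trial."""
    return p * (1 - p)

# Moving p from 0.50 to 0.90: mean rises by 0.40,
# while variance falls from 0.25 to 0.09.
print(bernoulli_var(0.50))  # 0.25
print(bernoulli_var(0.90))  # 0.09 (up to floating-point rounding)
```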

Probability outputs that support decisions

This calculator reports P(X=1)=p and P(X=0)=1−p, plus the CDF. For example, with p=0.70, the probability of success is 0.70, failure is 0.30, and the CDF at x=0 equals 0.30. These values map directly to expected counts in repeated trials, such as 70 successes per 100 attempts.
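The PMF and CDF values quoted for p=0.70 can be reproduced with a small sketch (illustrative, not the calculator's own code):

```python
def bernoulli_cdf(x, p):
    """P(X <= x): 0 below 0, 1-p on [0, 1), and 1 at or above 1."""
    if x < 0:
        return 0.0
    if x < 1:
        return 1 - p
    return 1.0

# With p = 0.70: P(X=1) = 0.70, P(X=0) = 0.30, CDF at x=0 is 0.30.
p = 0.70
print(p, 1 - p, bernoulli_cdf(0, p))
```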

Moments as performance indicators

Mean and variance summarize the distribution without listing outcomes. The standard deviation √(p(1−p)) is largest at p=0.50 (0.50) and shrinks toward 0 as p approaches 0 or 1. In reliability tracking, this helps explain why mid-range success rates produce the widest fluctuation in short samples.
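A quick scan over a few p values shows the standard deviation peaking at p=0.50, as the paragraph above notes (a sketch for intuition, not part of the tool):

```python
import math

# SD = sqrt(p(1-p)) for a sample of p values; the curve is
# symmetric around 0.5 and shrinks toward 0 at the endpoints.
sds = {p: math.sqrt(p * (1 - p)) for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
for p, sd in sds.items():
    print(f"p={p:.1f}  SD={sd:.3f}")
```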

Uncertainty quantified in bits

Entropy measures uncertainty: it is 1 bit at p=0.50, about 0.469 bits at p=0.90, and 0 bits at p=0 or 1. The plot highlights this curve and marks your current p, making it easy to compare “how uncertain” different processes are even when means differ.

Sample estimation and confidence bounds

When you provide a 0/1 sample, the tool computes p̂=k/n, where k is the number of ones. With n=50 and k=40, p̂=0.80. The 95% Wilson interval gives a stable range near the boundaries, often tighter than simple normal approximations, and is useful for reporting performance with uncertainty.
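The Wilson score interval can be computed directly from k and n; this is a standalone sketch using the standard formula, with z=1.96 for 95% coverage:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for p, from k successes in n trials."""
    phat = k / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return center - half, center + half

# Example from the text: n=50, k=40 gives p-hat = 0.80.
lo, hi = wilson_interval(40, 50)
print(f"p-hat = {40/50:.2f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note how the interval is asymmetric around p̂=0.80, which is exactly the behavior that keeps it stable near the 0 and 1 boundaries.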

Likelihood for model comparison

The log-likelihood k ln p+(n−k) ln(1−p) scores how well a proposed p explains observed data. For fixed data, higher log-likelihood indicates better fit. This is helpful when comparing candidate p values, validating assumptions, and selecting thresholds for monitoring changes over time.
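Comparing candidate p values by log-likelihood, as described above, looks like this (an illustrative sketch; the helper name is chosen here):

```python
import math

def log_likelihood(p, k, n):
    """log L(p) = k ln p + (n-k) ln(1-p) for k ones in n trials."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# For k=40, n=50 the MLE is p-hat = 0.8; it scores higher than
# nearby candidate values on the same data.
k, n = 40, 50
for p in (0.6, 0.8, 0.9):
    print(f"p={p:.1f}  logL={log_likelihood(p, k, n):.3f}")
```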

FAQs

1) What does p represent here?

p is the probability that a single trial results in outcome 1. It can represent success rate, pass probability, or event occurrence in any two-outcome setting.

2) Why can x only be 0 or 1?

The Bernoulli distribution is defined on two outcomes only. If your variable can take more values, consider a binomial model for counts or a categorical model for multiple classes.

3) What is the difference between PMF and CDF?

The PMF gives P(X=x) for a specific outcome. The CDF gives P(X≤x). For Bernoulli, the CDF at x=0 equals 1−p and at x=1 equals 1.

4) How should I interpret variance?

Variance p(1−p) measures spread. It is highest at p=0.5 and decreases toward zero as p approaches 0 or 1, meaning outcomes become more predictable.

5) What is the Wilson confidence interval used for?

It estimates an uncertainty range for p based on a 0/1 sample. It behaves well for small samples and for p values close to 0 or 1.

6) Why does log-likelihood matter?

Log-likelihood summarizes how plausible your observed sample is under a proposed p. It supports comparing models, monitoring drift in success rates, and selecting parameters that best fit the data.

Related Calculators

Discrete Distribution Calculator
Probability Plot Calculator
Random Variable Probability Calculator
Bivariate Distribution Calculator

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.