Calculator Inputs
Example Data Table
| Scenario | p | P(X=1) | P(X=0) | E[X] | Var[X] |
|---|---|---|---|---|---|
| Fair coin heads | 0.50 | 0.50 | 0.50 | 0.50 | 0.25 |
| Quality pass rate | 0.92 | 0.92 | 0.08 | 0.92 | 0.0736 |
| Login success | 0.70 | 0.70 | 0.30 | 0.70 | 0.21 |
Formula Used
P(X = x) = p^x (1 − p)^(1−x) for x ∈ {0, 1}, with mean E[X] = p and variance Var[X] = p(1 − p).
How to Use This Calculator
- Enter p, the probability of success in one trial.
- Select x as 0 or 1 to evaluate P(X=x).
- Optionally paste a 0/1 sample to estimate p̂.
- Press Submit to view results above this form.
- Use the CSV or PDF buttons to export the report.
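The quantities the calculator reports can be sketched in a few lines. This is a minimal illustration, not the calculator's actual implementation; `bernoulli_report` and its argument names are assumed:

```python
def bernoulli_report(p, x, sample=None):
    """Compute the values the calculator reports for a single Bernoulli trial.

    p: probability of success in one trial; x: outcome (0 or 1) to evaluate;
    sample: optional iterable of 0/1 values used to estimate p-hat.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must lie in [0, 1]")
    report = {
        "P(X=x)": p if x == 1 else 1.0 - p,  # PMF at the chosen outcome
        "E[X]": p,                           # mean of a Bernoulli trial is p
        "Var[X]": p * (1.0 - p),             # variance is p(1-p)
    }
    if sample is not None:
        data = list(sample)
        report["p_hat"] = sum(data) / len(data)  # sample estimate k/n
    return report
```

For example, `bernoulli_report(0.7, 1, [1, 1, 0, 1])` reports P(X=x)=0.7, E[X]=0.7, Var[X]=0.21, and p̂=0.75 for the pasted sample.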
Professional Notes
Binary trials and measurable risk
A Bernoulli model represents a single trial with two outcomes, coded as 1 and 0. In operations, the parameter p is the underlying probability of success, pass, or presence, typically estimated from an observed rate. When p moves from 0.50 to 0.90, the expected value increases by 0.40, while variance drops from 0.25 to 0.09, indicating more predictable performance.
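The shift described above can be checked directly with the mean and variance formulas (illustrative values only):

```python
# Compare E[X] = p and Var[X] = p(1-p) at two success rates
for p in (0.50, 0.90):
    mean = p              # expected value of one Bernoulli trial
    var = p * (1 - p)     # variance of one Bernoulli trial
    print(f"p={p:.2f}  E[X]={mean:.2f}  Var[X]={var:.2f}")
```

At p=0.50 the variance is 0.25; at p=0.90 it falls to 0.09, matching the figures in the text.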
Probability outputs that support decisions
This calculator reports P(X=1)=p and P(X=0)=1−p, plus the CDF. For example, with p=0.70, the probability of success is 0.70, failure is 0.30, and the CDF at x=0 equals 0.30. These values map directly to expected counts in repeated trials, such as 70 successes per 100 attempts.
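These outputs follow directly from the definition. A minimal sketch, with assumed helper names `bernoulli_pmf` and `bernoulli_cdf`:

```python
def bernoulli_pmf(x, p):
    # P(X=x) = p^x * (1-p)^(1-x) for x in {0, 1}
    if x not in (0, 1):
        raise ValueError("Bernoulli outcomes are 0 or 1")
    return p if x == 1 else 1.0 - p

def bernoulli_cdf(x, p):
    # P(X<=x): 0 below x=0, 1-p on [0, 1), and 1 at or above x=1
    if x < 0:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0
```

With p=0.70, `bernoulli_pmf(1, 0.70)` gives 0.70, `bernoulli_pmf(0, 0.70)` gives about 0.30, and `bernoulli_cdf(0, 0.70)` gives about 0.30, matching the example in the text.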
Moments as performance indicators
Mean and variance summarize the distribution without listing outcomes. The standard deviation √(p(1−p)) is largest at p=0.50 (0.50) and shrinks toward 0 as p approaches 0 or 1. In reliability tracking, this helps explain why mid-range success rates produce the widest fluctuation in short samples.
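The standard-deviation claim can be verified in one line; `bernoulli_sd` is an assumed name, not part of the calculator:

```python
from math import sqrt

def bernoulli_sd(p):
    # standard deviation sqrt(p(1-p)); maximized at p = 0.5
    return sqrt(p * (1.0 - p))
```

For example, `bernoulli_sd(0.5)` returns 0.5, while `bernoulli_sd(0.92)` (the quality pass rate in the table) returns about 0.27, consistent with the variance 0.0736 shown there.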
Uncertainty quantified in bits
Entropy measures uncertainty: it is 1 bit at p=0.50, about 0.469 bits at p=0.90, and 0 bits at p=0 or 1. The plot highlights this curve and marks your current p, making it easy to compare “how uncertain” different processes are even when means differ.
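The entropy curve sketched above can be computed as follows (the function name is an assumption for illustration):

```python
from math import log2

def bernoulli_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits;
    # taken as 0 at the endpoints p = 0 and p = 1 by convention
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)
```

This reproduces the figures in the text: 1 bit at p=0.50 and about 0.469 bits at p=0.90.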
Sample estimation and confidence bounds
When you provide a 0/1 sample, the tool computes p̂=k/n, where k is the number of ones. With n=50 and k=40, p̂=0.80. The 95% Wilson interval remains well behaved for small samples and near the boundaries p=0 and p=1, where the simple normal (Wald) approximation can fail, and is useful for reporting performance with uncertainty.
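One way to sketch the Wilson score interval; `wilson_interval` is a hypothetical name, and z=1.96 is the standard critical value for 95% coverage:

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a 0/1 sample with k successes in n trials."""
    p_hat = k / n
    z2 = z * z
    # Center is pulled from p-hat toward 0.5, which stabilizes the interval
    # near the boundaries p = 0 and p = 1
    center = (p_hat + z2 / (2 * n)) / (1 + z2 / n)
    half = (z / (1 + z2 / n)) * sqrt(p_hat * (1 - p_hat) / n + z2 / (4 * n * n))
    return center - half, center + half
```

For the example in the text, `wilson_interval(40, 50)` brackets p̂=0.80, with bounds roughly between 0.67 and 0.89.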
Likelihood for model comparison
The log-likelihood k ln p+(n−k) ln(1−p) scores how well a proposed p explains observed data. For fixed data, higher log-likelihood indicates better fit. This is helpful when comparing candidate p values, validating assumptions, and selecting thresholds for monitoring changes over time.
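The comparison described above can be sketched as follows (the helper name is assumed); for fixed data, the maximizer over p is the sample proportion k/n:

```python
from math import log

def bernoulli_loglik(k, n, p):
    # log-likelihood k*ln(p) + (n-k)*ln(1-p) of k successes in n trials
    return k * log(p) + (n - k) * log(1.0 - p)

# Score candidate values of p against the same data (k=40, n=50);
# the best-scoring candidate is the one closest to the MLE k/n = 0.8
candidates = [0.6, 0.7, 0.8, 0.9]
best = max(candidates, key=lambda p: bernoulli_loglik(40, 50, p))
```

Here `best` is 0.8, because the log-likelihood is maximized at p̂=k/n.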
FAQs
1) What does p represent here?
p is the probability that a single trial results in outcome 1. It can represent success rate, pass probability, or event occurrence in any two-outcome setting.
2) Why can x only be 0 or 1?
The Bernoulli distribution is defined on two outcomes only. If your variable can take more values, consider a binomial model for counts or a categorical model for multiple classes.
3) What is the difference between PMF and CDF?
The PMF gives P(X=x) for a specific outcome. The CDF gives P(X≤x). For Bernoulli, the CDF at x=0 equals 1−p and at x=1 equals 1.
4) How should I interpret variance?
Variance p(1−p) measures spread. It is highest at p=0.5 and decreases toward zero as p approaches 0 or 1, meaning outcomes become more predictable.
5) What is the Wilson confidence interval used for?
It estimates an uncertainty range for p based on a 0/1 sample. It behaves well for small samples and for p values close to 0 or 1.
6) Why does log-likelihood matter?
Log-likelihood summarizes how plausible your observed sample is under a proposed p. It supports comparing models, monitoring drift in success rates, and selecting parameters that best fit the data.