Analyze alpha, false positives, and family-wise risk quickly. Review corrections for many simultaneous tests easily. Make safer inference choices using practical metrics and guidance.
Enter your testing setup below. The form uses a 3-column layout on large screens, 2 columns on smaller screens, and 1 column on mobile.
Type I Error = α
Alpha is the probability of rejecting a true null hypothesis in one test.
Confidence Level = 1 − α
This shows the complement of the false positive rate for a single test.
FWER = 1 − (1 − α)^m
Here, m is the total number of tests. Assuming the tests are independent, this gives the probability of at least one false positive among them.
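The family-wise error rate formula above can be sketched in a few lines of Python. This is an illustrative helper, not the calculator's own implementation, and the function name is ours; it assumes independent tests, as the formula does.

```python
def family_wise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one false positive across m independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** m

# With alpha = 0.05 and m = 20 independent tests, the combined risk
# is already well above 50%:
print(round(family_wise_error_rate(0.05, 20), 4))  # 0.6415
```

Note how quickly the risk compounds: a single test at α = 0.05 keeps the false positive chance at 5%, but twenty such tests push the family-wise risk to roughly 64%.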
Expected False Positives = α × m₀
Here, m₀ is the number of hypotheses that are truly null.
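As a quick sketch (function name ours, not the site's), the expected count is just the per-test alpha scaled by the number of truly null hypotheses:

```python
def expected_false_positives(alpha: float, m0: int) -> float:
    """Mean number of false positives when m0 truly null hypotheses
    are each tested at level alpha."""
    return alpha * m0

# 18 truly null hypotheses tested at alpha = 0.05:
print(round(expected_false_positives(0.05, 18), 3))  # 0.9
```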
Adjusted α = α / m
This conservative approach divides the original alpha by the number of tests.
Adjusted α = 1 − (1 − α)^(1/m)
This is slightly less conservative than Bonferroni when tests are independent.
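The two corrections can be compared side by side with a short Python sketch (helper names are ours). Bonferroni divides; Šidák inverts the FWER formula, which makes it slightly less strict when tests are independent:

```python
def bonferroni_alpha(alpha: float, m: int) -> float:
    """Per-test level that bounds the FWER at alpha under any dependence."""
    return alpha / m

def sidak_alpha(alpha: float, m: int) -> float:
    """Per-test level that makes the FWER exactly alpha for
    independent tests."""
    return 1 - (1 - alpha) ** (1 / m)

# For alpha = 0.05 and m = 20 tests:
print(round(bonferroni_alpha(0.05, 20), 6))  # 0.0025
print(round(sidak_alpha(0.05, 20), 6))       # 0.002561
```

The Šidák threshold is always at least as large as the Bonferroni one, so it rejects slightly more often while still controlling the family-wise error rate under independence.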
Empirical Type I Error = False Positives / Null Trials
Use this when you have repeated null-only simulations or historical calibration results.
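If you do have null-only simulation counts, the empirical rate is a simple ratio. A minimal sketch (function name ours), with a guard against an empty denominator:

```python
def empirical_type_i_error(false_positives: int, null_trials: int) -> float:
    """Observed rejection rate across trials run under a true null."""
    if null_trials <= 0:
        raise ValueError("null_trials must be positive")
    return false_positives / null_trials

# e.g. 53 rejections observed in 1000 null-only simulation runs:
print(empirical_type_i_error(53, 1000))  # 0.053
```

An empirical rate noticeably above your nominal alpha suggests the test is miscalibrated for your data.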
Step 1: Enter your alpha level. This is your allowed false positive rate for one test.
Step 2: Add the total number of tests performed in your experiment, feature screen, or model comparison pipeline.
Step 3: Estimate how many tests are truly null. If unsure, use a cautious value close to the total number of tests.
Step 4: Enter observed rejections to estimate how large the false discovery share might be.
Step 5: Choose a correction method. Bonferroni is stricter. Šidák is slightly less strict under independence.
Step 6: Optionally add simulation counts for empirical Type I error. Then submit to view metrics above the form.
Example scenario: α = 0.05, total tests = 20, true null tests = 18, observed rejections = 6, Bonferroni selected.
| Metric | Example Value |
|---|---|
| Per-test Type I error | 5.00% |
| Confidence level | 95.00% |
| Family-wise error rate | 64.15% |
| At least one false positive among true nulls | 60.28% |
| Expected false positives | 0.900 |
| Bonferroni adjusted alpha | 0.002500 |
| Šidák adjusted alpha | 0.002561 |
| Estimated false discovery proportion | 15.00% |
A Type I error happens when you reject a null hypothesis that is actually true. It is also called a false positive.
Each additional test creates another chance for a false positive. Even if each test uses the same alpha, the combined risk rises.
Use Bonferroni when you want a simpler and more conservative correction. Šidák can be slightly less strict when tests are approximately independent.
No. It evaluates false positive risk from alpha, number of tests, and optional simulation counts. It does not estimate a p-value for raw data.
It is the number of hypotheses you believe are actually null in reality. This value affects expected false positives under your testing plan.
It is expected false positives divided by observed rejections. It gives a rough idea of how many findings might be false among significant results.
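This ratio can be sketched directly from the earlier definitions (function name ours); capping at 1 keeps the proportion interpretable when rejections are scarce:

```python
def estimated_fdp(alpha: float, m0: int, rejections: int) -> float:
    """Rough share of observed rejections expected to be false positives:
    (alpha * m0) / rejections, capped at 1."""
    if rejections <= 0:
        raise ValueError("rejections must be positive")
    return min(1.0, (alpha * m0) / rejections)

# Example scenario: alpha = 0.05, 18 truly null tests, 6 rejections:
print(round(estimated_fdp(0.05, 18, 6), 2))  # 0.15
```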
No. They are optional. Add them only when you have null-only simulations or repeated calibration runs and want empirical Type I error.
That depends on your field and cost of false positives. Smaller alpha values reduce risk but also make significance harder to achieve.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of the results. Please consult other sources as well.