Calculator Form
Example Data Table
| Baseline Rate | Minimum Detectable Effect (MDE) | Confidence | Power | Variants | Typical Use |
|---|---|---|---|---|---|
| 5% | 10% relative | 95% | 80% | 2 | Simple landing page test |
| 8% | 1 percentage point | 95% | 90% | 3 | Control against two offers |
| 15% | 7% relative | 90% | 80% | 4 | Fast directional campaign test |
Formula Used
This calculator uses the normal approximation for two independent conversion proportions.
n (per variant) = ((Zα × √(2p̄(1 - p̄)) + Zβ × √(p1(1 - p1) + p2(1 - p2)))²) / (p2 - p1)²
Here, p1 is the baseline conversion rate, p2 is the expected variant rate, and p̄ is the average of p1 and p2. Zα is the critical value for the chosen confidence level, and Zβ is the critical value for the chosen statistical power. The per-variant result is multiplied by the number of variants, then inflated to cover expected data loss.
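As a cross-check, the formula can be evaluated directly. Here is a minimal Python sketch of the same normal-approximation calculation; the function name and defaults are illustrative, not part of the calculator itself:

```python
from math import sqrt, ceil
from statistics import NormalDist


def sample_size_per_variant(p1, p2, confidence=0.95, power=0.80, two_sided=True):
    """Visitors needed per variant to detect p1 -> p2 (normal approximation)."""
    alpha = 1 - confidence
    # Critical value from confidence (two-sided splits alpha across both tails).
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2 if two_sided else 1 - alpha)
    # Critical value from statistical power.
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)
```

For the first table row (5% baseline, 10% relative lift, so p2 = 0.055, at 95% confidence and 80% power) this works out to roughly 31,000 visitors per variant before any adjustment for variants or data loss.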
How To Use This Calculator
- Enter the main Adobe Target success metric.
- Add the current baseline conversion rate.
- Enter the smallest useful effect you want to detect.
- Choose relative or absolute effect type.
- Select confidence, power, direction, and variants.
- Add available daily traffic and expected data loss.
- Submit the form and review the result above.
- Download the CSV or PDF report for planning records.
Plan Better Adobe Target Experiments
A sample size plan protects an experiment from weak evidence. It sets a clear traffic goal before a campaign starts. This calculator estimates how many visitors each variant needs. It uses baseline conversion, desired lift, confidence, and power. These inputs help teams avoid tests that stop too early.
Why Sample Size Matters
Small tests can look exciting by chance. Large tests can waste traffic when the effect is obvious. A balanced plan gives a better middle path. Adobe Target users often compare one control with several experiences. Each added experience divides traffic. That split can increase the required calendar time.
Useful Planning Inputs
Start with a realistic baseline conversion rate. Use recent analytics for the same page, audience, and goal. Then choose the minimum detectable effect. This is the smallest lift worth acting on. A tiny lift needs more visitors. A larger lift needs fewer visitors. Choose confidence and power with care. Higher values reduce risk, but they increase sample size.
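The relative-versus-absolute choice determines how the expected variant rate is derived from the baseline. A small sketch, assuming the definitions given in the FAQ below (the helper name is hypothetical):

```python
def variant_rate(baseline, mde, relative=True):
    """Expected variant rate p2 from baseline p1 and minimum detectable effect.

    relative=True: mde is a fraction of baseline (0.10 = 10% relative lift).
    relative=False: mde is an absolute change, expressed as a proportion
    (0.01 = 1 percentage point).
    """
    return baseline * (1 + mde) if relative else baseline + mde


# variant_rate(0.05, 0.10)                 -> ~0.055 (10% relative lift on 5%)
# variant_rate(0.08, 0.01, relative=False) -> ~0.09  (1 pp absolute lift on 8%)
```

Note how different the two readings are at low baselines: a 10% relative lift on a 5% baseline is only half a percentage point, while a 10-point absolute lift would triple the rate.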
Reading The Result
The result shows visitors per variant, total visitors, and estimated days. The calculator also adjusts for expected data loss. Use this when consent rules, bot filtering, or tracking gaps may remove visits. If Bonferroni correction is enabled, the tool lowers alpha for multiple challenger comparisons. This makes the plan stricter when many experiences are tested against one control.
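The two adjustments described above can be sketched as follows. The exact inflation method used inside the calculator is not shown, so this is one plausible implementation, not the tool's own code:

```python
from math import ceil


def adjust_plan(n_per_variant, variants, data_loss=0.0):
    """Scale the per-variant sample for variant count and expected data loss.

    data_loss is the expected fraction of visits removed by consent rules,
    bot filtering, or tracking gaps (0.10 = 10% lost).
    """
    total = n_per_variant * variants
    # Inflate so the traffic that survives the loss still meets the target.
    return ceil(total / (1 - data_loss))


def bonferroni_alpha(alpha, challengers):
    """Bonferroni correction: split alpha across challenger-vs-control tests."""
    return alpha / challengers
```

With three challengers at 95% confidence, the corrected alpha is 0.05 / 3, which feeds back into a larger Zα and therefore a larger required sample.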
Physics Style Thinking
Physics work values controlled measurement. Experiment planning follows the same idea. Hold the goal steady. Define the signal. Estimate background noise. Then collect enough observations to detect the signal. Conversion rate noise comes from random visitor behavior. Sample size reduces that uncertainty.
Practical Advice
Do not change the main goal after launch. Avoid stopping the test because one day looks strong. Let the planned sample finish unless there is a clear operational issue. Review audience overlap, campaign priority, and traffic allocation before launch. Export the result as CSV or PDF for records. Share the assumptions with analysts, marketers, and developers. Good documentation makes later decisions easier. It also helps teams compare future tests with past campaigns. Keep assumptions visible. Revisit them when traffic, audience mix, or offer value changes. This keeps the launch plan honest and useful for everyone involved.
FAQs
What does this calculator estimate?
It estimates visitors needed per variant and total test traffic. It also estimates runtime by using available daily visitors and traffic allocation.
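The runtime estimate follows from the total sample and the traffic actually eligible for the activity. A sketch under the assumption that allocation is the fraction of daily visitors entered into the test:

```python
from math import ceil


def estimated_days(total_visitors, daily_traffic, allocation=1.0):
    """Rough runtime: total required visitors / eligible daily visitors."""
    return ceil(total_visitors / (daily_traffic * allocation))


# 69,409 total visitors, 5,000 daily visitors, 50% allocation -> 28 days
```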
Can I use it for Adobe Target A/B tests?
Yes. It is built for planning Adobe Target conversion experiments where a control is compared with one or more experiences.
What is baseline conversion rate?
It is the current conversion rate for the metric you want to improve. Use recent and relevant analytics data.
What is minimum detectable effect?
It is the smallest improvement that would matter for your decision. Smaller effects usually require much larger sample sizes.
Should I choose relative or absolute lift?
Choose relative lift for percent improvement over baseline. Choose absolute lift for direct percentage point changes.
Why does power increase sample size?
Higher power lowers the chance of missing a real effect. That stronger protection needs more visitors.
What does Bonferroni correction do?
It adjusts alpha when several challengers are compared with one control. This reduces false positives but increases required sample size.
Can I stop the test early?
Avoid stopping early only because results look favorable. Early stopping can inflate error and create misleading decisions.