Calculator Inputs
Use the method selector, enter your values, then submit. Results appear above this form, just below the page header.
Example Data Table
| Method | Inputs | Effective Sample Size | Design Effect / Factor | Interpretation |
|---|---|---|---|---|
| Weighted ESS | Weights: 1, 1, 1, 2, 2 | 4.4545 | 1.1224 | Unequal weights reduce information below the nominal count of 5. |
| Autocorrelation ESS | n = 100, rhos = 0.30, 0.20, 0.10 | 45.4545 | 2.2000 | Dependence lowers independent information to about 45.45 observations. |
| Cluster ESS | n = 240, average cluster = 6, ICC = 0.08 | 171.4286 | 1.4000 | Cluster similarity inflates variance and reduces usable precision. |
Formulas Used
1) Weighted effective sample size
Use this when observations have unequal importance or survey weights.
This is the classic Kish approximation, ESS = (Σ wᵢ)² / Σ wᵢ². It converts an unequal-weight sample into an equally weighted sample carrying comparable information.
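The Kish formula above can be sketched in a few lines. This is a minimal illustration, not the calculator's own implementation; the function name `weighted_ess` is chosen here for clarity.

```python
def weighted_ess(weights):
    """Kish effective sample size: (sum of weights)^2 / sum of squared weights."""
    total = sum(weights)
    total_sq = sum(w * w for w in weights)
    return total * total / total_sq

weights = [1, 1, 1, 2, 2]
ess = weighted_ess(weights)       # 49 / 11 ≈ 4.4545, matching the table row
deff = len(weights) / ess         # ≈ 1.1224, the Kish design effect
```

Note that the result depends only on the relative imbalance of the weights, not their absolute scale.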
2) Autocorrelation effective sample size
Use this when observations are correlated across time, iteration, or sequence.
The denominator is the integrated autocorrelation factor, so ESS = n / (1 + 2 Σ ρₖ) over the supplied lags. Larger positive autocorrelation lowers ESS because repeated observations carry overlapping information.
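As a sketch of the calculation described above (the function name `autocorr_ess` is illustrative, not part of the calculator):

```python
def autocorr_ess(n, rhos):
    """ESS = n / (1 + 2 * sum of lag autocorrelations)."""
    factor = 1 + 2 * sum(rhos)  # integrated autocorrelation factor
    return n / factor

# Reproduces the example table row: n = 100, rhos = 0.30, 0.20, 0.10
ess = autocorr_ess(100, [0.30, 0.20, 0.10])  # 100 / 2.2 ≈ 45.4545
```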
3) Cluster design effective sample size
Use this when units are grouped inside clusters such as schools, clinics, or households.
Here m is the average cluster size and ICC measures within-cluster similarity, giving ESS = n / (1 + (m − 1) · ICC). Higher ICC means less independent information.
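The cluster adjustment can be sketched the same way; `cluster_ess` is an illustrative name, and the inputs mirror the example table.

```python
def cluster_ess(n, m, icc):
    """ESS = n / deff, where deff = 1 + (m - 1) * ICC."""
    deff = 1 + (m - 1) * icc  # cluster design effect
    return n / deff

# n = 240, average cluster size 6, ICC = 0.08
ess = cluster_ess(240, 6, 0.08)  # 240 / 1.4 ≈ 171.4286
```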
How to Use This Calculator
- Choose the correct method for your data structure.
- For weighted ESS, paste all sample weights.
- For autocorrelation ESS, enter nominal n and lag autocorrelations.
- For cluster ESS, enter nominal n, average cluster size, and ICC.
- Press the calculate button to generate the result.
- Review ESS, design effect, and information-retained metrics above the form.
- Use the CSV or PDF buttons to save your result.
FAQs
1) What is effective sample size?
Effective sample size estimates how many fully independent, equally informative observations your data represents after weighting, clustering, or autocorrelation reduces usable information.
2) Why can ESS be smaller than the raw sample size?
Unequal weights, repeated dependence, or cluster similarity make observations overlap in information. ESS shrinks because your data contains less independent signal than the nominal count suggests.
3) Can ESS ever be larger than n?
Yes. In autocorrelation settings with strong negative correlation, ESS can exceed the raw count. That means the sequence can cancel noise and behave more efficiently than independent draws.
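A quick numeric check of this point, using the autocorrelation formula with a hypothetical negative lag-1 correlation (the value −0.30 is chosen only for illustration):

```python
# With a negative lag-1 correlation, the integrated autocorrelation
# factor 1 + 2 * rho drops below 1, so ESS rises above the raw n.
n = 100
rho1 = -0.30                  # hypothetical antithetic-style dependence
ess = n / (1 + 2 * rho1)      # 100 / 0.4 = 250.0, larger than n
```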
4) Which method should I choose?
Use weighted ESS for unequal weights, autocorrelation ESS for time series or chains, and cluster ESS when observations are grouped and share an intraclass correlation.
5) Do weights need to sum to one?
No. The weighted formula works with raw nonnegative weights. It depends on relative imbalance, so multiplying every weight by the same constant leaves ESS unchanged.
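The scale invariance claimed above is easy to verify numerically; this sketch reuses the Kish formula with one set of weights and the same weights multiplied by 100:

```python
def weighted_ess(weights):
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# Multiplying every weight by the same constant leaves ESS unchanged.
base   = weighted_ess([1, 1, 1, 2, 2])
scaled = weighted_ess([100, 100, 100, 200, 200])
```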
6) What autocorrelations should I enter?
Enter the lag correlations you trust, usually beginning at lag one. Include only the lags used in your own truncation rule, diagnostic summary, or estimation workflow.
7) What does design effect mean?
Design effect shows how much variance increases relative to simple random sampling. ESS equals the nominal size divided by this variance inflation factor.
8) Is ESS enough for final inference?
No. ESS is a compact summary. Final inference should still use the correct survey, time-series, or Bayesian model for intervals, tests, and uncertainty estimates.