Test for small-study effects with robust bias diagnostics. Compare regression intercepts, rank correlations, z scores, and funnel symmetry to turn study inputs into clearer meta-analysis evidence checks.
Enter one study per line as `Study, Effect, SE` or `Effect, SE`. Tabs, commas, semicolons, and pipes are all accepted as delimiters.
| Study | Effect | SE | Interpretation note |
|---|---|---|---|
| Study A | 0.42 | 0.10 | Moderate positive estimate with strong precision. |
| Study B | 0.31 | 0.12 | Smaller effect with acceptable precision. |
| Study C | 0.27 | 0.11 | Smaller-study result close to pooled center. |
| Study D | 0.55 | 0.14 | Larger observed effect and lower precision. |
| Study E | 0.18 | 0.09 | Low effect but relatively stable error. |
| Study F | 0.61 | 0.15 | Potential outlier with wider uncertainty. |
| Study G | 0.35 | 0.10 | Balanced result near the central trend. |
| Study H | 0.49 | 0.13 | Useful for checking funnel symmetry. |
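The flexible input format described above (optional label, multiple delimiters) can be sketched as follows. This is a minimal illustration, not the calculator's actual code, and the function name `parse_studies` is hypothetical:

```python
import re

def parse_studies(text):
    """Split each line on tabs, commas, semicolons, or pipes into
    (label, effect, se) tuples; unlabeled lines get 'Study N'."""
    studies = []
    for i, line in enumerate(text.strip().splitlines(), start=1):
        parts = [p.strip() for p in re.split(r"[\t,;|]", line) if p.strip()]
        if len(parts) == 3:               # Study, Effect, SE
            label, effect, se = parts[0], float(parts[1]), float(parts[2])
        elif len(parts) == 2:             # Effect, SE
            label, effect, se = f"Study {i}", float(parts[0]), float(parts[1])
        else:
            continue                      # skip malformed lines
        studies.append((label, effect, se))
    return studies

rows = parse_studies("Study A, 0.42, 0.10\n0.31; 0.12")
```

Mixing delimiters across lines, as in the example call, is handled line by line.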
These are practical screening calculations: they help identify patterns that may warrant deeper meta-analytic review.
The calculator screens for possible publication bias or small-study effects in meta-analysis data, combining Egger's regression test, Begg's rank correlation, fail-safe estimates, heterogeneity output, and a funnel plot.
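Egger's regression test can be written in a few lines: regress each study's standardized effect (effect/SE) on its precision (1/SE) and test whether the intercept differs from zero. The sketch below uses ordinary least squares in plain Python; it illustrates the standard formulation, not necessarily this calculator's exact implementation:

```python
import math

def egger_test(effects, ses):
    """Egger's regression: standardized effect (effect/SE) on precision (1/SE).
    A nonzero intercept signals funnel asymmetry.
    Returns (intercept, t statistic with k - 2 degrees of freedom)."""
    x = [1.0 / s for s in ses]                    # precision
    y = [e / s for e, s in zip(effects, ses)]     # standardized effect
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # residual variance, then the standard error of the intercept
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    s2 = sse / (n - 2)
    se_intercept = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return intercept, intercept / se_intercept
```

The t statistic is compared against a t distribution with k − 2 degrees of freedom to get the p-value the calculator reports.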
A significant Egger intercept is not proof of publication bias. It suggests funnel-plot asymmetry, but asymmetry can also arise from heterogeneity, selective outcome reporting, poor study quality, chance, or effect modification.
Begg's test uses rank correlation rather than regression, so it gives a second perspective. Agreement between the two tests strengthens concern, while disagreement suggests caution and closer inspection.
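Begg's rank-correlation approach can be sketched like this: compute Kendall's tau between each study's standardized deviate (from the fixed-effect pooled estimate) and its variance. This is a minimal illustration of the standard method (tau-a, without tie or continuity corrections), and the function names are illustrative:

```python
import math

def kendall_tau(a, b):
    """Kendall's tau-a by pairwise concordance counting."""
    n = len(a)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (a[i] - a[j]) * (b[i] - b[j])
            s += (d > 0) - (d < 0)   # +1 concordant, -1 discordant
    return 2.0 * s / (n * (n - 1))

def begg_test(effects, ses):
    """Begg's test: tau between standardized deviates and variances."""
    variances = [s * s for s in ses]
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # deviate variance, adjusted for the pooled estimate's own variance
    v_star = [v - 1.0 / sum(weights) for v in variances]
    deviates = [(e - pooled) / math.sqrt(vs)
                for e, vs in zip(effects, v_star)]
    return kendall_tau(deviates, variances)
```

A tau near zero is consistent with symmetry; a large positive or negative tau mirrors the asymmetry Egger's intercept is probing from the regression side.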
At least three valid studies are required here, but asymmetry tests are usually more stable with larger sets. With fewer than 10 studies, statistical power can be weak.
Standard errors and variances both work; just set the input selector to match. If you choose variance, the calculator converts it to standard error internally by taking the square root.
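The conversion is a one-liner; this hypothetical helper just makes the selector behavior explicit:

```python
import math

def to_standard_error(value, is_variance):
    """If the input column holds variances, convert to SE via square root;
    otherwise the value is already a standard error."""
    return math.sqrt(value) if is_variance else value
```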
The fail-safe number estimates how many missing null studies would be needed to move the combined result toward non-significance under the selected critical threshold. Larger values suggest a more robust result.
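The classic version of this idea is Rosenthal's fail-safe N. A minimal sketch, assuming study z scores are formed as effect/SE and a one-tailed critical value of 1.645 by default (the calculator's actual threshold may differ):

```python
import math

def failsafe_n(effects, ses, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unpublished null (z = 0) studies
    would pull the combined z below z_crit.
    N = (sum of z)^2 / z_crit^2 - k, floored at zero."""
    z = [e / s for e, s in zip(effects, ses)]
    k = len(z)
    n = (sum(z) ** 2) / (z_crit ** 2) - k
    return max(0, math.floor(n))
```

A common rule of thumb treats the result as reassuring when it exceeds 5k + 10, where k is the number of studies.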
A second fail-safe variant answers a practical question: how many missing studies would shift the pooled effect to a chosen trivial value, given an assumed mean effect for those missing studies.
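That description matches the approach commonly attributed to Orwin, which works on an unweighted mean effect. A minimal sketch with illustrative default values (a trivial criterion of 0.1 and missing-study mean of 0.0, both of which the user would choose):

```python
def orwin_failsafe_n(effects, criterion=0.1, mean_missing=0.0):
    """Orwin-style fail-safe N: number of missing studies (with assumed
    mean effect `mean_missing`) needed to drag the unweighted mean effect
    down to `criterion`.
    N = k * (mean - criterion) / (criterion - mean_missing)."""
    k = len(effects)
    mean = sum(effects) / k
    return max(0.0, k * (mean - criterion) / (criterion - mean_missing))
```

For example, two studies averaging 0.3 would need four missing null studies to pull the mean down to 0.1.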
Any effect metric can be used, as long as each study supplies a comparable effect estimate and its sampling error. The tool labels the output using your chosen effect name.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.