Enter survey data
Use campaign totals and quality flags to estimate usable responses, benchmark survey health, and understand whether findings are strong enough to report.
Example data table
This sample illustrates how campaign survey quality can vary by project and cleaning rules. Note that the table displays only a subset of the quality flags; Estimated Valid and the Validity Index are computed from all seven checks, so they cannot be reproduced from the displayed columns alone.
| Campaign | Invited | Completed | Attention Fail | Speeders | Duplicates | Overlap | Estimated Valid | Validity Index |
|---|---|---|---|---|---|---|---|---|
| Spring Brand Lift | 1500 | 480 | 22 | 31 | 9 | 36 | 386 | 89.06% |
| Product Awareness Pulse | 1200 | 355 | 34 | 42 | 14 | 26 | 260 | 78.44% |
| Customer Message Test | 950 | 310 | 10 | 13 | 4 | 12 | 281 | 92.18% |
Formulas used
1) Completion and participation
Start Rate = (Started Responses ÷ Invited Sample) × 100
Completion Rate = (Completed Responses ÷ Started Responses) × 100
Participation Rate = (Completed Responses ÷ Invited Sample) × 100
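The three participation formulas can be sketched in a few lines of Python. This is an illustrative helper, not the calculator's actual implementation; the started-response count in the example is hypothetical.

```python
def participation_rates(invited: int, started: int, completed: int) -> dict:
    """Return start, completion, and participation rates as percentages."""
    return {
        "start_rate": started / invited * 100,
        "completion_rate": completed / started * 100,
        "participation_rate": completed / invited * 100,
    }

# Hypothetical example: 1500 invited, 600 started, 480 completed
rates = participation_rates(1500, 600, 480)
print(rates)  # start 40.0, completion 80.0, participation 32.0
```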
2) Quality exclusions
Total Flags = Attention Failures + Logic Failures + Speeders + Straightliners + Duplicates + Critical Missing + Low-Quality Open Ends
Unique Flagged Cases = Total Flags − Overlap Adjustment
Estimated Valid Responses = Completed Responses − Unique Flagged Cases
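The exclusion arithmetic above can be expressed as a short sketch. For brevity the example passes only three of the seven flag types; in practice you would include all of them, and the numbers here are illustrative rather than taken from a real campaign.

```python
def estimated_valid(completed: int, flags: dict, overlap_adjustment: int) -> int:
    """Completed responses minus unique flagged cases.

    flags: per-check flag counts (any subset of the seven quality checks).
    overlap_adjustment: records counted under more than one flag.
    """
    total_flags = sum(flags.values())
    unique_flagged = total_flags - overlap_adjustment
    return completed - unique_flagged

# Illustrative: 480 completes, three flag types shown, 36 overlapping records
valid = estimated_valid(480, {"attention": 22, "speeders": 31, "duplicates": 9}, 36)
print(valid)  # 454
```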
3) Diagnostic pass rates
Attention Pass Rate = ((Completed − Attention Failures) ÷ Completed) × 100
Logic Pass Rate = ((Completed − Logic Failures) ÷ Completed) × 100
Time Quality Rate = ((Completed − Speeders) ÷ Completed) × 100
Straightline Pass Rate = ((Completed − Straightliners) ÷ Completed) × 100
Uniqueness Rate = ((Completed − Duplicates) ÷ Completed) × 100
Missingness Pass Rate = ((Completed − Critical Missing) ÷ Completed) × 100
Open-End Quality Rate = ((Completed − Low-Quality Open Ends) ÷ Completed) × 100
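All seven diagnostic pass rates share the same shape, so a single helper covers them. This is a sketch of the formula, not the calculator's code; the example numbers are illustrative.

```python
def pass_rate(completed: int, flagged: int) -> float:
    """Percentage of completed responses that pass a single quality check."""
    return (completed - flagged) / completed * 100

# Illustrative: 480 completes, 22 attention failures
print(round(pass_rate(480, 22), 2))  # 95.42
```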
4) Validity index
Validity Index = (Attention Pass × 0.20) + (Logic Pass × 0.18) + (Time Quality × 0.15) + (Straightline Pass × 0.12) + (Uniqueness × 0.15) + (Missingness Pass × 0.10) + (Open-End Quality × 0.10)
Why overlap matters: a single response can fail several rules. The overlap adjustment prevents double-counting those records in the final validity estimate.
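The weighted combination above can be sketched as a dictionary of weights and a dot product. The weights are the ones stated in the formula; the dictionary keys are assumed names for illustration.

```python
# Weights from the validity index formula; they sum to 1.00.
WEIGHTS = {
    "attention": 0.20,
    "logic": 0.18,
    "time": 0.15,
    "straightline": 0.12,
    "uniqueness": 0.15,
    "missingness": 0.10,
    "open_end": 0.10,
}

def validity_index(pass_rates: dict) -> float:
    """Weighted average of the seven pass rates (each 0-100)."""
    return sum(pass_rates[check] * weight for check, weight in WEIGHTS.items())

# A survey that passes every check perfectly scores 100.
perfect = {check: 100.0 for check in WEIGHTS}
print(round(validity_index(perfect), 2))  # 100.0
```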
How to use this calculator
- Enter the invited sample, started responses, and completed responses.
- Add counts for each quality issue, including attention failures, speeders, and duplicates.
- Enter an overlap adjustment when the same response appears in multiple flagged groups.
- Set your internal performance targets for completion, usable rate, and quality benchmarks.
- Submit the form to generate the validity index, usable response estimate, and benchmark table.
- Review the Plotly graph, then export results as CSV or PDF for reporting.
FAQs
1) What does this calculator measure?
It estimates survey quality by combining completion performance with data-cleaning signals such as attention failures, duplicates, speeders, straightliners, missing data, and logic conflicts.
2) Why is an overlap adjustment included?
One respondent can fail several checks at once. The overlap adjustment prevents double-counting those records, which keeps the estimated valid response count realistic.
3) What is a good validity index?
Many teams treat 80% or higher as strong, 70% to 79.99% as workable with review, and below 70% as a signal to audit the survey data.
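Those thresholds translate directly into a small banding helper. This is a sketch of the rule of thumb stated above, not a standardized scale.

```python
def validity_band(index: float) -> str:
    """Map a validity index (0-100) to a reporting band."""
    if index >= 80:
        return "strong"
    if index >= 70:
        return "workable with review"
    return "audit recommended"

print(validity_band(89.06))  # strong
print(validity_band(78.44))  # workable with review
```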
4) Should completion rate always be high?
Not always. Longer surveys, harder target audiences, or stricter screening often reduce completion. Compare completion against your normal campaign benchmarks and questionnaire length.
5) Can I use this for panel and intercept surveys?
Yes. It works for panel-based, website intercept, email, and community surveys as long as you can estimate completed responses and cleaning flags.
6) Are all quality checks weighted equally?
No. This version gives more weight to attention and logic consistency because they often affect core response credibility more directly.
7) What if I do not use open-ended questions?
Set low-quality open-end responses to zero. The calculator still works and simply treats that quality dimension as fully passed.
8) When should I hold reporting?
Pause reporting when usable rate is weak, validity index misses targets, or several benchmark checks fail. That usually means cleaning rules need review first.