Example data table
Use the examples below to sanity-check your inputs before fielding.
| Scenario | Total Q | Open Q | Skip % | Method | Typical Most Likely (min:sec) |
|---|---|---|---|---|---|
| Quick brand pulse | 12 | 1 | 5 | Seconds per question | 2:00 – 3:30 |
| Product usage tracker | 28 | 3 | 15 | Seconds per question | 6:00 – 9:00 |
| Customer feedback deep-dive | 45 | 10 | 25 | Words per minute | 10:00 – 15:00 |
| Policy compliance assessment | 75 | 12 | 30 | Words per minute | 18:00 – 28:00 |
Calculation history
Your last 50 runs are saved locally for this session and are available for export.
| Timestamp | Method | Total Q | Open Q | Skip % | Break (min) | Complexity | Device | Min | Most | Max | P90 | Avg sec/Q |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
No runs yet. Submit the form to generate results.
Formula used
Percentiles shown are linear positions within the min–max range; they are intended for planning rather than strict distributional modeling.
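A minimal sketch of that idea, assuming the percentiles are linear interpolations between the range endpoints (the function name is illustrative, not the calculator's actual internals):

```python
def range_percentile(min_minutes: float, max_minutes: float, p: float) -> float:
    """Linear position within the min-max range: p=0.5 gives the midpoint."""
    return min_minutes + (max_minutes - min_minutes) * p

# Example: a 6:00-9:00 range expressed as 6.0 to 9.0 minutes.
p75 = range_percentile(6.0, 9.0, 0.75)  # 8.25 minutes
p90 = range_percentile(6.0, 9.0, 0.90)  # about 8.7 minutes
```

Note that this says nothing about the real shape of the timing distribution; it only spreads planning points evenly across the range you already trust.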
How to use this calculator
- Enter total questions and open-ended count to reflect instrument structure.
- Estimate skip rate from routing and quotas, then add break minutes for long surveys.
- Choose a pace model. Use seconds-per-question for uniform items, or words-per-minute for text-heavy screens.
- Set complexity for grids, media, and cognitive effort, then pick the primary device.
- Press Submit to show results under the header. Export CSV or PDF for briefs and monitoring.
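The steps above can be sketched end-to-end. This is an illustrative model under stated assumptions; the default multipliers, the open-item weight, and the function name are examples, not the calculator's exact internals:

```python
def estimate_minutes(
    total_q: int,
    open_q: int,
    skip_pct: float,                # e.g. 15 means 15% of items are skipped
    sec_per_closed: float = 12.0,   # assumed default for a closed item
    open_weight: float = 4.0,       # assumption: one open item ~ four closed items
    complexity: float = 1.0,        # e.g. 1.15 for matrix-heavy designs
    device: float = 1.0,            # e.g. 1.2 for mobile (assumption)
    break_min: float = 0.0,
) -> float:
    """Rough completion-time estimate in minutes."""
    seen = 1.0 - skip_pct / 100.0           # share of items actually seen
    closed_q = total_q - open_q
    effective_items = (closed_q + open_q * open_weight) * seen
    seconds = effective_items * sec_per_closed * complexity * device
    return seconds / 60.0 + break_min

# Product usage tracker scenario: 28 questions, 3 open, 15% skip.
estimate_minutes(28, 3, 15)  # about 6.3 minutes, inside the 6:00-9:00 range
```

Treat every default here as a starting point to calibrate against pilot logs, not as a field-validated constant.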
Why completion time is a key quality metric
Completion time influences dropout, straightlining risk, and sample cost. Field teams often target a median duration under ten minutes for general audiences, while specialized studies can run longer. A stable overall estimate helps set incentives, screeners, and quotas with fewer revisions. When expected time exceeds fifteen minutes, consider splitting modules or adding breaks.
How question mix changes expected duration
Closed items are usually faster because choices are visible and bounded. Open-ended items add typing, cognitive load, and re-reading. This calculator separates closed and open counts so you can model both. In many online panels, one open item can consume time comparable to three to five closed items, depending on instruction length and validation rules. Longer open prompts can amplify time variance and increase late-stage exits.
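To make the three-to-five rule of thumb concrete, you can convert open items into closed-item equivalents. A small sketch (the weight range is the panel heuristic from above, not a measured constant):

```python
def equivalent_items(closed_q: int, open_q: int, open_weight: float) -> float:
    """Each open item counts as `open_weight` closed items (3-5 is a common heuristic)."""
    return closed_q + open_q * open_weight

# A survey with 25 closed and 3 open items:
low = equivalent_items(25, 3, 3.0)   # 34.0 item-equivalents
high = equivalent_items(25, 3, 5.0)  # 40.0 item-equivalents
```

The spread between the low and high weights is one source of the min–max range the calculator reports.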
Routing, skips, and the effective question count
Branching reduces exposure, but it also increases navigation overhead and can create longer paths for some segments. The skip rate converts total questions into an effective count that reflects typical routing. Use a conservative skip estimate when quotas force respondents into long sections. For early planning, a ten to twenty percent skip rate is common in multi-path surveys. Review routing logic after launch, because rare paths can still push some respondents well past the planned duration.
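The conversion from total to effective questions is a single multiplication. A minimal sketch (function name is illustrative):

```python
def effective_questions(total_q: int, skip_pct: float) -> float:
    """Total questions reduced by the share the average respondent never sees."""
    return total_q * (1.0 - skip_pct / 100.0)

# Deep-dive scenario: 45 questions with a 25% skip rate.
effective_questions(45, 25)  # 33.75 effective questions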
Device and complexity adjustments for real users
Mobile respondents face smaller screens and slower text entry, so time often rises even when the script is unchanged. Complexity captures grids, media playback, looping, and heavy validation. A complexity factor above 1.15 is reasonable for matrix-heavy designs, while 0.90 fits short, simple flows. Pair device and complexity to avoid underestimating peak durations. If you translate the survey into other languages, retest, because reading speed and layout can shift.
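Device and complexity act as multipliers on the base time. A hedged sketch; the device factors below are assumptions for illustration and should be calibrated against your own field timings:

```python
# Hypothetical device multipliers, not measured constants.
DEVICE_FACTOR = {"desktop": 1.00, "tablet": 1.08, "mobile": 1.20}

def adjusted_seconds(base_seconds: float, complexity: float, device: str) -> float:
    """Apply complexity and device multipliers to a base estimate in seconds."""
    return base_seconds * complexity * DEVICE_FACTOR[device]

# A 6-minute base, matrix-heavy design, answered on mobile:
adjusted_seconds(360.0, 1.15, "mobile")  # about 497 seconds, roughly 8:17
```

Applying both multipliers together is what keeps mobile, matrix-heavy peak durations from being underestimated.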
Using ranges and percentiles for planning
Respondent speed varies, so a single number can mislead scheduling. The variability setting produces a practical min and max range, then derives planning percentiles inside that range. Use the most likely time for dashboards, P75 for staffing, and P90 for strict time limits. Export results to CSV for scenario comparisons and stakeholder sign-off. Treat extreme speeders as a quality signal, not a time goal.
FAQs
1) Which method should I choose?
Use seconds-per-question when items are uniform and stable. Use words-per-minute when stems, help text, or compliance language drives reading time. If unsure, run both and compare.
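Running both methods side by side is straightforward. A sketch under stated assumptions (the word count, WPM, and answer-overhead values are illustrative inputs, not defaults from the calculator):

```python
def seconds_per_question_estimate(n_items: int, sec_per_item: float) -> float:
    """Minutes from a uniform per-item pace."""
    return n_items * sec_per_item / 60.0

def words_per_minute_estimate(total_words: int, wpm: float,
                              answer_overhead_min: float) -> float:
    """Minutes from reading time plus a flat answering overhead."""
    return total_words / wpm + answer_overhead_min

a = seconds_per_question_estimate(30, 12)      # 6.0 minutes
b = words_per_minute_estimate(1400, 230, 1.5)  # about 7.6 minutes
```

When the two estimates diverge this much, it usually means reading time dominates and the words-per-minute model is the safer planning basis.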
2) What is a good default for closed-item seconds?
For standard single-choice items, 10–14 seconds is a practical starting point. Increase it for grids, multi-selects, or images. Calibrate with pilot logs when available.
3) How should I set the skip rate?
Estimate the percent of items an average respondent will not see due to routing. If quotas push many respondents into long paths, lower the skip rate to stay conservative.
4) Why does mobile increase time?
Mobile entry is slower, scrolling increases, and attention shifts more often. Even when reading speed is similar, interaction overhead rises. The device factor accounts for this practical friction.
5) Are the percentiles statistically exact?
They are planning approximations derived from the min–max range, not a fitted distribution. Use them to communicate risk and scheduling buffers, then refine with field timing data.
6) How do I validate my estimate?
Pilot with a small sample, record timestamps, and compare the observed median to the calculator’s most likely value. Adjust seconds, WPM, and complexity until the estimate matches reality.
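One simple way to close the loop after a pilot, sketched here as an illustration: scale the complexity factor by the ratio of the observed median to the predicted one, so the model's most likely value matches reality.

```python
def recalibrated_complexity(complexity: float,
                            observed_median_min: float,
                            predicted_median_min: float) -> float:
    """Scale complexity so the model's median matches the pilot median."""
    return complexity * observed_median_min / predicted_median_min

# Pilot observed an 8.4-minute median against a 7.0-minute prediction:
recalibrated_complexity(1.0, 8.4, 7.0)  # about 1.2
```

This one-parameter correction is crude, so prefer adjusting the specific input (seconds, WPM, or device factor) that pilot logs show to be off.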