Turn raw CV volume into actionable shortlist insights. See rates, flags, and consistency checks instantly. Download CSV or PDF, then share with stakeholders fast.
| Role | CVs Received | Shortlisted | Rejected | Pending | Shortlist Rate |
|---|---|---|---|---|---|
| Customer Support Associate | 220 | 28 | 160 | 20 | 12.73% |
| Backend Engineer | 140 | 12 | 110 | 10 | 8.57% |
| Finance Analyst | 90 | 14 | 60 | 8 | 15.56% |
A shortlist rate converts raw application volume into a measurable screening outcome. For example, 12 shortlisted out of 140 received equals 8.57%, or 1 in 11.7 applicants progressing. When you track this weekly, sudden jumps often reflect channel mix changes (more referrals) or requirement drift (broader criteria). Seasonal spikes can double volume while lowering the rate. Use the same time window, seniority, and location to keep comparisons fair.
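The two numbers above can be computed in a few lines. This is a minimal sketch in Python; the function names are illustrative, not part of any library:

```python
def shortlist_rate(shortlisted, received):
    """Shortlisted CVs as a percentage of all CVs received."""
    if received == 0:
        raise ValueError("received must be greater than 0")
    return round(100 * shortlisted / received, 2)

def one_in_n(shortlisted, received):
    """Roughly how many applicants arrive per shortlisted candidate."""
    return round(received / shortlisted, 1)

print(shortlist_rate(12, 140))  # 8.57
print(one_in_n(12, 140))        # 11.7
```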
Screening yield focuses on reviewed decisions: shortlisted divided by (shortlisted + rejected). If 12 are shortlisted and 110 rejected, yield is 9.84%. Low yield can be healthy for niche roles, but it can also indicate poor job targeting. Pair yield with reviewer notes and calibration sessions to keep decisions consistent, and watch for reviewer-to-reviewer gaps larger than 5 percentage points.
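Screening yield differs from the shortlist rate only in its denominator, which is easy to mis-code. A small sketch (illustrative naming, assuming pending reviews are simply excluded):

```python
def screening_yield(shortlisted, rejected):
    """Shortlist share among completed decisions only; pending is excluded."""
    decided = shortlisted + rejected
    if decided == 0:
        raise ValueError("no completed decisions yet")
    return round(100 * shortlisted / decided, 2)

print(screening_yield(12, 110))  # 9.84
```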
Pending and withdrawn counts explain why rates move after the first report. If 10 of 140 are pending, the shortlist rate can rise once decisions are completed. Track withdrawn or duplicate CVs separately because they reduce the effective pool without reflecting screening quality. A clean breakdown improves stakeholder trust in the metric, and helps estimate how many reviews remain before a final funnel snapshot.
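Because pending reviews can still go either way, the current shortlist rate is really a lower bound. A hypothetical helper that brackets where the final rate can land once all pending decisions are made:

```python
def rate_range_with_pending(shortlisted, received, pending):
    """Bounds on the final shortlist rate once pending reviews resolve:
    worst case all pending are rejected, best case all are shortlisted."""
    worst = 100 * shortlisted / received
    best = 100 * (shortlisted + pending) / received
    return round(worst, 2), round(best, 2)

print(rate_range_with_pending(12, 140, 10))  # (8.57, 15.71)
```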
Targets work best as ranges, not a single number. A target of 10% with an actual 8.57% produces a −1.43 percentage-point variance. Use variance to prompt questions: is the job ad too broad, are must-have skills unclear, or is sourcing too narrow? Adjust targets after hiring cycles, not midstream, and document any rubric changes so trend lines stay meaningful.
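Variance against target is a subtraction in percentage points, not a percent change; confusing the two is a common reporting error. A one-line sketch of the convention used above:

```python
def variance_pp(actual_pct, target_pct):
    """Actual minus target shortlist rate, in percentage points."""
    return round(actual_pct - target_pct, 2)

print(variance_pp(8.57, 10.0))  # -1.43
```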
For active hiring, publish a simple dashboard: total received, shortlisted, rejected, pending, and the resulting shortlist rate. Add a turnaround column, like “reviews completed within 48 hours,” because slower review can inflate pending counts. Over a month, compare sources: 220 CVs from job boards with 28 shortlisted (12.73%) versus 60 referrals with 18 shortlisted (30.00%) suggests where quality is strongest. Segment results by source and seniority to spot quality shifts. Export CSV for analysis, and PDF for leadership reviews and archives.
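The source comparison above can be reproduced with a small per-source summary. This sketch uses the article's job-board and referral figures; the dictionary layout is an assumption about how intake data is stored:

```python
# Per-source intake counts from the article's example month.
sources = {
    "job_boards": {"received": 220, "shortlisted": 28},
    "referrals":  {"received": 60,  "shortlisted": 18},
}

# Compute and print the shortlist rate per source.
for name, s in sources.items():
    rate = 100 * s["shortlisted"] / s["received"]
    print(f"{name}: {rate:.2f}%")
```

Segmenting the same loop by seniority is a matter of adding one more key per record.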
**What does the shortlist rate measure?** It measures the share of received CVs that move forward. Calculate shortlisted divided by total received, then multiply by 100. It helps compare funnel quality across roles, sources, and time periods.
**How is screening yield different from the shortlist rate?** Screening yield uses only completed decisions: shortlisted divided by (shortlisted + rejected). It removes “pending” noise and shows how selective reviewers are once they actually decide.
**Can withdrawn or duplicate CVs be included in the total received?** You can, but track them separately. Including them keeps intake reporting consistent, while the separate withdrawn/duplicate count explains why effective quality may look lower without blaming screening rules.
**What does a ratio like “1 in 12” mean?** It is total received divided by shortlisted. A result of 1 in 12 means you shortlist about one candidate for every twelve applications, which is easy to communicate in hiring meetings.
**How should a shortlist-rate target be set?** Start with a baseline from recent hiring cycles for the same seniority and location. Set a range (for example, 8–12%) and review it after each quarter when sourcing mix or requirements change.
**Why does the shortlist rate change after the first report?** Pending reviews can later become shortlisted or rejected, shifting the final percentage. Report pending count alongside the rate, and consider freezing weekly snapshots to keep comparisons stable.