Calculator Inputs
Example Data Table
A sample weekly snapshot showing how inputs translate into a score.
| Week | Planned Hours | Actual Hours | Planned Tasks | Completed Tasks | Quality (%) | On-time (%) |
|---|---|---|---|---|---|---|
| Week 1 | 40 | 44 | 20 | 18 | 92 | 85 |
| Week 2 | 38 | 39 | 18 | 19 | 95 | 90 |
| Week 3 | 42 | 46 | 22 | 20 | 88 | 78 |
Formula Used
- Task Completion (%) = (Completed Tasks ÷ Planned Tasks) × 100, capped at 120%.
- Time Efficiency (%) = (Planned Hours ÷ Actual Hours) × 100, capped at 120%.
- Capacity Performance Score = 0.35×Completion + 0.25×Efficiency + 0.20×Quality + 0.20×Timeliness (the On-time % input).
The weights sum to 1.00 and favor reliable delivery while still rewarding efficient execution and strong quality.
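The formula above can be sketched in a few lines of Python. This is a minimal illustration, not the calculator's actual implementation; the function and parameter names are ours.

```python
# Minimal sketch of the Capacity Performance Score formula described above.
# Names are illustrative, not taken from the calculator itself.

def capacity_score(planned_hours, actual_hours,
                   planned_tasks, completed_tasks,
                   quality_pct, on_time_pct):
    """Return the weighted Capacity Performance Score."""
    # Completion and efficiency are percentages, each capped at 120.
    completion = min(completed_tasks / planned_tasks * 100, 120)
    efficiency = min(planned_hours / actual_hours * 100, 120)
    # Weights: 0.35 completion, 0.25 efficiency, 0.20 quality, 0.20 timeliness.
    return (0.35 * completion + 0.25 * efficiency
            + 0.20 * quality_pct + 0.20 * on_time_pct)

# Week 1 from the example table: 40 planned / 44 actual hours,
# 20 planned / 18 completed tasks, 92% quality, 85% on-time.
print(round(capacity_score(40, 44, 20, 18, 92, 85), 1))  # → 89.6
```

Running the same function on Week 2 of the table (38, 39, 18, 19, 95, 90) gives roughly 98.3, reflecting over-delivery on tasks and near-plan hours.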
How to Use
- Choose a consistent time period (week, sprint, or month).
- Enter planned hours and the actual hours spent.
- Add planned tasks and completed tasks for that period.
- Provide a quality score and on-time delivery percentage.
- Click Calculate Score to see the result above.
- Export the breakdown as CSV or as a printable PDF.
Operational Notes
Why the score matters
A single productivity number can hide tradeoffs, so this score combines four signals: completion reflects delivered scope, efficiency reflects time use, quality reflects rework risk, and timeliness reflects reliability. Together they give a stable view of capacity performance across teams and periods. Because every input is expressed as a percentage, you can compare different roles, projects, and sprint lengths on the same scale.
Interpreting completion and efficiency
Completion compares finished tasks to the plan, letting you detect under-commitment or missed scope early. Efficiency compares planned hours to actual hours, highlighting overtime, context switching, or estimation gaps. Both metrics are capped at 120% to reward over-delivery without letting extreme weeks distort trends. If efficiency falls below 80%, review interruptions, meeting load, and handoffs before assuming skill issues.
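The cap and the 80% review threshold described above are easy to express directly. A small sketch, with an illustrative function name:

```python
# Sketch of the efficiency checks above: cap at 120%, review below 80%.

def efficiency_pct(planned_hours, actual_hours):
    """Time Efficiency (%), capped at 120 as in the formula."""
    return min(planned_hours / actual_hours * 100, 120)

e = efficiency_pct(42, 46)  # Week 3 from the table: about 91.3
status = "review interruptions and meeting load" if e < 80 else "ok"
print(f"{e:.1f} -> {status}")
```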
Quality and timeliness safeguards
High output is not healthy if defects rise or deadlines slip. The quality input can come from audits, peer review, defect-free rate, or customer acceptance. Timeliness can be measured as percent of items delivered by target date. Keeping these near 90% typically reduces downstream firefighting and stabilizes throughput. When quality is under 85%, track defect types, fix time, and review coverage to identify the fastest prevention lever.
Using the breakdown for planning
Use the weighted table to locate the limiting factor. If completion is low but efficiency is high, the plan may be too large or priorities are changing. If efficiency is low with high completion, people may be stretching hours. If quality drops, reduce parallel work and add review checkpoints. If timeliness drops, shorten work batches. Teams often improve fastest by fixing one constraint per period, then re-measuring after a single cycle.
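One way to make the "locate the limiting factor" step concrete is to compare each component's weighted shortfall against a 100% baseline. This is a hypothetical helper, not part of the calculator:

```python
# Illustrative sketch: find the component losing the most weighted points
# relative to a 100% baseline, using the weights from the formula above.

WEIGHTS = {"completion": 0.35, "efficiency": 0.25,
           "quality": 0.20, "timeliness": 0.20}

def limiting_factor(components):
    """components: dict of metric name -> percentage (0-120).
    Returns the metric with the largest weighted shortfall below 100."""
    shortfall = {name: WEIGHTS[name] * max(0, 100 - value)
                 for name, value in components.items()}
    return max(shortfall, key=shortfall.get)

week3 = {"completion": 20 / 22 * 100,   # ~90.9
         "efficiency": 42 / 46 * 100,   # ~91.3
         "quality": 88, "timeliness": 78}
print(limiting_factor(week3))  # → timeliness
```

For Week 3 of the example table this points at timeliness (78% on-time), matching the guidance to shorten work batches when delivery dates slip.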
Improvement actions and monitoring
Set a baseline using two to four periods, then track the score weekly or per sprint. Aim for steady gains of two to five points, not jumps. Pair the score with a short narrative: staffing changes, incident load, and scope shifts. This context helps you decide whether to rebalance workloads, refine estimates, or adjust process rules. A practical target for mature teams is 85 to 100 with low variance, which indicates predictable delivery without chronic overtime.
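The baseline-and-monitoring routine above can be sketched with the standard library. The fourth score here is hypothetical; the first three come from applying the formula to the example table:

```python
# Hypothetical monitoring sketch: baseline from the first few periods,
# then flag new periods that drift well outside it.
from statistics import mean, pstdev

# Weeks 1-3 computed from the example table, plus a made-up fourth period.
scores = [89.6, 98.3, 87.8, 84.0]
baseline = mean(scores)   # about 89.9
spread = pstdev(scores)
print(f"baseline {baseline:.1f}, spread {spread:.1f}")

# A later period is worth reviewing if it drifts well outside the baseline.
new_score = 78.0
needs_review = abs(new_score - baseline) > 2 * spread
```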
FAQs
What time period should I use?
Use any consistent window: a day, week, sprint, or month. Consistency matters more than length; because every component is a percentage, trends stay comparable across window sizes.
Can I score a team instead of a person?
Yes. Aggregate planned hours, actual hours, planned tasks, and completed tasks for the whole team. For quality and timeliness, use team-wide rates from reviews, QA, or delivery tracking.
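A quick sketch of that roll-up, using hypothetical member data: sum the hour and task inputs across the team, then score the totals with the same formula.

```python
# Illustrative team roll-up. Member figures are made up for the example.
members = [
    {"planned_h": 40, "actual_h": 44, "planned_t": 20, "done_t": 18},
    {"planned_h": 36, "actual_h": 35, "planned_t": 15, "done_t": 15},
]
planned_h = sum(m["planned_h"] for m in members)   # 76
actual_h = sum(m["actual_h"] for m in members)     # 79
planned_t = sum(m["planned_t"] for m in members)   # 35
done_t = sum(m["done_t"] for m in members)         # 33

completion = min(done_t / planned_t * 100, 120)    # about 94.3
efficiency = min(planned_h / actual_h * 100, 120)  # about 96.2
# Quality and timeliness come from team-wide review/QA rates, not from sums.
```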
Why are completion and efficiency capped at 120%?
Capping prevents a single exceptional week from dominating the average. It still rewards over-delivery and high efficiency, while keeping comparisons fair when workloads or task sizing fluctuate.
How should I estimate the quality score?
Use a measurable proxy: defect-free percentage, acceptance rate, review pass rate, or customer satisfaction for delivered work. Keep the method consistent and document it beside your tracking notes.
What does a low score usually mean?
Low scores often come from missed scope, excessive hours, poor quality, or late delivery. Use the component table to find which metric is pulling the score down, then address that constraint first.
How can I improve the score without overtime?
Reduce interruptions, limit work in progress, clarify priorities, and improve estimating. Add lightweight review steps to prevent rework, and break work into smaller batches to raise on-time delivery.