Team Performance Index Calculator

Convert weekly metrics into one clear performance index. Customize inputs, weights, and targets for accuracy. Export reports, compare periods, and coach improvements with confidence.

Enter team metrics

Use consistent units for each period.
  • Team size: used for context, not scoring.
  • Planned work: story points, tasks, or units.
  • Completed work: same unit as planned work.
  • Defect rate: defects per 100 items, or similar.
  • Target defect rate: lower targets raise the bar.
  • Target cycle time: lower targets raise the bar.
  • Absence: unplanned leave or downtime.
  • Collaboration and engagement: based on feedback or a rubric.
Optional metrics

Enable only metrics you can measure reliably. Optional metrics expand the index; weights for unchecked metrics are ignored automatically.

Weight sum

Any sum of weights works. The index normalizes automatically.

Weights (relative importance)

The result card appears above the form after you submit.

Calculation history

Stored locally for this browser session.
Each entry records the time, period, TPI, category, and the output, on-time, quality, cycle, and collaboration sub-scores.
PDF downloads include the history table and your latest result.

Formula used

1) Normalize each metric to 0–100
  • Output score = min(Completed ÷ Planned, 1.2) ÷ 1.2 × 100
  • Timeliness score = On-time %
  • Quality score = (1 − DefectRate ÷ TargetDefectRate) × 100
  • Cycle score = min(1, TargetCycle ÷ ActualCycle) × 100
  • Availability score = (1 − Absence% ÷ TargetAbsence%) × 100
  • Collaboration score = Collaboration rating ÷ 10 × 100
  • Engagement score = Engagement rating ÷ 10 × 100
  • Optional metrics use the same 0–100 mapping.
Every score is clamped to the 0–100 range: negative values become 0, and values above 100 become 100.
2) Weighted index
TPI = Σ(Weight_i × Score_i) ÷ Σ(Weight_i)
Only positive weights are used.
3) Categories
  • 85–100: Exceptional
  • 70–84.99: Strong
  • 55–69.99: Stable
  • 40–54.99: Needs Focus
  • 0–39.99: At Risk
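Steps 1 and 2 can be sketched as a small Python function. This is a minimal sketch built from the formulas above; the dictionary keys and function names are illustrative, not taken from the calculator itself, and the targets in the example are assumptions chosen to match the worked examples later on this page.

```python
def clamp(score):
    """Clamp a normalized score into the 0-100 range."""
    return max(0.0, min(100.0, score))

def tpi(m, weights):
    """Compute the Team Performance Index.

    m: raw metrics (illustrative key names for the inputs above).
    weights: relative importance per score; any sum works, and only
    positive weights contribute, matching step 2.
    Returns (index, per-metric scores).
    """
    scores = {
        # Output: completed vs. planned, overdelivery capped at 120%
        "output": clamp(min(m["completed"] / m["planned"], 1.2) / 1.2 * 100),
        "timeliness": clamp(m["on_time_pct"]),
        "quality": clamp((1 - m["defect_rate"] / m["target_defect_rate"]) * 100),
        "cycle": clamp(min(1.0, m["target_cycle_days"] / m["cycle_days"]) * 100),
        "availability": clamp((1 - m["absence_pct"] / m["target_absence_pct"]) * 100),
        "collaboration": clamp(m["collaboration"] / 10 * 100),
        "engagement": clamp(m["engagement"] / 10 * 100),
    }
    used = {k: w for k, w in weights.items() if w > 0 and k in scores}
    total = sum(used.values())
    index = sum(w * scores[k] for k, w in used.items()) / total
    return index, scores

# Sprint A from the example table, equal weights, assumed targets
# (defect target 10, cycle target 4 days, absence target 5%):
index, scores = tpi(
    {"planned": 40, "completed": 36, "on_time_pct": 85,
     "defect_rate": 6, "target_defect_rate": 10,
     "cycle_days": 4.5, "target_cycle_days": 4,
     "absence_pct": 2.5, "target_absence_pct": 5,
     "collaboration": 7.5, "engagement": 7.0},
    dict.fromkeys(["output", "timeliness", "quality", "cycle",
                   "availability", "collaboration", "engagement"], 1))
# index is about 69.1, which falls in the Stable band
```

With these inputs the output score is 75 and the quality score is 40, matching the worked examples below; changing the weight dictionary shifts the blend without needing any renormalization by hand.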

How to use this calculator

  1. Pick a consistent period, such as weekly or per sprint.
  2. Enter planned and completed work in the same unit.
  3. Use targets that match your standards and context.
  4. Turn on optional metrics only when data is reliable.
  5. Adjust weights to match what matters most right now.
  6. Submit and review the result card above the form.
  7. Track history, export reports, and compare periods.

Example data table

Sample values below show how the inputs look in practice.
Period   | Team | Planned | Completed | On-time % | Defect Rate | Cycle Days | Absence % | Collab | Engage
Sprint A |    6 |      40 |        36 |        85 |           6 |        4.5 |       2.5 |    7.5 |    7.0
Sprint B |    6 |      44 |        46 |        78 |           4 |        3.9 |       3.2 |    8.1 |    7.4
Sprint C |    7 |      52 |        47 |        72 |           9 |        5.2 |       4.8 |    6.9 |    6.4
Tip: Keep definitions stable to make trends meaningful.

Output and predictability signals

The index blends delivery volume and reliability so leaders can compare periods without losing detail. Output uses completed versus planned work, with overdelivery capped at 120% to prevent gaming. When planned equals 40 and completed equals 36, output becomes 75. A stable team often targets 80–95 output while protecting quality and focus. Pair this with on-time delivery: 90% on-time with 70 output can signal dependency delays, while 90 output with 60% on-time suggests planning gaps.
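The cap works like this in code. A tiny sketch with an illustrative function name, using the planned/completed pairs from the example table:

```python
def output_score(planned, completed):
    """Score delivery volume: the completed/planned ratio is capped
    at 1.2, so anything beyond 120% earns no extra credit."""
    return min(completed / planned, 1.2) / 1.2 * 100

output_score(40, 36)   # 0.9 ratio -> 75, as in the example above
output_score(44, 46)   # ~1.05 ratio -> about 87
output_score(40, 60)   # 1.5 ratio, capped at 1.2 -> 100
```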

Quality and rework load

Quality converts defect rate into a score against a chosen target. If defect rate is 6 and target defect rate is 10, quality becomes 40. Improve it by preventing rework, not by hiding defects. Track severity separately, then tune the target to your environment: customer-facing work may use 4–6, internal tooling may tolerate 8–12. If you log defects per release, convert to a per-100 rate to keep scores comparable. For example, 3 defects across 50 items equals 6 per 100.
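The per-release conversion and the quality formula can be sketched together. Function names are illustrative; the clamping follows the formula section:

```python
def per_100_rate(defects, items):
    """Convert a raw defect count into defects per 100 items."""
    return defects / items * 100

def quality_score(defect_rate, target_rate):
    """Score quality against the chosen target: zero defects scores 100,
    a rate equal to the target scores 0, and worse clamps to 0."""
    return max(0.0, min(100.0, (1 - defect_rate / target_rate) * 100))

rate = per_100_rate(3, 50)       # 3 defects across 50 items -> 6 per 100
score = quality_score(rate, 10)  # (1 - 6/10) * 100 -> 40
```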

Flow efficiency and cycle time

Cycle time measures how quickly work moves from start to done. The calculator scores cycle time using target cycle divided by actual cycle, capped at 100. If target is 4 days and actual is 5.2, cycle score is 76.9. Teams that cut cycle time often reduce handoffs, clarify entry criteria, and limit work in progress. As a rule, moving from 6 days to 4 days can raise the cycle score from 66.7 to 100 when the target is 4.
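The cycle score is the same ratio in code (a sketch with an illustrative name):

```python
def cycle_score(target_days, actual_days):
    """Score flow speed: target / actual, capped at 100 when the
    team meets or beats the target."""
    return min(1.0, target_days / actual_days) * 100

cycle_score(4, 5.2)  # about 76.9, as in the example above
cycle_score(4, 6.0)  # about 66.7
cycle_score(4, 4.0)  # 100.0
```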

Capacity health and collaboration

Absence rate acts as a capacity proxy. With absence 2.5% and target 5%, availability scores 50, highlighting risk even when output looks strong. Collaboration and engagement use 0–10 ratings, making qualitative data visible. A consistent rubric helps: define 10 as “proactive help,” 5 as “basic handoffs,” and 0 as “blocked by silos.”
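Both the capacity and rating scores are simple linear maps (a sketch with illustrative names):

```python
def availability_score(absence_pct, target_absence_pct):
    """Score capacity health: absence is penalized in proportion to
    the target, so absence at half the target still scores only 50."""
    return max(0.0, min(100.0, (1 - absence_pct / target_absence_pct) * 100))

def rating_score(rating):
    """Map a 0-10 qualitative rating onto the 0-100 scale."""
    return rating / 10 * 100

availability_score(2.5, 5)  # 50.0, matching the example above
rating_score(7.5)           # 75.0
```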

Turning results into actions

Use the category bands to steer concrete conversations. Exceptional (85–100) suggests scaling current practices. Strong (70–84.99) indicates predictable delivery with minor constraints. Stable (55–69.99) benefits from focusing on the lowest sub-score first. Needs Focus (40–54.99) calls for immediate bottleneck removal. At Risk (0–39.99) warrants a scope reset, capacity protection, and tighter targets for two periods.
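The bands reduce to a simple lookup (a sketch mirroring the Categories list in the formula section):

```python
def category(tpi_value):
    """Map a 0-100 TPI value to its band from the Categories list."""
    bands = [(85, "Exceptional"), (70, "Strong"), (55, "Stable"),
             (40, "Needs Focus"), (0, "At Risk")]
    for floor, name in bands:
        if tpi_value >= floor:
            return name
    return "At Risk"  # guards against values below 0

category(92)     # "Exceptional"
category(69.1)   # "Stable"
category(39.99)  # "At Risk"
```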

FAQs

What time period should I use?

Pick one cadence and keep it consistent, such as weekly or per sprint. Consistency makes trends meaningful and prevents comparing metrics that reflect different planning horizons, workloads, or definitions.

How should I choose targets for defects and cycle time?

Start with your recent median results, then set targets 10–20% better to encourage improvement without creating noise. Revisit targets quarterly or after major process or staffing changes.
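That rule of thumb is easy to automate. A sketch, where the default 15% improvement is an illustrative midpoint of the 10–20% range:

```python
from statistics import median

def suggested_target(recent_values, improvement=0.15, lower_is_better=True):
    """Suggest a target 10-20% better than the recent median.
    For lower-is-better metrics (defect rate, cycle time, absence),
    'better' means a smaller number; otherwise it means a larger one."""
    base = median(recent_values)
    factor = (1 - improvement) if lower_is_better else (1 + improvement)
    return base * factor

suggested_target([6, 4, 9])  # median defect rate 6 -> target 5.1
```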

Do weights need to add up to 100?

No. The index normalizes automatically using the sum of positive weights. Choose weights that reflect priorities, then keep them stable for a few periods so changes in the score represent performance, not shifting emphasis.
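Because the sum of weights divides out, only their proportions matter. A small sketch:

```python
def weighted_index(scores, weights):
    """Weighted average over positive weights only; dividing by the
    weight sum means any scale of weights gives the same result."""
    used = [(w, s) for w, s in zip(weights, scores) if w > 0]
    total = sum(w for w, _ in used)
    return sum(w * s for w, s in used) / total

weighted_index([80, 60], [3, 1])    # 75.0
weighted_index([80, 60], [75, 25])  # also 75.0: same proportions
```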

Why is output capped at 120%?

Capping prevents the index from rewarding extreme overdelivery that may come from under-planning or burnout. It keeps the score balanced so quality, timeliness, and team health remain visible in the overall result.

Can I compare different teams with this index?

You can, if definitions match. Use the same units for planned and completed work, consistent defect counting rules, and similar cycle time boundaries. Otherwise, compare trends within each team and align measurement first.

What if I cannot measure one metric reliably?

Leave it out by setting its weight to zero, or disable optional metrics. It is better to use fewer trustworthy inputs than many uncertain ones. Add metrics later once your data collection stabilizes.

Related Calculators

  • Employee Output Calculator
  • KPI Achievement Rate
  • Performance Score Calculator
  • Goal Completion Rate
  • Efficiency Ratio Calculator
  • Output Per Employee
  • Delivery Performance Index
  • Capacity Utilization Rate
  • Resource Utilization Rate
  • Output Variance Calculator

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.