Enter supplier CAPA data
Example data table
| Supplier | Total CAPAs | Closed | Closed on time | Avg days to close | Repeats (90 days) | Audit score | Evidence (1–5) | Criticality (1–5) | Score | Rating |
|---|---|---|---|---|---|---|---|---|---|---|
| Alpha Components | 12 | 10 | 8 | 34.5 | 1 | 92 | 4 | 4 | 83.62 | Effective |
| Beta Plastics | 8 | 6 | 3 | 52.0 | 2 | 78 | 3 | 3 | 66.10 | Needs Improvement |
| Gamma Metals | 15 | 15 | 15 | 24.0 | 0 | 96 | 5 | 5 | 93.40 | Excellent |
Numbers above are illustrative for demonstrating inputs and outputs.
Formula used
- Closure completion score = (Closed CAPAs ÷ Total CAPAs) × 100
- Timeliness score = (Closed on time ÷ Closed CAPAs) × 100
- Repeat prevention score = (1 − Repeats ÷ Total CAPAs) × 100
- Evidence score = (Evidence rating ÷ 5) × 100
- Speed score = 100 − max(0, (Avg days − Target days) × 2)
Weighted base score:
Base = w_t·Timeliness + w_r·Repeat prevention + w_a·Audit + 0.15·Speed + 0.10·Evidence + 0.10·Completion
The remaining weight (w_t + w_r + w_a = 0.65) is carried by timeliness, repeat prevention, and the verification audit, the most heavily weighted components.
Criticality adjustment:
The penalty ranges from 0% (criticality 1) to 6% (criticality 5).
The weighting emphasizes timeliness, recurrence prevention, and verification. Adjust target days and criticality to align with your supplier governance standards.
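Under these definitions, the whole calculation can be sketched in Python. The weights for speed, evidence, and completion come from the formula above; the 0.25/0.20/0.20 split of the remaining 0.65 across timeliness, repeat prevention, and audit, the 30-day default target, and a linear multiplicative criticality penalty are illustrative assumptions, not the calculator's exact internals.

```python
def capa_score(total, closed, on_time, avg_days, repeats,
               audit, evidence, criticality, target_days=30):
    """Combine supplier CAPA inputs into a single 0-100 score."""
    completion = closed / total * 100
    timeliness = (on_time / closed * 100) if closed else 0
    repeat_prev = (1 - repeats / total) * 100
    evidence_s = evidence / 5 * 100
    speed = 100 - max(0, (avg_days - target_days) * 2)
    # 0.25/0.20/0.20 for timeliness / repeat prevention / audit is
    # an assumed split of the remaining weight; tune to your standard.
    base = (0.25 * timeliness + 0.20 * repeat_prev + 0.20 * audit
            + 0.15 * speed + 0.10 * evidence_s + 0.10 * completion)
    # Linear penalty: 0% at criticality 1 up to 6% at criticality 5.
    penalty = 0.06 * (criticality - 1) / 4
    return round(base * (1 - penalty), 2)
```

Because the top three weights are assumed, the result approximates rather than reproduces the example table's scores.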
How to use this calculator
- Choose a period (month/quarter) and gather all supplier CAPAs opened in that timeframe.
- Enter totals, closures, on-time closures, average days to close, and repeat nonconformances within 90 days.
- Add the verification audit score from your effectiveness check or follow-up assessment.
- Rate evidence quality based on objective proof: procedures, training, data, and validation results.
- Set product criticality and target days to match your risk profile and expectations.
- Review the component scores, strengths, and gaps, then use the recommended actions for follow-up planning.
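Before scoring, it helps to sanity-check the gathered inputs; a minimal validation sketch, with field names and bounds assumed for illustration:

```python
def validate_inputs(rec):
    """Basic consistency checks on one supplier's CAPA record."""
    errors = []
    if rec["closed"] > rec["total"]:
        errors.append("closed CAPAs exceed total CAPAs")
    if rec["on_time"] > rec["closed"]:
        errors.append("on-time closures exceed closed CAPAs")
    if rec["repeats"] > rec["total"]:
        errors.append("repeats exceed total CAPAs")
    if not 0 <= rec["audit"] <= 100:
        errors.append("audit score must be 0-100")
    if rec["evidence"] not in range(1, 6):
        errors.append("evidence rating must be 1-5")
    if rec["criticality"] not in range(1, 6):
        errors.append("criticality must be 1-5")
    return errors
```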
Professional notes
CAPA effectiveness as a risk signal
Supplier CAPA effectiveness is a leading indicator of external quality risk. Closing actions fast is not enough if defects return, audits fail, or evidence is weak. By converting routine CAPA data into a standardized score, this calculator helps quality teams compare suppliers fairly, spot deteriorating performance early, and focus limited follow‑up resources on the highest exposure items before they become customer issues. It also supports transparent communication with procurement and engineering when corrective action performance affects approvals.
Balanced metrics improve decisions
A strong assessment blends completion, timeliness, recurrence prevention, verification results, evidence quality, and closure speed. The weighted approach prevents “gaming” a single metric, such as closing many low-impact actions while critical ones drift. For example, a high closure rate with low verification audit score suggests documentation without sustained control, while good audits with slow closure may indicate resourcing constraints or unclear milestones. Weighting can be tuned to match governance priorities, but consistent inputs are essential for comparisons.
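If the weighting is tuned, keeping the weights summing to 1 preserves the 0–100 scale. A minimal normalization sketch (the component names and tuned values are assumptions):

```python
def normalize_weights(weights):
    """Rescale tuned weights so they sum to 1.0, preserving ratios."""
    total = sum(weights.values())
    return {name: value / total for name, value in weights.items()}

# Hypothetical tuning expressed in relative points, then normalized.
tuned = {"timeliness": 30, "repeat_prevention": 25, "audit": 20,
         "speed": 10, "evidence": 10, "completion": 5}
norm = normalize_weights(tuned)
```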
Criticality tightens expectations
Product criticality changes what “good” looks like. For safety, regulated, or high-value parts, the tolerance for delay or weak validation is lower. The calculator applies a modest criticality adjustment and shifts rating thresholds upward, encouraging deeper root-cause analysis, tighter due-date governance, and stronger evidence packages when consequences are severe. This supports consistent oversight across mixed portfolios without hiding risk in averages.
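The upward shift of rating thresholds can be sketched as follows. The baseline cutoffs (90/75/60) and the 0.5-point tightening per criticality level are assumptions, chosen so the example table's ratings come out as shown; the calculator's actual thresholds may differ.

```python
def rating(score, criticality):
    """Map a 0-100 score to a rating band; thresholds tighten with
    criticality (0.5 points per level above 1 -- an assumption)."""
    shift = 0.5 * (criticality - 1)
    if score >= 90 + shift:
        return "Excellent"
    if score >= 75 + shift:
        return "Effective"
    if score >= 60 + shift:
        return "Needs Improvement"
    return "Ineffective"
```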
From gaps to supplier actions
Component gaps should trigger targeted actions. Low timeliness often improves with defined stages: containment within 48 hours, root cause within ten days, and weekly status reviews with escalation rules. A low repeat-prevention score points to shallow analysis or non-systemic fixes; require updated control plans, mistake-proofing, and verification over an extended window. Weak evidence quality calls for objective artifacts: revised procedures, training records, capability data, and test results.
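This gap-to-action mapping can be automated in a supplier review workflow; the threshold and action wording below are illustrative, not prescriptive:

```python
# Follow-up actions keyed by component; wording paraphrases the
# recommendations above and can be adapted to your templates.
ACTIONS = {
    "timeliness": "define staged milestones: 48-hour containment, "
                  "10-day root cause, weekly reviews with escalation",
    "repeat_prevention": "require updated control plans, mistake-proofing, "
                         "and extended-window verification",
    "evidence": "request objective artifacts: revised procedures, "
                "training records, capability data, test results",
}

def recommended_actions(component_scores, threshold=70):
    """Return follow-up actions for components scoring below threshold."""
    return [ACTIONS[name] for name, score in component_scores.items()
            if name in ACTIONS and score < threshold]
```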
Governance and continuous improvement
Use the score in monthly supplier business reviews and trend it by period, site, or commodity group. Combine it with defect rate, delivery performance, and cost of poor quality to build a balanced supplier dashboard. When scores improve, document which interventions worked—layered process audits, joint problem-solving workshops, or revised inspection plans—so effective practices can be replicated. Treat the score as a management tool, not a punishment. Publish a simple action log, assign owners, and confirm effectiveness checks are completed on schedule.
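Trending the score by period makes deterioration easy to flag programmatically. A simple sketch, where the three-period baseline window and the five-point drop threshold are assumptions to tune:

```python
def flag_deterioration(scores, window=3, drop=5.0):
    """Flag a supplier whose latest period score falls more than
    `drop` points below the average of the preceding `window` periods."""
    if len(scores) < window + 1:
        return False  # not enough history to judge a trend
    baseline = sum(scores[-window - 1:-1]) / window
    return baseline - scores[-1] > drop
```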
FAQs
1) What does the final score represent?
It summarizes completion, on-time closure, recurrence prevention, verification results, evidence quality, and closure speed into one 0–100 effectiveness indicator.
2) Why include product criticality?
Critical items need tighter expectations. The calculator applies a small penalty and higher rating thresholds so risk is not underestimated.
3) How should repeats be counted?
Count the same or closely related nonconformance that reappears within 90 days after closure, using consistent defect linkage rules.
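The 90-day window itself is a simple date comparison once defect linkage is established; a sketch, with the linkage rule (e.g., matching defect codes) assumed to happen upstream:

```python
from datetime import date

def is_repeat(closure_date, recurrence_date, window_days=90):
    """True if a linked nonconformance reappears within the
    window after CAPA closure."""
    delta = (recurrence_date - closure_date).days
    return 0 <= delta <= window_days

# e.g., CAPA closed 2024-01-15; the linked defect recurs 2024-03-20
```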
4) What is a good verification audit score?
Scores above 90 typically indicate strong control after actions. Use consistent criteria and sampling for fair comparisons.
5) Can we change the target days to close?
Yes. Set target days to match your governance standard. Lower targets increase the closure speed expectation.
6) How often should we calculate this score?
Monthly or quarterly works well. Trending over time supports supplier reviews, prioritization, and escalation decisions.