Spot the operational root cause behind every defect. Compare causes using FMEA and Pareto insights. Export results, act faster, and reduce repeat failures today.
| Cause | Defects | Severity | Occurrence | Detection | Cost/defect |
|---|---|---|---|---|---|
| Worn cutting tool | 48 | 7 | 6 | 6 | 3.50 |
| Fixture misalignment | 31 | 8 | 5 | 7 | 6.00 |
| Incoming material variation | 19 | 9 | 4 | 6 | 9.25 |
| Operator setup drift | 22 | 6 | 6 | 5 | 2.00 |
| Inspection sampling gap | 14 | 7 | 3 | 8 | 1.00 |
Start with a clear CTQ definition and a measurable baseline. Log the process step, shift, material lot, machine ID, and inspector. Stratifying defects by these factors often reveals patterns like a single cavity producing 48% of scrap or one supplier lot doubling rework. Use consistent units (ppm, % yield, defects per 1,000) so comparisons remain valid across days and lines. Record sample size and inspection method, since a 100% check differs from a 1-in-10 sample. Attach a defect code or photo so teams use the same definition during containment, analysis, and verification across shifts, sites, and future investigations.
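The stratification step above can be sketched in plain Python. The field names (`cavity`, `machine`, `lot`) are hypothetical examples of the factors a defect log might carry; substitute whatever your process records.

```python
from collections import Counter

# Hypothetical defect log: each record carries the stratification
# factors suggested above (machine ID, cavity, material lot, ...).
defect_log = [
    {"machine": "M1", "cavity": 3, "lot": "A"},
    {"machine": "M1", "cavity": 3, "lot": "A"},
    {"machine": "M1", "cavity": 3, "lot": "B"},
    {"machine": "M2", "cavity": 1, "lot": "A"},
    {"machine": "M1", "cavity": 2, "lot": "B"},
]

def stratify(records, factor):
    """Count defects by one factor and report each level's count and % share."""
    counts = Counter(r[factor] for r in records)
    total = sum(counts.values())
    return {level: (n, round(100 * n / total, 1))
            for level, n in counts.most_common()}

print(stratify(defect_log, "cavity"))
```

In this toy log, cavity 3 accounts for 3 of the 5 defects, the kind of concentration that stratification is meant to surface.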
Severity, Occurrence, and Detection convert qualitative concern into a comparable priority number. Rate Severity by customer impact, Occurrence by observed frequency, and Detection by how likely controls will catch the issue. A cause scored 9×6×4 produces an RPN of 216, typically demanding faster action than 6×6×3 at 108. Keep scoring rules documented to reduce team bias.
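The RPN arithmetic is just the product of the three 1-10 ratings; a minimal sketch reproducing the two examples from the text:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 ratings."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("S, O, and D ratings must be between 1 and 10")
    return severity * occurrence * detection

print(rpn(9, 6, 4))  # 216
print(rpn(6, 6, 3))  # 108
```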
Not all defects cost the same. Cost per defect can represent scrap, rework labor, warranty exposure, or line stoppage minutes converted to currency. The hybrid score scales risk by normalized cost, helping teams avoid chasing high‑RPN items with trivial loss. For example, a moderate RPN paired with high unit cost may outrank a higher RPN tied to inexpensive rework.
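One plausible form of a cost-scaled score (an assumption; the calculator's exact normalization is not stated here) is RPN multiplied by cost normalized to the most expensive cause. Applied to two rows of the sample table, it shows the reversal described above:

```python
causes = {
    # name: (severity, occurrence, detection, cost_per_defect)
    "Worn cutting tool":           (7, 6, 6, 3.50),   # RPN 252, cheap rework
    "Incoming material variation": (9, 4, 6, 9.25),   # RPN 216, costly scrap
}

max_cost = max(row[3] for row in causes.values())

def hybrid(s, o, d, cost):
    """Assumed form: RPN scaled by cost normalized to the costliest cause."""
    return s * o * d * (cost / max_cost)

for name, (s, o, d, cost) in causes.items():
    print(f"{name}: {round(hybrid(s, o, d, cost), 1)}")
```

The worn tool has the higher RPN (252 vs 216), but after cost scaling the material-variation cause ranks first, which is exactly the behavior the hybrid model is meant to produce.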
After ranking, the Pareto share and cumulative percent show where effort will return the most improvement. Many plants find that the top few causes explain the majority of defects, so addressing the causes that together account for roughly 80% of cumulative share is a pragmatic first target. Use this view to assign owners, due dates, and verification steps before moving to lower-impact items.
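Computing the cumulative share from the defect counts in the sample table:

```python
def pareto(counts):
    """Sort causes by defect count and attach cumulative percent share."""
    total = sum(counts.values())
    rows, cum = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        cum += n
        rows.append((cause, n, round(100 * cum / total, 1)))
    return rows

defects = {
    "Worn cutting tool": 48,
    "Fixture misalignment": 31,
    "Incoming material variation": 19,
    "Operator setup drift": 22,
    "Inspection sampling gap": 14,
}

for cause, n, cum_pct in pareto(defects):
    print(f"{cause}: {n} defects, cumulative {cum_pct}%")
```

Here the top three causes cover about 75% of defects and the top four about 90%, so the 80% line is crossed at the fourth cause.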
Once actions are implemented, verify results with the same metric used at baseline. Track defect counts by day and confirm a sustained shift rather than a one‑time dip. Update the control plan: add poka‑yoke, tighten sampling, recalibrate gauges, or revise work instructions. Finally, store the exported report as evidence for audits and as a reference for future incidents.
What does the calculator actually rank?
It ranks suspected causes using defect impact plus your S–O–D scores, optionally adjusted by cost. The output helps prioritize which causes to investigate and fix first, not to prove causality by itself.
How do I choose between RPN, Weighted, and Hybrid models?
Use RPN when safety or compliance risk dominates. Use Weighted when you want balanced control over defect count, S–O–D, and cost via weights. Use Hybrid when cost differences are large and you want risk amplified by loss.
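A Weighted model can be sketched as a weighted sum of each factor normalized to its column maximum. This is an assumed form with illustrative default weights, not the calculator's published formula:

```python
def weighted_score(defects, rpn, cost, max_defects, max_rpn, max_cost,
                   w_defects=0.4, w_rpn=0.4, w_cost=0.2):
    """Assumed form: weighted sum of factors, each normalized to its column max."""
    return (w_defects * defects / max_defects
            + w_rpn * rpn / max_rpn
            + w_cost * cost / max_cost)

# Example row from the sample table: worn cutting tool
# (48 defects, RPN = 7*6*6 = 252, cost 3.50; column maxima from the table)
score = weighted_score(48, 252, 3.50, max_defects=48, max_rpn=252, max_cost=9.25)
print(round(score, 3))
```

Because every factor is normalized before weighting, the weights directly express how much each dimension should influence the ranking.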
How should we score Detection on the 1–10 scale?
Score low numbers when controls almost always catch the defect before release, such as automated interlocks or 100% vision inspection. Score high numbers when detection is weak, infrequent, or relies on manual sampling with variable judgment.
Can I use this for non-manufacturing quality issues?
Yes. Replace “defect count” with incident frequency, complaints, or audit findings. Severity can reflect customer impact, Occurrence reflects how often it happens, and Detection reflects how reliably your checks identify the issue before it reaches the customer.
What if my main metric is downtime minutes or scrap cost?
Enter downtime minutes as the primary value and set cost per defect to 1, or set primary as count and put downtime value into cost per defect. The ranking stays meaningful as long as the unit is consistent across rows.
How do we validate the top ranked cause before closing actions?
Confirm with evidence: time-stamped checks, controlled trials, before/after capability results, and a sustained improvement period. If possible, isolate the cause by changing one factor at a time and verifying the CTQ returns to baseline when reverted.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of the results. Please consult other sources as well.