| Scenario | Severity | Occurrence | Detection | Scrap / Total | Control Strength | RPN | Defect Rate |
|---|---|---|---|---|---|---|---|
| Label misprint escapes to customer | 9 | 4 | 7 | 18 / 1500 | 55% | 252 | 1.200% |
| Loose connector after vibration test | 8 | 6 | 5 | 40 / 2000 | 60% | 240 | 2.000% |
| Minor cosmetic scratch during packing | 4 | 5 | 3 | 25 / 5000 | 75% | 60 | 0.500% |
Root Priority Score (RPS) = RPN × (1 + Defect Rate/100) × (1 + Control Gap/100), where Control Gap = 100% − Control Strength
Time Pressure = 1 + min(1, Downtime Hours ÷ 24)
Adjusted Score = RPS × Cost Pressure × Time Pressure
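The scoring chain can be sketched in Python. Two assumptions are made explicit here: Control Gap is taken as the complement of Control Strength (100 − Control Strength), and Cost Pressure is passed through as a plain multiplier because its derivation is not given above.

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA Risk Priority Number; each input is 1-10, so the result spans 1-1000."""
    return severity * occurrence * detection

def root_priority_score(rpn_value, defect_rate_pct, control_strength_pct):
    """Amplify RPN by defect evidence and by the control gap.
    Assumes Control Gap = 100 - Control Strength."""
    control_gap_pct = 100 - control_strength_pct
    return rpn_value * (1 + defect_rate_pct / 100) * (1 + control_gap_pct / 100)

def adjusted_score(rps, cost_pressure=1.0, downtime_hours=0.0):
    """Apply business-urgency multipliers; cost_pressure is taken as-is
    because its formula is not defined in the text."""
    time_pressure = 1 + min(1, downtime_hours / 24)
    return rps * cost_pressure * time_pressure

# First table row: label misprint, S=9 O=4 D=7, 18/1500 scrap, 55% control strength
base = rpn(9, 4, 7)                       # 252, matching the table
rps = root_priority_score(base, 1.2, 55)  # 252 * 1.012 * 1.45, roughly 369.8
```

Note how a modest RPN can still produce a high RPS when the control gap is large, which is exactly the escape risk the tool is designed to surface.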
- Define the problem: describe the defect, location, and evidence (lot, shift, test).
- Write the root cause: ensure it is specific and can be proven or disproven.
- Capture the 5 Whys: add causal links to show how the issue was created.
- Score risk: pick Severity, Occurrence, Detection using consistent team criteria.
- Add performance data: enter scrap and total checked to reflect reality.
- Assess controls: enter current control strength to reveal control gaps.
- Export outputs: download CSV for analysis or PDF for audits and reviews.
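The inputs collected in the steps above can be modeled as a single record. This is a minimal sketch; the field names are illustrative and not the calculator's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    # Field names are illustrative, not the tool's actual schema.
    problem: str            # defect, location, and evidence (lot, shift, test)
    root_cause: str         # specific, provable statement
    five_whys: list         # causal chain from symptom to root cause
    severity: int           # 1-10, team rubric
    occurrence: int         # 1-10, team rubric
    detection: int          # 1-10, team rubric
    scrap_units: int
    total_units: int
    control_strength_pct: float

    @property
    def defect_rate_pct(self):
        return 100 * self.scrap_units / self.total_units

a = Assessment("Label misprint escapes to customer", "Print template drift",
               ["Why 1 ...", "Why 2 ..."], 9, 4, 7, 18, 1500, 55.0)
```

Keeping scrap and total units in the record, rather than a pre-computed rate, lets the defect rate and DPMO be recalculated consistently for exports.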
Prioritization that aligns teams
Quality reviews often stall when teams debate which issue matters most. This calculator converts discussion into a repeatable priority score by combining severity, occurrence, and detection with real defect evidence. When the same scoring logic is used across shifts and lines, meetings move from opinions to action owners and deadlines.
Risk scoring with measurable inputs
The core risk number is RPN, calculated as Severity × Occurrence × Detection, giving a 1–1000 scale. The tool then strengthens that view with Defect Rate and DPMO using Scrap Units and Total Units checked. DPMO normalizes performance for different lot sizes, which is useful when comparing suppliers or product families.
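Both metrics come from the same two counts. A small sketch, assuming one defect opportunity per unit since the text does not specify an opportunity count:

```python
def defect_rate_pct(scrap, total):
    """Scrap units as a percentage of total units checked."""
    return 100 * scrap / total

def dpmo(scrap, total, opportunities_per_unit=1):
    """Defects per million opportunities; one opportunity per unit assumed here."""
    return 1_000_000 * scrap / (total * opportunities_per_unit)

# From the table: 18/1500 -> 1.2% and 12,000 DPMO; 40/2000 -> 2.0% and 20,000 DPMO
```

Because DPMO is normalized per million opportunities, 18 scrapped labels in a 1,500-unit lot and a proportionally identical failure rate in a 150,000-unit lot yield the same DPMO, which is what makes cross-supplier comparison meaningful.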
To keep scoring stable, define numeric anchors for each 1–10 scale. For example, map Severity 10 to safety or regulatory risk, 7 to customer return potential, and 3 to internal rework only. Map Detection 10 to no in‑process check, 5 to sampling inspection, and 2 to automated verification. Storing these definitions in your SOP improves repeatability across auditors.
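An anchor rubric like the one described can live as a simple lookup alongside the SOP. The entries below are the examples from the paragraph; intermediate values would be filled in by your team, and the nearest-anchor lookup is one possible convention, not the tool's behavior.

```python
# Anchors taken from the examples above; define the remaining levels in your SOP.
SEVERITY_ANCHORS = {
    10: "Safety or regulatory risk",
    7: "Customer return potential",
    3: "Internal rework only",
}
DETECTION_ANCHORS = {
    10: "No in-process check",
    5: "Sampling inspection",
    2: "Automated verification",
}

def describe(anchors, score):
    """Return the nearest defined anchor at or below the score."""
    defined = [k for k in sorted(anchors) if k <= score]
    return anchors[defined[-1]] if defined else "Below lowest anchor"
```

Auditors scoring against the same printed anchors is what makes a Severity 7 on night shift mean the same thing as a Severity 7 on day shift.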
Priority bands can be tuned to your organization’s tolerance. Treat Adjusted Scores above 800 as critical, 400–799 as high, 200–399 as medium, and below 200 as low. Track the band, not the number, so teams see corrective actions move a cause from high to medium to low. Review thresholds quarterly, since volume and product mix change.
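The banding above is a plain threshold function. A sketch using those example cutoffs (the text leaves exactly 800 between "above 800" and "400–799"; here it falls in "high"):

```python
def priority_band(adjusted_score):
    """Map an Adjusted Score to a band using the article's example thresholds."""
    if adjusted_score > 800:
        return "critical"
    if adjusted_score >= 400:
        return "high"
    if adjusted_score >= 200:
        return "medium"
    return "low"
```

Reporting the band rather than the raw score keeps review meetings focused on whether a cause crossed a threshold, not on small numeric swings.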
Controls and gaps become visible
Control Strength captures how well current prevention and detection steps work in practice. The Root Priority Score increases as Control Gap grows, highlighting causes that can escape despite inspections. This supports control planning by showing where poka‑yoke, tighter parameters, or improved measurement systems will reduce exposure fastest.
Operational urgency and cost pressure
Not every defect has the same business impact. Optional downtime hours and cost impact apply pressure factors to the score so high-cost stops rise to the top even if defect counts are modest. This is helpful for balancing customer risk, throughput risk, and rework budgets during constrained weeks.
Audit-ready documentation and exports
Each assessment records the problem statement, a root cause summary, and the 5 Whys chain. CSV export supports trend analysis and dashboards, while the PDF export provides a clean review artifact for audits and management review. Used weekly, teams can track score reductions after corrective actions and verify sustained control strength.
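A weekly trend log can be built by appending each exported row to one CSV. This is a sketch with hypothetical column names; the tool's actual CSV columns may differ.

```python
import csv
import os

# Hypothetical column names; match them to the tool's actual CSV export.
FIELDS = ["date", "problem", "root_cause", "severity", "occurrence",
          "detection", "rpn", "defect_rate_pct", "adjusted_score"]

def append_assessment(path, row):
    """Append one assessment to a running trend log, writing the header once."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)
```

Recalculating and appending after each corrective action gives the score-reduction trend the paragraph describes, ready to plot or attach to CAPA records.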
FAQs
1) What does the Root Priority Score represent?
It is an amplified risk score that starts with RPN and increases when defect evidence is stronger and controls are weaker. It helps rank root causes consistently across investigations.
2) How should we choose Severity, Occurrence, and Detection values?
Use a shared team rubric. Severity reflects customer or safety impact, Occurrence reflects likelihood, and Detection reflects how likely the issue is to escape existing checks. Consistency matters more than perfection.
3) Why include both Defect Rate and DPMO?
Defect Rate is intuitive for daily work, while DPMO standardizes defects for large-volume comparisons. Together they support decisions across different lot sizes or inspection samples.
4) What is a good Control Strength percentage?
Higher is better, but it must reflect reality. If audits or escapes occur, lower the value until it matches performance. Use improvements to raise the value over time.
5) When should we use downtime and cost impact inputs?
Use them when production stops, premium freight, or warranty costs materially affect priorities. They help align quality actions with business urgency during capacity or delivery constraints.
6) How do we use exports in continuous improvement?
Save CSV results weekly to plot score trends and defect metrics. Attach the PDF to CAPA records for audits. Recalculate after corrective actions to confirm risk reduction.