Analyzer Inputs
Example data table
Sample matrix snippet showing how relationship scoring drives cause ranking.
| Cause | Category | Returns (9) | Rework (7) | Scrap (6) | Score |
|---|---|---|---|---|---|
| Worn fixture clamps | Machine | Strong | Medium | Medium | (9×9)+(7×3)+(6×3)=120 |
| Operator handling variation | Man | Medium | Strong | Weak | (9×3)+(7×9)+(6×1)=96 |
| Incoming surface defects | Material | Strong | Weak | Strong | (9×9)+(7×1)+(6×9)=142 |
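The table above can be recomputed directly. The sketch below is a minimal illustration, assuming the default 1–3–9 relationship scale described later; the dictionary names are hypothetical, not part of the calculator.

```python
# Default relationship scale assumed from the article (None=0, Weak=1, Medium=3, Strong=9).
REL = {"None": 0, "Weak": 1, "Medium": 3, "Strong": 9}

# Effect weights from the sample matrix header: Returns (9), Rework (7), Scrap (6).
weights = {"Returns": 9, "Rework": 7, "Scrap": 6}

# Relationship ratings from the sample rows.
causes = {
    "Worn fixture clamps":         {"Returns": "Strong", "Rework": "Medium", "Scrap": "Medium"},
    "Operator handling variation": {"Returns": "Medium", "Rework": "Strong", "Scrap": "Weak"},
    "Incoming surface defects":    {"Returns": "Strong", "Rework": "Weak",   "Scrap": "Strong"},
}

# Score(cause) = sum of Weight(effect) x Relationship(cause, effect).
scores = {
    cause: sum(weights[e] * REL[label] for e, label in rels.items())
    for cause, rels in causes.items()
}
print(scores)
# {'Worn fixture clamps': 120, 'Operator handling variation': 96, 'Incoming surface defects': 142}
```

Running this reproduces the Score column, with "Incoming surface defects" ranked highest.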
Formula used
This analyzer uses a weighted cause-effect matrix to quantify influence.
| Component | Definition |
|---|---|
| Effect weight | Importance of each effect (e.g., returns, rework, scrap). |
| Relationship value | None=0; Weak, Medium, and Strong map to customizable numeric values (e.g., 1, 3, 9). |
| Cause score | Score(cause) = Σ [ Weight(effect i) × Relationship(cause, effect i) ] |
| Influence % | Influence% = Score / (MaxRelationship × ΣWeights) × 100 |
Higher scores suggest higher leverage for root-cause verification and corrective action.
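The Influence% formula above can be expressed as a small helper. This is a sketch under the article's definitions; the function name and `max_rel` default are illustrative assumptions.

```python
def influence_pct(score, effect_weights, max_rel=9):
    """Influence% = Score / (MaxRelationship x Sum of Weights) x 100."""
    return score / (max_rel * sum(effect_weights)) * 100

# Sample-matrix denominator: 9 x (9 + 7 + 6) = 198 is the maximum possible score.
print(round(influence_pct(142, [9, 7, 6]), 1))  # 71.7
print(round(influence_pct(96, [9, 7, 6]), 1))   # 48.5
```

A cause that scored the theoretical maximum of 198 would land at exactly 100, which is what makes the percentage comparable across matrices with different weight totals.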
How to use this calculator
- Write a clear problem statement (the main effect you observe).
- Add 3–6 effects that represent business or quality impact.
- Assign weights to effects based on importance and urgency.
- List potential causes and choose a category (6M recommended).
- For each cause, set relationship strength to each effect.
- Click Analyze, then validate top causes with data and tests.
- Export CSV/PDF to share findings and track improvements.
Mapping
Defect-to-cause mapping strengthens investigations when multiple drivers coexist. Start with one clear problem statement, then list effects such as customer returns, rework hours, scrap cost, or defect ppm. A good practice is to include three to six effects so the team stays focused yet captures downstream impact. When data exists, use the last 30 to 90 days to define baselines and confirm that the issue is repeatable. Stratify by shift, line, lot, and operator; charts often reveal a dominant 60/40 split or one outlier station within minutes.
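The stratification step can be sketched with simple tallies. The records below are hypothetical example data, not from the article, and the field choice (shift, station) is one of the stratification axes mentioned above.

```python
from collections import Counter

# Hypothetical defect records from the baseline window, as (shift, station) pairs.
records = [
    ("A", "ST-3"), ("A", "ST-3"), ("B", "ST-1"),
    ("A", "ST-3"), ("B", "ST-2"), ("A", "ST-1"),
]

by_shift = Counter(shift for shift, _ in records)
by_station = Counter(station for _, station in records)

print(by_shift.most_common())    # a dominant split: [('A', 4), ('B', 2)]
print(by_station.most_common())  # one outlier station (ST-3) stands out
```

Even a tally this simple often surfaces the dominant split or outlier station worth investigating first.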
Weighting
Selecting effects and weights is where prioritization becomes concrete. If returns threaten reputation, weight them 8–10; if scrap is small, weight it 3–5. Normalize weights when stakeholders disagree on scales, because a 100-point basis makes comparisons easier across projects. Keep weights stable during one analysis cycle, then update them only after a review meeting or a shift in business targets.
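Normalizing to a 100-point basis, as suggested above, is a one-line rescale. The function name is illustrative; rounding to one decimal is an assumption for readability.

```python
def normalize_weights(weights):
    """Rescale effect weights so they sum to (approximately) 100."""
    total = sum(weights.values())
    return {name: round(w / total * 100, 1) for name, w in weights.items()}

# The sample matrix's 9/7/6 weights on a 100-point basis:
print(normalize_weights({"Returns": 9, "Rework": 7, "Scrap": 6}))
# {'Returns': 40.9, 'Rework': 31.8, 'Scrap': 27.3}
```

Because only the relative proportions matter, rankings are unchanged by normalization; it just makes weights comparable across projects that started from different scales.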
Scaling
Relationship ratings translate judgment into structured numbers. The calculator uses None, Weak, Medium, and Strong, mapped to configurable values like 0, 1, 3, and 9. Choose a spread that reflects leverage; a 1–3–9 scale emphasizes standout causes, while a 1–2–4 scale is more conservative. Record evidence notes alongside strong links, for example: torque logs, fixture wear measurements, or supplier inspection results.
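The effect of scale choice can be seen by scoring the same cause under both spreads mentioned above. This is a sketch; the scale dictionaries simply encode the 1–3–9 and 1–2–4 options.

```python
# Two relationship scales discussed above: aggressive (1-3-9) vs. conservative (1-2-4).
SCALE_1_3_9 = {"None": 0, "Weak": 1, "Medium": 3, "Strong": 9}
SCALE_1_2_4 = {"None": 0, "Weak": 1, "Medium": 2, "Strong": 4}

def score(rels, weights, scale):
    """Weighted sum of a cause's relationship values across all effects."""
    return sum(w * scale[rels[effect]] for effect, w in weights.items())

# "Incoming surface defects" from the sample matrix.
rels = {"Returns": "Strong", "Rework": "Weak", "Scrap": "Strong"}
weights = {"Returns": 9, "Rework": 7, "Scrap": 6}

print(score(rels, weights, SCALE_1_3_9))  # 142
print(score(rels, weights, SCALE_1_2_4))  # 67
```

The wider 1–3–9 spread stretches the gap between strong and weak links, which is why it highlights standout causes more sharply.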
Interpreting
Scores and influence percentages help separate noise from leverage. A cause score is the weighted sum of its relationships across all effects. Influence% divides that score by the maximum possible score, creating a comparable scale from 0 to 100. Bands guide attention: High (≥70%) causes are candidates for immediate verification, Medium (40–69%) warrant targeted checks, and Low (<40%) are left for monitoring or elimination.
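The banding rule above maps directly to a small function. The name and threshold encoding are illustrative; the cutoffs are exactly the High/Medium/Low boundaries stated in the text.

```python
def band(influence_pct):
    """Map an Influence% value to the attention bands: High (>=70), Medium (40-69), Low (<40)."""
    if influence_pct >= 70:
        return "High"
    if influence_pct >= 40:
        return "Medium"
    return "Low"

print(band(71.7), band(48.5), band(12.0))  # High Medium Low
```

In the sample matrix, "Incoming surface defects" (about 71.7%) lands in the High band and would be the first candidate for verification.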
Sustaining
Turning rankings into corrective actions requires validation steps. For each high-band cause, define a test: swap a fixture, run a controlled sample of 20–50 units, or audit an operator method against standard work. Confirm improvement with metrics such as defect ppm, first-pass yield, or Cpk. After actions, rerun the matrix to ensure the influence shifts and the problem stays under control. Lock in gains with control plan updates, owners and due dates, and verify stability across two production lots with weekly audits.
FAQs
1. What does the Influence% represent?
Influence% compares a cause score to the maximum possible score for your matrix. It standardizes results from 0 to 100, making different projects and weight scales easier to compare.
2. When should I normalize effect weights?
Normalize when teams use different scales, such as 1–5 versus 1–10. It converts weights to a 100-point basis so rankings depend on relative importance, not the original scoring range.
3. How many effects and causes are practical?
Use three to six effects for focus, and five to fifteen causes for coverage. If the list grows larger, group similar causes first, then run separate analyses by line, shift, or product family.
4. Can I change the Weak/Medium/Strong numbers?
Yes. Adjust the relationship values to reflect how sharply you want to separate causes. A 1–3–9 spread highlights standout drivers, while tighter spreads reduce overconfidence in early hypotheses.
5. Does a High band prove a root cause?
No. It flags candidates with high modeled influence. Always verify with data: checks, trials, audits, or controlled runs. Confirm the defect metric improves and stays stable after the countermeasure.
6. How should I use the CSV and PDF exports?
Use CSV for deeper sorting, Pareto charts, and adding evidence notes. Use the PDF to share a snapshot in reviews, including the problem statement, effect weights, and top-ranked causes with bands.