Enter Supplier Data
Example Data Table
| Supplier | Period | Total Deliveries | On-time Deliveries | Units Received | Units Accepted | Promised Lead | Actual Lead | Target Resp. | Avg Resp. | Cost Var. | SPI | Rating |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Apex Components | Jan 2026 | 20 | 18 | 10000 | 9850 | 14d | 15d | 24h | 30h | +2.00% | ~91 | Good |
| Nova Metals | Q4 2025 | 35 | 33 | 18000 | 17910 | 10d | 10d | 12h | 9h | -1.50% | ~98 | Excellent |
| Orbit Plastics | Feb 2026 | 12 | 8 | 6000 | 5400 | 7d | 11d | 24h | 60h | +6.00% | ~66 | Needs Improvement |
Formula Used
- Delivery Score (%) = (On-time Deliveries ÷ Total Deliveries) × 100
- Quality Score (%) = (Accepted Units ÷ Total Units) × 100
- Lead Time Score (%) = 100 if Actual ≤ Promised, else (Promised ÷ Actual) × 100
- Response Score (%) = 100 if Avg ≤ Target, else (Target ÷ Avg) × 100
- Cost Score (%) = 100 if Cost Variance ≤ 0, else 100 − Cost Variance
- SPI = Weighted average of the five scores (0–100)
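The formulas above can be sketched in Python. This is a minimal illustration, not the calculator's actual implementation; equal weights are assumed by default, and the function names are invented for the example.

```python
# Sketch of the SPI calculation from the formulas above.
# Equal weights are the default; pass your own tuple to reweight.

def proportional_score(target: float, actual: float) -> float:
    """100 if actual <= target, else (target / actual) * 100.
    Used for both Lead Time Score and Response Score."""
    return 100.0 if actual <= target else (target / actual) * 100.0

def spi(total, on_time, received, accepted,
        promised_lead, actual_lead, target_resp, avg_resp,
        cost_variance_pct, weights=(1, 1, 1, 1, 1)) -> float:
    delivery = on_time / total * 100          # Delivery Score
    quality = accepted / received * 100       # Quality Score
    lead = proportional_score(promised_lead, actual_lead)
    response = proportional_score(target_resp, avg_resp)
    cost = 100.0 if cost_variance_pct <= 0 else 100.0 - cost_variance_pct
    scores = (delivery, quality, lead, response, cost)
    # Weighted average normalized by total weight (0-100 scale)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Apex Components, Jan 2026 (example table row, equal weights)
print(round(spi(20, 18, 10000, 9850, 14, 15, 24, 30, 2.0), 1))  # 92.0
```

With equal weights the example rows land near the table's approximate SPI values; the exact figures depend on the weights chosen in the calculator.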
How to Use This Calculator
- Enter totals and counts for the evaluation period.
- Fill lead time and response time fields using averages.
- Input cost variance as a percentage versus agreed pricing.
- Adjust weights to reflect your quality strategy.
- Click Calculate SPI to view results above the form.
- Use Download CSV or Download PDF for reporting.
Why SPI matters in supplier governance
Supplier Performance Index (SPI) turns delivery, quality, and service signals into one consistent score. In a 90‑day review, an SPI of 92 typically indicates stable supply, while 70 often correlates with frequent expediting and inspection overload. Using a single index helps procurement, quality, and operations agree on priorities and prevents debates driven by isolated incidents. Because each component is capped at 100, exceptional performance cannot hide major failures elsewhere. If on‑time is 60% but quality is 99%, a delivery‑heavy weighting makes the shortfall visible. This supports objective supplier conversations and fair comparisons across plants. Use monthly rolling averages to smooth seasonality and one‑off disruptions.
Interpreting the five component scores
Delivery Score reflects schedule reliability, while Quality Score captures accepted units versus total received. Lead Time Score rewards suppliers that meet promised lead times, and Response Score measures how quickly issues are acknowledged and corrected. Cost Score uses percentage variance versus the agreed price; for example, a +4% variance becomes a 96 cost score, supporting transparent commercial discussions.
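The cost-score rule from the paragraph above reduces to a one-line check, shown here as an illustrative sketch (the function name is not from the tool):

```python
# Worked cost-score example: a +4% variance yields 100 - 4 = 96;
# zero or negative variance (at or under agreed price) scores 100.
def cost_score(variance_pct: float) -> float:
    return 100.0 if variance_pct <= 0 else 100.0 - variance_pct

print(cost_score(4.0), cost_score(-1.5))  # 96.0 100.0
```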
Setting weights that match risk
Weights should mirror business risk. A regulated product line may set Quality at 40% and Delivery at 25%, while a low‑risk commodity might emphasize Cost at 30%. Weights do not need to sum to exactly 100; any proportional set works because the calculator normalizes by the total weight. Review weights quarterly so they stay aligned with demand volatility and defect trends.
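A quick sketch of why only the proportions matter: two weight sets that differ only by a constant factor normalize to the same SPI. The component scores below are illustrative.

```python
# Demonstrates weight normalization: (40, 25, 15, 10, 10) and the
# proportionally identical (0.40, 0.25, 0.15, 0.10, 0.10) give one SPI.
scores = [90.0, 98.5, 93.3, 80.0, 98.0]  # example component scores

def weighted_spi(scores, weights):
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

a = weighted_spi(scores, [40, 25, 15, 10, 10])
b = weighted_spi(scores, [0.40, 0.25, 0.15, 0.10, 0.10])
assert abs(a - b) < 1e-9  # same result after normalization
print(round(a, 1))  # 92.4
```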
Using thresholds for action plans
Define thresholds to trigger actions. Many teams use 90–100 as Preferred, 75–89 as Approved, and below 75 as Needs Improvement. Pair each band with a response: maintain status, request a corrective action plan, or launch a supplier development project. Tracking the same supplier over time highlights whether improvements are sustained or temporary.
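The banding described above maps directly to a small lookup. The cutoffs and band names follow the 90/75 example and should be adjusted to your own policy:

```python
# Maps an SPI to the action bands described above (cutoffs are the
# 90/75 example from the text, not a fixed standard).
def rating(spi: float) -> str:
    if spi >= 90:
        return "Preferred"           # maintain status
    if spi >= 75:
        return "Approved"            # request a corrective action plan
    return "Needs Improvement"       # launch supplier development project

print(rating(92), rating(80), rating(66))
```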
Reporting and continuous improvement
Exporting CSV supports scorecards and KPI dashboards, while PDF is ideal for audits and supplier meetings. Store inputs alongside outcomes to ensure traceability: volumes, on‑time counts, promised versus actual lead times, and response targets. When SPI moves, investigate which component changed first, then verify with incoming inspection data and delivery logs to close the loop.
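For teams building their own scorecard pipeline, a CSV row can be assembled with the standard library. The column names below mirror the example table but are assumptions, not the tool's export schema:

```python
# Minimal sketch of a CSV scorecard row; column names are illustrative.
import csv
import io

row = {
    "Supplier": "Apex Components", "Period": "Jan 2026",
    "Delivery": 90.0, "Quality": 98.5, "LeadTime": 93.3,
    "Response": 80.0, "Cost": 98.0, "SPI": 92.0, "Rating": "Preferred",
}
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

Saving one such row per supplier per period gives the traceable input/outcome history the paragraph recommends.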
FAQs
1) What period works best for scoring suppliers?
Use a consistent window such as monthly, quarterly, or rolling 90 days. Short windows react faster to disruptions, while longer windows reduce noise. Keep the same period when comparing suppliers in the same category.
2) How should partial or split deliveries be counted?
Count each delivery line as one shipment if it has its own due date and receipt. For split shipments against one PO line, treat the final receipt date as the delivery date, or track splits separately if expediting cost is significant.
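The final-receipt rule for split shipments can be sketched as follows; the dates are invented for illustration:

```python
# Split-shipment rule from the answer above: when one PO line arrives
# in several receipts, the final receipt date is the delivery date.
from datetime import date

due = date(2026, 2, 10)
receipts = [date(2026, 2, 5), date(2026, 2, 9), date(2026, 2, 12)]

delivered = max(receipts)      # final receipt closes the PO line
on_time = delivered <= due     # scored late: last receipt missed the due date
print(delivered, on_time)      # 2026-02-12 False
```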
3) What inputs are required for Lead Time and Response scores?
Enter average promised lead time and average actual lead time in days. For responsiveness, enter the target response time and the measured average response in hours. Values must be greater than zero to compute proportional scores.
4) How do I set weights without biasing the SPI unfairly?
Start with equal weights, then increase the weight for the metric that drives your largest operational risk. Document the rationale and review weights with stakeholders. The calculator normalizes by total weight, so only the proportions matter.
5) Why can mostly strong scores still show “Acceptable” or lower?
The rating is based on SPI bands. If one component drops sharply, the weighted average can fall below your threshold even when other components look strong. Review the component scores to identify the main driver.
6) When should I export CSV versus PDF?
Use CSV for dashboards, trend charts, and importing into scorecards. Use PDF for supplier reviews, audits, and sharing a fixed snapshot with management. Save the exported file with the period name for traceability.