| Rank | Initiative | BV | TC | RR | Size | CoD | WSJF |
|---|---|---|---|---|---|---|---|
| 1 | Mobile onboarding refresh | 8 | 5 | 3 | 3 | 16 | 5.33 |
| 2 | Checkout performance | 13 | 8 | 5 | 5 | 26 | 5.2 |
| 3 | Fraud detection rules | 5 | 13 | 8 | 8 | 26 | 3.25 |
| 4 | Infrastructure upgrade | 3 | 5 | 13 | 8 | 21 | 2.63 |
| 5 | Analytics self-serve dashboard | 8 | 3 | 5 | 13 | 16 | 1.23 |
Cost of Delay (CoD) is the weighted sum of Business Value (BV), Time Criticality (TC), and Risk Reduction (RR): CoD = (BV × wBV) + (TC × wTC) + (RR × wRR)
WSJF is computed as: WSJF = CoD ÷ Job Size
Higher WSJF means more cost-of-delay avoided per unit of effort. If you keep weights at 1, the calculator matches the classic WSJF approach.
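As a minimal sketch of the two formulas above (a hypothetical helper, not the calculator's actual code), with all weights defaulting to 1:

```python
def wsjf(bv, tc, rr, size, w_bv=1, w_tc=1, w_rr=1):
    """CoD = BV*wBV + TC*wTC + RR*wRR; WSJF = CoD / size."""
    if size <= 0:
        raise ValueError("job size must be positive")
    cod = bv * w_bv + tc * w_tc + rr * w_rr
    return cod, cod / size

# First row of the table: BV 8, TC 5, RR 3, size 3
cod, score = wsjf(8, 5, 3, 3)
print(cod, round(score, 2))  # 16 5.33
```

With the default weights this reproduces the table's CoD and WSJF columns exactly.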
- List each initiative you want to compare.
- Assign BV, TC, and RR using one consistent scale.
- Estimate job size using relative effort points.
- Optional: adjust weights to match your strategy.
- Click Calculate WSJF to rank items.
- Export CSV for spreadsheets or PDF for sharing.
Why WSJF Improves Prioritization
WSJF turns debate into comparable numbers by dividing delay impact by effort. Teams that re-score the same backlog each sprint can see rankings shift as urgency changes. If initiative A has a CoD of 30 and a size of 5, its score is 6.0, while an item with a CoD of 24 at size 2 scores 12.0 and usually takes priority. This calculator keeps the math consistent, so decisions are repeatable rather than driven by whoever argues loudest.
Choosing Consistent Scoring Scales
Use one scale across all rows so ratios stay meaningful. Common options are 1–10, 1–20, or Fibonacci-like steps such as 1, 2, 3, 5, 8, 13. Mixing scales inflates some items and hides others. When stakeholders disagree, calibrate with two reference items: one “quick win” and one “big strategic bet”. Then score every new idea relative to those anchors and update quarterly.
Interpreting Cost of Delay Components
Cost of Delay is the weighted sum of value, urgency, and risk reduction. Value can reflect revenue, retention, NPS, or compliance outcomes. Urgency captures deadlines, seasonal windows, and compounding losses from waiting. Risk reduction includes de-risking technology, learning, and unlocking options. With weights, you can emphasize what matters now; for example, raising urgency weight from 1 to 1.5 will push deadline-driven work upward without rewriting every score.
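To illustrate the weight example with hypothetical scores (not rows from the table above), raising wTC from 1 to 1.5 can flip the ordering between a deadline-driven item and a value-driven one:

```python
def cod(bv, tc, rr, w_bv=1, w_tc=1, w_rr=1):
    # Weighted Cost of Delay, as defined earlier on this page
    return bv * w_bv + tc * w_tc + rr * w_rr

deadline_item = dict(bv=3, tc=13, rr=3)   # urgency-heavy
value_item = dict(bv=13, tc=3, rr=5)      # value-heavy

# Equal weights: the value item ranks higher (21 vs 19)
print(cod(**value_item), cod(**deadline_item))
# Urgency weight 1.5: the deadline item pulls ahead (22.5 vs 25.5)
print(cod(**value_item, w_tc=1.5), cod(**deadline_item, w_tc=1.5))
```

No individual scores were rewritten; only the weight changed, which is the point of keeping weights separate from scores.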
Sizing Work for Fair Comparisons
Job size should represent relative effort, not calendar time. Use the same estimation approach for all items, and keep sizes above zero to avoid distortion. If an initiative is uncertain, split it into a discovery slice and a delivery slice. Smaller slices often reveal higher WSJF because they reduce delay faster. As a rule, if an item exceeds your sprint capacity, break it down until each slice can be delivered within one to two iterations.
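With made-up numbers, splitting a large initiative can surface a high-WSJF slice, because each slice has its own (smaller) size and its own share of the Cost of Delay:

```python
def wsjf(cod, size):
    return cod / size

# One big item vs. the same work split into two slices (illustrative values)
monolith = wsjf(cod=26, size=13)    # 2.0
discovery = wsjf(cod=10, size=2)    # 5.0 -- the learning slice jumps the queue
delivery = wsjf(cod=16, size=8)     # 2.0
print(monolith, discovery, delivery)
```

The discovery slice scores far higher than the monolith even though the combined work is unchanged, which is why slicing often reorders a backlog.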
Using Exports for Stakeholder Alignment
Exported tables help share the ranking and the inputs behind it. A CSV can be filtered by product area, owner, or quarter, while a PDF snapshot supports reviews and approvals. In governance meetings, focus on the top five scores and the assumptions behind them, not every row. Re-score after major signals such as new customer data, incidents, or regulatory dates to keep the roadmap current.
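A sketch of what the CSV export might contain, using Python's standard csv module; the column names here are assumptions, not the tool's actual schema:

```python
import csv
import io

# Hypothetical ranked rows, mirroring the table at the top of the page
rows = [
    {"rank": 1, "initiative": "Mobile onboarding refresh", "cod": 16, "size": 3, "wsjf": 5.33},
    {"rank": 2, "initiative": "Checkout performance", "cod": 26, "size": 5, "wsjf": 5.2},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["rank", "initiative", "cod", "size", "wsjf"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Because the export carries the inputs (CoD, size) alongside the score, reviewers can audit how each rank was produced rather than taking it on faith.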
1) What does WSJF stand for?
It stands for Weighted Shortest Job First. It prioritizes initiatives by dividing Cost of Delay by Job Size, highlighting the best value-per-effort options for the next planning cycle.
2) How do I choose BV, TC, and RR scores?
Pick a single scale and score consistently. BV reflects benefit, TC reflects urgency, and RR reflects uncertainty reduction or enablement. Use anchors and revisit scores when new evidence changes outcomes.
3) What if two initiatives have the same WSJF?
Treat them as roughly equivalent. Use tie-breakers such as higher Cost of Delay, smaller size, strategic alignment, or dependency order. If the tie persists, run a short experiment to reduce uncertainty.
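The numeric tie-breakers can be encoded as a sort key; a minimal sketch with hypothetical items:

```python
items = [
    {"name": "B", "cod": 12, "size": 4},   # WSJF 3.0
    {"name": "A", "cod": 24, "size": 8},   # WSJF 3.0 -- same score, more CoD at stake
]

# Sort by WSJF descending, then higher CoD, then smaller size
ranked = sorted(items, key=lambda i: (-(i["cod"] / i["size"]), -i["cod"], i["size"]))
print([i["name"] for i in ranked])  # ['A', 'B']
```

Qualitative tie-breakers such as strategic alignment or dependency order still need human judgment on top of the sort.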
4) Should Job Size be hours or story points?
Use relative sizing, not exact hours. Story points or effort buckets work well because they compare work consistently. Keep the method stable across teams so WSJF ratios remain meaningful.
5) When should I adjust the weights?
Adjust weights when strategy changes. For example, raise urgency weight during a deadline period, or increase risk weight during technical modernization. Avoid frequent changes; instead, review weights at regular planning checkpoints.
6) Is WSJF enough for final roadmap decisions?
It is a strong starting point, not a complete decision system. Also consider capacity, dependencies, regulatory needs, and customer commitments. Document exceptions so stakeholders understand why an item moved up or down.