Track allowed failures, downtime minutes, and request health. See burn rate, consumption, and release risk. Plan incidents, launches, and freezes using dependable service signals.
Enter your SLO window, request counts, and downtime data. Results appear above this form after submission.
This worked example shows how the calculator behaves with a realistic 30-day SRE window.
| Service | SLO | Window | Total Requests | Failed | Slow | Weight | Downtime | Allowed Bad Events | Actual Bad Events | Burn Rate |
|---|---|---|---|---|---|---|---|---|---|---|
| Checkout API | 99.90% | 30 days | 2,000,000 | 850 | 400 | 100% | 18 min | 2,000 | 1,250 | 0.63x |
In this example, 62.50% of the request budget is consumed, leaving 750 weighted bad events for future releases.
Error Budget % = 100 − SLO Target %
Allowed Bad Events = Total Requests × (Error Budget % ÷ 100)
Weighted Slow Requests = Slow Requests × (Slow Weight % ÷ 100)
Actual Bad Events = Failed Requests + Weighted Slow Requests
Observed Availability % = ((Total Requests − Actual Bad Events) ÷ Total Requests) × 100
Event Consumed % = (Actual Bad Events ÷ Allowed Bad Events) × 100
Effective Minutes = Window Minutes − Planned Maintenance Minutes
Allowed Downtime = Effective Minutes × (Error Budget % ÷ 100)
Downtime Consumed % = (Downtime Minutes ÷ Allowed Downtime) × 100
Burn Rate = Actual Error Rate ÷ Allowed Error Rate
Overall Consumed % = max(Event Consumed %, Downtime Consumed %)
Overall Remaining % = 100 − Overall Consumed %
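The formulas above can be sketched in a few lines of Python. This is a minimal illustration driven by the worked example's numbers, not the calculator's actual source code:

```python
def error_budget(slo_pct, window_minutes, maintenance_minutes,
                 total_requests, failed, slow, slow_weight_pct,
                 downtime_minutes):
    """Apply the error-budget formulas above to one service window."""
    budget_pct = 100 - slo_pct                          # Error Budget %
    allowed_bad = total_requests * budget_pct / 100     # Allowed Bad Events
    actual_bad = failed + slow * slow_weight_pct / 100  # failed + weighted slow
    event_consumed = actual_bad / allowed_bad * 100     # Event Consumed %
    effective = window_minutes - maintenance_minutes    # Effective Minutes
    allowed_downtime = effective * budget_pct / 100     # Allowed Downtime
    downtime_consumed = downtime_minutes / allowed_downtime * 100
    burn_rate = (actual_bad / total_requests) / (budget_pct / 100)
    overall = max(event_consumed, downtime_consumed)    # Overall Consumed %
    return {
        "event_consumed_pct": event_consumed,
        "downtime_consumed_pct": downtime_consumed,
        "burn_rate": burn_rate,
        "overall_consumed_pct": overall,
        "overall_remaining_pct": 100 - overall,
    }

# Worked example: Checkout API, 99.9% SLO over 30 days (43,200 minutes),
# no planned maintenance, 18 minutes of downtime.
metrics = error_budget(99.9, 43_200, 0, 2_000_000, 850, 400, 100, 18)
# event_consumed_pct ≈ 62.5 and burn_rate ≈ 0.63, matching the table above
```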
Step 1: Enter the service name and your SLO target, such as 99.9%.
Step 2: Set the measurement window in days, usually 7, 28, or 30.
Step 3: Add total requests, failed requests, and slow requests for the same period.
Step 4: Choose how much slow traffic should count against the budget.
Step 5: Enter downtime minutes and exclude any planned maintenance minutes.
Step 6: Add incident count and release count for a better release-pressure view.
Step 7: Click Calculate Error Budget to see the result summary above the form.
Step 8: Use the CSV or PDF buttons to export the metrics.
An error budget is the amount of unreliability your service can spend while still meeting its SLO. It helps teams balance feature delivery and reliability work.
Many teams treat severe latency breaches as user-visible failures. Weighting slow requests lets you reflect that impact without always counting every slow request as a full error.
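As a quick sketch of that weighting (the 50% weight here is an assumed setting for illustration, not a recommendation):

```python
failed = 850
slow = 400
slow_weight_pct = 50  # assumed: each slow request counts as half a failure

weighted_slow = slow * slow_weight_pct / 100  # 400 × 0.5 = 200.0
actual_bad_events = failed + weighted_slow    # 850 + 200 = 1050.0
```

At 100% weight the same 400 slow requests would count as 400 full errors, which is how the worked example above arrives at 1,250 bad events.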
Burn rate compares actual error spending with the allowed rate. A burn rate above 1 means the budget is disappearing faster than the SLO can safely support.
Planned maintenance is often excluded from monitored time because it is expected and controlled. Removing it gives a cleaner view of unexpected reliability loss.
Use the same window your team uses for SLO reporting. Thirty days is common, but weekly windows can be useful for faster operational decisions.
Yes. Request-level data captures user impact during degraded service, while downtime captures complete service unavailability. Comparing both gives a more conservative reliability picture.
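That comparison reduces to taking the larger of the two consumption figures. A minimal sketch, using the worked example's numbers:

```python
event_consumed_pct = 62.5      # request-level view: 1,250 of 2,000 allowed bad events
downtime_consumed_pct = 41.67  # downtime view: 18 of 43.2 allowed minutes
overall_consumed_pct = max(event_consumed_pct, downtime_consumed_pct)
# Requests are the binding constraint here, so overall consumption is 62.5%
```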
It estimates delivery pressure by comparing incidents with release volume. Higher values may signal weak testing, risky deployment patterns, or insufficient rollback controls.
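The page does not state the exact formula, but a plain incidents-per-release ratio captures the idea; both counts below are hypothetical:

```python
incidents = 6   # hypothetical incident count for the window
releases = 40   # hypothetical release count for the same window

release_pressure = incidents / releases  # 0.15 incidents per release
# A rising ratio suggests changes are shipping faster than quality controls absorb
```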
Slow releases when budget consumption is high, burn rate exceeds 1, or incidents spike. That usually means reliability work should take priority over risky change volume.
Important Note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of the results. Please consult other sources as well.