Inputs
Example data table
| Scenario | Instances | Hours | IT energy (kWh) | PUE | Grid intensity (g CO2e/kWh) | Estimated emissions (kg CO2e) |
|---|---|---|---|---|---|---|
| Small web app | 2 | 720 | 42.87 | 1.20 | 400 | 20.58 |
| Analytics batch | 8 | 120 | 63.55 | 1.25 | 700 | 55.61 |
| Media delivery | 4 | 300 | 357.28 | 1.30 | 350 | 162.56 |
Formulas used
- Idle power: idleW = maxW × idleFraction
- Effective power: effectiveW = idleW + (maxW − idleW) × utilization
- Compute energy: computeKWh = instances × hours × effectiveW ÷ 1000
- Storage energy: storageKWh = storageGBMonth × storageKWhPerGBMonth
- Transfer energy: transferKWh = transferGB × transferKWhPerGB
- IT energy: itKWh = computeKWh + storageKWh + transferKWh
- Facility energy: facilityKWh = itKWh × PUE
- Effective intensity: effG = gridG × (1 − renewableShare)
- Operational emissions: operationalKg = facilityKWh × effG ÷ 1000
- Embodied emissions: embodiedKg = instances × hours × embodiedGPerHour ÷ 1000
- Total emissions: grossKg = operationalKg + embodiedKg
- Net emissions: netKg = max(0, grossKg − offsetsKg)
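The formula chain above can be sketched as a single Python function. The input values in the usage example are illustrative, not taken from the table:

```python
def estimate_emissions(
    instances, hours, max_w, idle_fraction, utilization,
    storage_gb_month=0.0, storage_kwh_per_gb_month=0.0,
    transfer_gb=0.0, transfer_kwh_per_gb=0.0,
    pue=1.2, grid_g=400.0, renewable_share=0.0,
    embodied_g_per_hour=0.0, offsets_kg=0.0,
):
    # Effective power blends idle draw with load draw (watts)
    idle_w = max_w * idle_fraction
    effective_w = idle_w + (max_w - idle_w) * utilization
    # Energy components (kWh); /1000 converts watt-hours to kWh
    compute_kwh = instances * hours * effective_w / 1000
    storage_kwh = storage_gb_month * storage_kwh_per_gb_month
    transfer_kwh = transfer_gb * transfer_kwh_per_gb
    it_kwh = compute_kwh + storage_kwh + transfer_kwh
    facility_kwh = it_kwh * pue
    # Emissions (kg CO2e); /1000 converts grams to kilograms
    eff_g = grid_g * (1 - renewable_share)
    operational_kg = facility_kwh * eff_g / 1000
    embodied_kg = instances * hours * embodied_g_per_hour / 1000
    gross_kg = operational_kg + embodied_kg
    net_kg = max(0.0, gross_kg - offsets_kg)
    return {"it_kwh": it_kwh, "facility_kwh": facility_kwh,
            "operational_kg": operational_kg,
            "gross_kg": gross_kg, "net_kg": net_kg}

# Hypothetical small deployment: 2 instances for 720 h at 50 W max power
result = estimate_emissions(2, 720, max_w=50, idle_fraction=0.3,
                            utilization=0.4, pue=1.2, grid_g=400)
# result["it_kwh"] ≈ 41.76, result["operational_kg"] ≈ 20.04
```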
How to use this calculator
- Enter instance count and runtime hours for your reporting period.
- Set max power, utilization, and idle fraction using monitoring data.
- Provide storage volume and data egress for the same period.
- Choose a grid intensity preset or enter a local value.
- Adjust PUE and renewable share to match your hosting setup.
- Optionally add embodied emissions and offsets for net totals.
- Press Estimate to view results above the form.
- Use CSV or PDF export for reviews, audits, and planning.
Why cloud emissions vary
Cloud impact depends on electricity mix, facility efficiency, and workload shape. Two identical apps can differ widely when regions use different grids, climates, and cooling designs. This estimator separates IT energy from site overhead, then applies location intensity and renewable share to convert kilowatt-hours into carbon. Treat results as directional unless you have provider reports.
Translating compute into energy
Compute energy is modeled from instance count, runtime, and power behavior. Effective power blends idle draw with load draw using average utilization. This mirrors real servers that consume energy even when lightly used. Use monitoring to estimate utilization from CPU, memory, and accelerator metrics, then sanity-check with billing hours. Autoscaling that tracks demand usually lowers average effective power. When you know only instance-hours, start with conservative max power and refine later. For GPU workloads, set higher max power and realistic utilization. Consider shutting down idle resources overnight, rightsizing instance families, and using spot capacity for batch jobs to cut cost and emissions.
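The effect of shutting down idle resources overnight can be quantified directly from the compute-energy formula. The effective wattage below is an assumed figure for a lightly loaded instance, not a measured value:

```python
EFFECTIVE_W = 38  # assumed effective draw per instance (watts)

def compute_kwh(instances, hours, effective_w=EFFECTIVE_W):
    # instances × hours × effectiveW ÷ 1000, as in the formula list
    return instances * hours * effective_w / 1000

always_on = compute_kwh(4, 720)           # running the full month
nights_off = compute_kwh(4, 720 * 2 / 3)  # ~8 h/day shut down
saving = 1 - nights_off / always_on       # about one third of compute energy
```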
Storage and data transfer drivers
Storage is expressed in GB-month to match how capacity persists across time. An energy factor per GB-month captures disk type, replication, encryption, and durability. Data egress is measured in gigabytes and multiplied by an energy-per-GB factor to represent routing, switching, and edge delivery. For media platforms, transfer can rival compute, while databases often skew toward storage due to redundancy and backups.
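A short sketch of the storage and transfer terms; both energy factors below are illustrative assumptions and should be replaced with provider guidance or measurements:

```python
# Assumed factors, not provider-published values
STORAGE_KWH_PER_GB_MONTH = 0.0008  # e.g. a replicated SSD tier
TRANSFER_KWH_PER_GB = 0.005        # e.g. internet egress with edge delivery

storage_kwh = 500 * STORAGE_KWH_PER_GB_MONTH   # 500 GB-month stored -> 0.4 kWh
transfer_kwh = 2000 * TRANSFER_KWH_PER_GB      # 2 TB egress -> 10 kWh
# With these factors, transfer dominates, as it can for media platforms
```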
Understanding PUE and grid intensity
PUE scales IT energy to reflect cooling, power conversion, and building systems. A PUE closer to one means less overhead. Grid intensity, in grams per kWh, converts facility energy into emissions. Renewable share reduces the effective intensity for scenario testing, but you should align it with credible accounting methods. If you add embodied emissions per instance-hour, you can approximate hardware manufacturing allocation.
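The PUE and intensity steps compose as follows; the numbers in the comparison are illustrative scenario inputs, not measurements:

```python
def operational_kg(it_kwh, pue, grid_g, renewable_share=0.0):
    # Scale IT energy to facility energy, then apply effective intensity
    facility_kwh = it_kwh * pue
    eff_g = grid_g * (1 - renewable_share)
    return facility_kwh * eff_g / 1000  # grams -> kilograms

# 100 kWh of IT energy on a 500 g/kWh grid at PUE 1.2
base = operational_kg(100, pue=1.2, grid_g=500)                       # 60 kg
greener = operational_kg(100, pue=1.2, grid_g=500, renewable_share=0.5)  # 30 kg
```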
Turning results into decisions
Use the breakdown table to find the strongest levers. Low utilization signals overprovisioning, while high transfer volume points to caching and compression opportunities. Compare scenarios by changing region intensity, PUE, or workload hours. Export CSV for audit trails and PDF for stakeholders, then track improvements over time. Offsets can be entered to view net figures, but prioritize real reductions first.
FAQs
1. What should I enter for grid intensity?
Use a value published for your region or your provider’s sustainability reporting. If unavailable, start with 400–700 g CO2e/kWh as a sensitivity range. Keep the same value when comparing optimization options so changes reflect workload decisions.
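A sensitivity sweep across that range is straightforward; the facility energy below is an assumed example figure:

```python
facility_kwh = 120.0  # assumed facility energy for the reporting period

# Sweep the 400-700 g CO2e/kWh sensitivity range from the answer above
estimates = {g: facility_kwh * g / 1000 for g in (400, 550, 700)}
# estimates[400] == 48.0 kg, estimates[700] == 84.0 kg
```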
2. How do I estimate max power per instance?
Use hardware specs, monitoring, or a conservative proxy like 40-120 W for general-purpose CPU instances. For GPU or high-performance shapes, use higher values. The estimate matters most when runtime is long and utilization is steady.
3. Does renewable share make emissions zero?
No. It reduces the effective intensity for scenario modeling. Real accounting depends on contracts, certificates, and location-based versus market-based methods. Use renewable share to test how greener supply changes outcomes, not to claim neutrality by default.
4. Why is PUE included?
PUE captures data center overhead such as cooling, power distribution, and lighting. IT energy alone understates total electricity use. If you do not know PUE, 1.2 is a reasonable modern baseline, while older facilities can be higher.
5. How do storage and transfer factors work?
They translate capacity and data movement into energy. Lower factors may fit efficient storage tiers and short network paths; higher factors may reflect replication, frequent reads, or global delivery. Adjust them to match your architecture and provider guidance.
6. Can I use this for monthly reporting?
Yes. Enter monthly runtime hours, average GB-month stored, and total transfer for the month. Export CSV for documentation and keep your assumptions consistent. For formal reporting, reconcile with provider invoices and any official emissions reports.