Calculator
Example configurations
| Profile | Rate (MT/s) | Width (bits) | Channels | Overhead | Efficiency | Est. Effective (GB/s) |
|---|---|---|---|---|---|---|
| DDR5 dual-channel | 5600 | 64 | 2 | 3% | 85% | ≈ 73.9 |
| DDR4 dual-channel | 3200 | 64 | 2 | 4% | 80% | ≈ 39.3 |
| HBM3 stack | 6400 | 1024 | 1 | 2% | 90% | ≈ 722.5 |
Formula used
bytes_per_transfer = width_bits ÷ 8
bytes_per_second = MT/s × 10⁶ × bytes_per_transfer × channels
bandwidth = bytes_per_second ÷ 10⁹ (GB/s) or ÷ 1024³ (GiB/s)
effective = bandwidth × (1 − overhead) × efficiency
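The formula above can be sketched as a short function. This is a minimal illustration of the arithmetic, not the calculator's actual code; the function and parameter names are chosen here for clarity:

```python
def effective_bandwidth(mt_s, width_bits, channels,
                        overhead=0.03, efficiency=0.85, binary=False):
    """Estimate effective memory bandwidth.

    mt_s       -- transfer rate in megatransfers per second
    width_bits -- bus width per channel in bits
    channels   -- number of channels
    overhead   -- protocol overhead fraction (0.03 means 3%)
    efficiency -- bus utilization fraction
    binary     -- report GiB/s instead of GB/s
    """
    bytes_per_second = mt_s * 1e6 * (width_bits / 8) * channels
    divisor = 1024 ** 3 if binary else 1e9
    theoretical = bytes_per_second / divisor
    return theoretical * (1 - overhead) * efficiency

# DDR5 dual-channel: 5600 MT/s, 64-bit, 2 channels, 3% overhead, 85% efficiency
print(round(effective_bandwidth(5600, 64, 2, 0.03, 0.85), 1))  # → 73.9
```

Running the same function over each profile reproduces the example table above.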
How to use this calculator
- Select Input mode based on your specification sheet.
- Pick a memory standard to prefill a reasonable factor.
- Enter rate/clock, plus bus width and channels.
- Adjust protocol overhead and efficiency for realism.
- Press Calculate, then download CSV or PDF if needed.
Interpreting MT/s and clock rates
MT/s measures transfers per second, not oscillator cycles. For DDR-style interfaces, the effective rate is usually two transfers per base clock, so a rating of 5600 MT/s corresponds to a base clock near 2800 MHz. In this calculator, enter MT/s directly, or enter the clock in MHz plus a transfers‑per‑clock factor when only the clock is listed.
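The clock-to-rate conversion above is a single multiplication. A minimal sketch, with an illustrative function name:

```python
def mhz_to_mts(clock_mhz, transfers_per_clock=2):
    """Convert a base clock in MHz to a transfer rate in MT/s.

    DDR-style interfaces move two transfers per base clock cycle,
    so transfers_per_clock defaults to 2.
    """
    return clock_mhz * transfers_per_clock

print(mhz_to_mts(2800))  # → 5600, a DDR5-5600 rating from a 2800 MHz clock
```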
Bus width and channel scaling
Width and channels scale bandwidth almost linearly. A 64‑bit channel carries 8 payload bytes per transfer; two channels carry 16 bytes. At 5600 MT/s, theoretical payload bandwidth is about 89.6 GB/s before stalls. DDR5 splits each DIMM into two independent 32‑bit subchannels, but for bandwidth planning they still sum to one 64‑bit channel. If you model ECC, remember extra check bits increase physical width, but payload bytes often remain 64 bits per channel for application throughput estimates.
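The width and channel scaling described above reduces to a few lines of arithmetic. A sketch using the DDR5-5600 dual-channel figures from this section:

```python
MT_S = 5600        # transfers per second, in millions
WIDTH_BITS = 64    # payload width per channel (ECC check bits excluded)
CHANNELS = 2

bytes_per_transfer = WIDTH_BITS // 8 * CHANNELS           # 16 payload bytes
theoretical_gb_s = MT_S * 1e6 * bytes_per_transfer / 1e9  # before stalls
print(theoretical_gb_s)  # → 89.6
```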
Overhead and efficiency assumptions
Peak bandwidth is rarely sustained because real traffic includes commands, refresh, turnarounds, and timing gaps. Overhead is commonly 2–8% depending on access size and read/write switching. Efficiency represents how well software and the controller keep the bus busy. Streaming workloads can reach 85–95%, while random access with frequent row misses may fall near 50–70%. Start with conservative values, then refine using profiler or counter data.
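To see how much these assumptions matter, a quick sweep over the utilization range quoted above (89.6 GB/s peak and 3% overhead are the DDR5-5600 dual-channel figures from this page; the efficiency values are illustrative):

```python
peak_gb_s = 89.6   # DDR5-5600, dual channel, theoretical
overhead = 0.03    # protocol overhead fraction

# Streaming workloads sit near the top of this range;
# random access with frequent row misses sits near the bottom.
for efficiency in (0.95, 0.85, 0.70, 0.50):
    effective = peak_gb_s * (1 - overhead) * efficiency
    print(f"{efficiency:.0%} utilization -> {effective:.1f} GB/s")
```

The spread between the best and worst case is roughly a factor of two, which is why recording the assumed efficiency beside each estimate matters.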
Comparing DDR, GDDR, and HBM links
Memory families trade pins, frequency, and width. GDDR usually drives higher per‑pin transfer rates with narrower channels, while HBM relies on very wide buses per stack. A 1024‑bit interface at 6400 MT/s is about 819.2 GB/s theoretical, which suits GPUs and accelerators. Normalize results to one unit: GB/s uses decimal bytes (10⁹), while GiB/s uses binary bytes (1024³) and reports a smaller number for the same link.
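The decimal/binary distinction is a pure unit change. A sketch using the 1024-bit HBM3 figure from this section:

```python
bytes_per_second = 819.2e9  # 1024-bit link at 6400 MT/s, theoretical

gb_s = bytes_per_second / 1e9        # decimal gigabytes per second
gib_s = bytes_per_second / 1024**3   # binary gibibytes, a smaller number
print(f"{gb_s:.1f} GB/s = {gib_s:.1f} GiB/s")  # → 819.2 GB/s = 762.9 GiB/s
```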
Using results for design decisions
Use theoretical bandwidth to sanity‑check specifications, then use effective bandwidth for sizing caches, prefetch depth, and interconnect capacity. Record the overhead and efficiency assumptions beside each comparison so results stay reproducible across releases. If effective bandwidth is far below peak, investigate burst length, queue depth, bank conflicts, page policy, and NUMA placement before committing to hardware changes.
FAQs
1) What is the difference between theoretical and effective bandwidth?
Theoretical bandwidth is the peak bus rate from MT/s, width, and channels. Effective bandwidth also applies protocol overhead and utilization, reflecting typical workload behavior and controller limits.
2) Should I report GB/s or GiB/s?
Use GB/s for decimal reporting aligned with most datasheets. Use GiB/s when your tooling and memory sizing use binary units. The calculator supports both so your comparisons stay consistent.
3) How do I pick a protocol overhead percentage?
If you lack measurements, start with 3–5% for DDR-style systems. Increase it for frequent turnarounds, refresh-heavy periods, or small transfers. Decrease it for long streaming bursts and well-scheduled controllers.
4) Why doesn’t adding channels always scale linearly?
Channels increase peak throughput, but software may not generate enough parallel requests. Cache hits, interconnect limits, and bank conflicts can cap realized gains. Use the efficiency input to reflect these bottlenecks realistically.
5) How should I model ECC and DDR5 subchannels?
For payload bandwidth, keep width at 64 bits per channel even if ECC adds extra check bits. Treat DDR5’s two 32-bit subchannels as one 64-bit channel unless you are modeling controller-level scheduling detail.
6) Can I use this calculator for GPUs or custom memory links?
Yes. Enter the effective MT/s, total bus width, and channel count that match your link definition. For non-DDR signaling, set transfers-per-clock to the correct pumping factor when deriving MT/s from a clock.