Network Throughput Calculator

Model real throughput with protocol overhead and errors. Switch units, test scenarios, and export reports. Make smarter capacity decisions for every critical link today.

Calculator inputs
Choose a mode, tune overheads, then compute throughput and goodput.
Packet mode models overhead; link mode ignores frame details.
Interface line rate before utilization and loss.
Average link usage (leave headroom for bursts).
Applied as a simple multiplier to payload throughput.
Use only if traffic is symmetric both ways.
Controls formatting for bps and percentages.

Packet and protocol settings
Used in Packet Efficiency mode
Defaults apply typical header sizes automatically.
IPv6 uses a larger base header.
L3 packet max size (IP header + L4 + payload).
Application bytes per packet (excludes headers).
Typical: 14 bytes (without VLAN tag).
Use 4 for a single VLAN tag.
Auto for IPv4/IPv6 unless custom.
Auto for TCP/UDP unless custom.
Typical: 4 bytes.
Typical Ethernet: 8 bytes time-equivalent.
Typical Ethernet: 12 bytes time-equivalent.
TCP window and latency checks
Used for BDP and window-limited throughput.
If too small, throughput may be RTT-limited.
Measured transfer inputs
Used in Measured Transfer mode
Transferred payload or file size.
From first byte sent to last byte received.

Tip: Try the examples below, then adjust MTU and overhead values.

Example data table

Scenario            Bandwidth  Utilization  Protocol    MTU   Payload  Loss    Expected goodput (approx.)
Data center link    10 Gbps    70%          TCP / IPv4  1500  1460     0.01%   ~6.8 Gbps
WAN file transfer   200 Mbps   85%          TCP / IPv6  1500  1440     0.20%   ~163 Mbps
VoIP stream         50 Mbps    30%          UDP / IPv4  1500  160      0.00%   ~12 Mbps
Values are illustrative and depend on overhead, burstiness, and real loss behavior.

Formula used

  • Measured throughput: Throughput = (DataBytes × 8) ÷ TimeSeconds
  • On-wire frame size (bytes): Frame = Preamble + IFG + L2 + VLAN + IP + L4 + Payload + FCS
  • Protocol efficiency: Efficiency = Payload ÷ Frame
  • Packet-aware goodput: Goodput = Bandwidth × Utilization × Efficiency × (1 − Loss)
  • Packets per second: PPS = (Bandwidth × Utilization) ÷ (FrameBytes × 8)
  • Bandwidth-delay product: BDPBytes = (Bandwidth × RTT) ÷ 8
  • Window-limited throughput: TCPThroughput = (WindowBytes × 8) ÷ RTT
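The formulas above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (typical Ethernet overheads, TCP/IPv4 header defaults, loss applied as a simple multiplier); the function and constant names are illustrative, not the calculator's actual code.

```python
# Typical Ethernet per-frame overheads, in bytes (or byte-times for
# preamble and interframe gap, which occupy wire time but carry no data).
PREAMBLE, IFG, L2_HDR, FCS = 8, 12, 14, 4

def frame_bytes(payload, ip_hdr=20, l4_hdr=20, vlan=0):
    """On-wire frame size in byte-times, including preamble and IFG."""
    return PREAMBLE + IFG + L2_HDR + vlan + ip_hdr + l4_hdr + payload + FCS

def goodput_bps(bandwidth_bps, utilization, payload, loss):
    """Packet-aware goodput: bandwidth x utilization x efficiency x (1 - loss)."""
    efficiency = payload / frame_bytes(payload)
    return bandwidth_bps * utilization * efficiency * (1 - loss)

def pps(bandwidth_bps, utilization, payload):
    """Packets per second at the given used rate."""
    return bandwidth_bps * utilization / (frame_bytes(payload) * 8)

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_s / 8

# Example: 10 Gbps at 70% utilization, TCP/IPv4, 1460-byte payload, 0.01% loss.
print(goodput_bps(10e9, 0.70, 1460, 0.0001) / 1e9)   # ~6.64 Gbps
print(pps(10e9, 0.70, 1460))                         # ~569,000 packets/s
print(bdp_bytes(1e9, 0.040))                         # 5,000,000 bytes
```

Note that goodput computed this way lands slightly below the rounded table values, since everything hinges on exactly which overhead bytes are counted.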

How to use this calculator

  1. Select a calculation mode that matches your situation.
  2. Enter bandwidth, utilization, and a realistic loss estimate.
  3. For packet-aware modeling, set protocol, MTU, and payload.
  4. Adjust overhead values if your network uses VLAN or IPv6.
  5. Run the calculation and review efficiency and packet rate.
  6. Use RTT and window to spot latency-limited throughput risks.
  7. Export CSV or PDF to share capacity and sizing results.

Throughput vs Goodput

Throughput is the raw bit rate of a link, while goodput is payload that arrives intact. On a 1 Gbps link at 80% utilization, the used line rate is 800 Mbps. If efficiency is 94% and loss is 0.1%, goodput is 800 × 0.94 × (1−0.001) ≈ 751 Mbps. This calculator reports both clearly to separate “wire capacity” from “user payload”.
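The arithmetic in this example can be checked line by line (values taken from the paragraph above):

```python
# 1 Gbps link, 80% utilization, 94% protocol efficiency, 0.1% loss.
used_mbps = 1000 * 0.80                       # 800 Mbps used line rate
goodput_mbps = used_mbps * 0.94 * (1 - 0.001)
print(round(goodput_mbps))                    # 751
```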

Overhead and Frame Structure

Every packet includes time and bytes that are not payload. The Ethernet preamble and interframe gap consume 20 byte-times per frame, then the L2 header, optional VLAN tag, IP header, transport header, and FCS add more. With a 1500-byte MTU and a 1460-byte TCP payload on IPv4, a typical frame is 1518 bytes on the wire, or about 1538 byte-times once the preamble and gap are counted, so efficiency stays above 94%. With small payloads, overhead dominates and the packets-per-second rate rises.
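Counting the bytes explicitly makes the frame structure concrete (typical Ethernet values, no VLAN tag assumed):

```python
# Byte accounting for a full-size TCP/IPv4 frame.
preamble, ifg = 8, 12            # wire time only, not stored in the frame
l2_header, fcs = 14, 4
ip_header, tcp_header, payload = 20, 20, 1460

frame_on_wire = l2_header + ip_header + tcp_header + payload + fcs
frame_byte_times = frame_on_wire + preamble + ifg
efficiency = payload / frame_byte_times

print(frame_on_wire)             # 1518
print(frame_byte_times)          # 1538
print(round(efficiency, 4))      # 0.9493
```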

Impact of MTU and Payload

Bigger payloads usually improve efficiency because fixed headers are amortized. At a 200 Mbps used rate, a 1460-byte payload might yield ~190 Mbps before loss, while a 160-byte payload drops efficiency below 70%, cutting goodput to roughly 134 Mbps. Jumbo frames (e.g., a 9000-byte MTU) can reduce PPS requirements and CPU interrupts, but only help when every hop supports them. Use the MTU and payload fields to evaluate fragmentation risk.
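The amortization effect is easy to demonstrate. This sketch assumes TCP/IPv4 headers and the typical Ethernet overheads from the frame model; exact numbers shift depending on which overhead bytes you count.

```python
# Efficiency vs payload size at a 200 Mbps used rate.
OVERHEAD = 8 + 12 + 14 + 20 + 20 + 4   # preamble + IFG + L2 + IP + TCP + FCS = 78

def efficiency(payload_bytes):
    """Payload share of the total on-wire byte-time per frame."""
    return payload_bytes / (payload_bytes + OVERHEAD)

used_mbps = 200
for payload in (1460, 160):
    eff = efficiency(payload)
    print(f"{payload} B payload: efficiency {eff:.1%}, goodput ~{used_mbps * eff:.0f} Mbps")
```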

Latency, BDP, and Windowing

Across long paths, throughput can be limited by round-trip time and TCP window size. The bandwidth-delay product is BDP = bandwidth × RTT. At 1 Gbps and 40 ms RTT, BDP is ~5 MB, meaning a window smaller than 5 MB can prevent the sender from filling the pipe. The tool estimates BDP and compares a window against it, highlighting when latency, not bandwidth, is the bottleneck.
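The BDP check from the paragraph above, with a deliberately undersized window to show the effect (the 64 KiB window is an illustrative assumption):

```python
# BDP and window-limited throughput for the 1 Gbps / 40 ms path above.
bandwidth_bps = 1e9    # 1 Gbps
rtt_s = 0.040          # 40 ms RTT

bdp_bytes = bandwidth_bps * rtt_s / 8        # data in flight to fill the pipe
print(bdp_bytes / 1e6)                       # 5.0 (MB)

# A window smaller than the BDP caps throughput regardless of line rate.
window_bytes = 64 * 1024
window_limit_bps = window_bytes * 8 / rtt_s
print(window_limit_bps / 1e6)                # ~13.1 Mbps on a 1 Gbps link
```

With a 64 KiB window the sender can use barely 1.3% of the link, which is exactly the latency-limited case the tool highlights.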

Interpreting Results for Capacity Planning

Use the primary metric for your chosen mode, then sanity-check packet rate and efficiency. High PPS with tiny payloads can overload routers, firewalls, and switches even when Mbps looks modest. Compare aggregate full-duplex only when traffic is truly symmetric. For planning, keep utilization below ~85% to absorb bursts. Export CSV or PDF to document assumptions, then retest with worst-case loss and RTT.

FAQs

What is the difference between throughput and goodput?

Throughput reflects bits carried on the link, including overhead. Goodput measures delivered payload after headers and loss. Use goodput when sizing applications, and throughput when validating interface utilization and packet-rate limits.

Which mode should I select for my scenario?

Use Packet Efficiency to model headers and MTU. Use Link Budget for quick capacity estimates from bandwidth, utilization, and loss. Use Measured Transfer when you already know bytes moved and elapsed time.

How should I choose utilization and loss inputs?

Start with monitoring data: average interface utilization and observed loss or retransmissions. For planning, test a conservative utilization like 70–85% and include a small loss factor to account for errors and congestion.

What payload value should I enter for TCP or UDP?

Use your typical application payload per packet. For TCP over a 1500 MTU, payload around 1460 bytes is common with IPv4, slightly smaller with IPv6 or options. Small payloads sharply reduce efficiency.

What does BDP mean and why does TCP window matter?

BDP is the amount of data “in flight” needed to keep the path busy: bandwidth × RTT. If the TCP window is smaller than BDP, the sender stalls waiting for acknowledgments, limiting throughput even on fast links.

Why is my measured throughput lower than the estimate?

Real traffic is bursty and affected by congestion control, queueing, shaping, encryption overhead, and CPU limits. Loss also hurts TCP more than a simple multiplier. Compare PPS and window limits, then validate with real captures and telemetry.

Related Calculators

  • Latency Measurement Tool
  • Bandwidth Requirement Calculator
  • Cache Hit Ratio
  • Clock Cycle Time
  • Thermal Design Power
  • Energy Efficiency Calculator
  • Workload Sizing Calculator
  • Concurrency Level Calculator
  • Thread Count Calculator
  • Queue Wait Time

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.