Forecast peak load before buying or upgrading firewalls. Compare scenarios with policies, VPN, and inspection. Export results to share with security and network teams.
| Scenario | Rated (Gbps) | Peak (Gbps) | Features | Growth | Target util | Suggested units |
|---|---|---|---|---|---|---|
| Branch internet edge | 5 | 2.2 | IPS, URL filtering | 20% | 65% | 2 (active-passive) |
| Campus core egress | 40 | 28 | IPS, app control, high logging | 30% | 60% | 3 + spare |
| Data center inspection | 100 | 70 | SSL inspection, IPS, sandboxing | 25% | 55% | 4 + standby |
Capacity planning starts with a defensible peak baseline. Use the highest 5–15 minute interval from telemetry, then compare it with daily and weekly maxima to identify recurring surges. Many enterprises see 2–4× differences between average and peak egress. If your peak is 6 Gbps on a 10 Gbps platform, a 25% growth buffer raises demand to 7.5 Gbps, which can materially change the recommended unit count.
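The growth adjustment described above can be sketched in a couple of lines. This is a minimal illustration; the function name is ours, not the calculator's:

```python
def future_peak_gbps(peak_gbps: float, growth_pct: float) -> float:
    """Apply a growth buffer to the measured peak to estimate future demand."""
    return peak_gbps * (1 + growth_pct / 100)

# Worked example from the text: 6 Gbps peak with a 25% growth buffer.
print(future_peak_gbps(6.0, 25))  # 7.5
```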
Deep inspection features reduce effective capacity because they add CPU work per packet and per flow. The planner treats enabled features as a conservative throughput reduction. For example, IPS plus anti-malware can remove roughly 30% of rated throughput, while TLS decryption and inspection can remove 35% or more depending on cipher mix and certificate handling. Use this section to build “minimal,” “standard,” and “maximum inspection” scenarios.
Rulebase scale and translation work affect lookup and session-processing cost. The calculator adds mild overhead once the policy exceeds 500 rules, up to a capped ceiling, reflecting increasingly complex matching paths. NAT share is modeled as a small incremental cost that grows with translated traffic. Logging level adds overhead to represent heavier event generation and storage pressure during incident spikes.
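A capped rulebase overhead like the one described might be modeled as follows. The constants (1% per additional 500 rules, 10% ceiling) are illustrative assumptions, not the calculator's actual coefficients:

```python
def policy_overhead(rule_count: int,
                    per_block: float = 0.01,  # extra overhead per 500 rules past the first 500
                    cap: float = 0.10) -> float:
    """Mild overhead once the rulebase exceeds 500 rules, up to a capped ceiling."""
    if rule_count <= 500:
        return 0.0
    return min((rule_count - 500) / 500 * per_block, cap)

# NAT share and logging level could be folded in the same way, as small
# additive terms combined before reducing effective capacity.
print(policy_overhead(1000))    # small overhead past 500 rules
print(policy_overhead(100000))  # capped at the ceiling
```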
Growth is applied to peak traffic to estimate future demand, aligning with a 12–24 month planning horizon. Target utilization represents the steady-state ceiling you want per unit; many teams choose 55–70% to absorb bursts, failover conditions, and software updates. Lower targets improve resilience but may increase appliance count. Validate the selected target against your recovery objectives and link capacity.
Utilization is computed against effective capacity after overhead. “Moderate” risk generally indicates sustainable operation with reasonable buffer, while “High” suggests reduced tolerance for bursts, rekey events, and traffic mix shifts. Recommended units keep future peak below the target utilization threshold, then add redundancy guidance for active-passive or a spare for active-active designs. Use headroom (Gbps) as your quick indicator for how much unexpected demand you can safely absorb.
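Putting the pieces together, unit count and headroom can be sketched like this. The formulation is our own; the table's "Campus core egress" row serves as a plausibility check under the stated assumptions (28 Gbps peak, 30% growth, roughly 28 Gbps effective per unit after overhead, 60% target utilization):

```python
import math

def suggested_units(future_peak: float, effective_per_unit: float,
                    target_util: float) -> int:
    """Smallest unit count that keeps future peak below the target utilization."""
    return math.ceil(future_peak / (effective_per_unit * target_util))

def headroom_gbps(units: int, effective_per_unit: float,
                  target_util: float, future_peak: float) -> float:
    """Spare capacity before the target utilization ceiling is reached."""
    return units * effective_per_unit * target_util - future_peak

# Campus core egress plausibility check.
peak = 28 * 1.30  # 36.4 Gbps future demand
units = suggested_units(peak, 28, 0.60)
print(units)  # 3, before adding the spare recommended for redundancy
```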
Use the vendor’s published throughput closest to your enabled feature set. If only a baseline figure is available, enter it and select features to approximate real-world inspection overhead.
Decryption adds per-session key exchange work and per-packet processing. Cipher mix, certificate validation, and hardware acceleration availability can significantly change effective throughput under load.
Pick a steady-state ceiling that preserves burst tolerance and failover headroom. Many environments aim for 55–70%, then validate with incident simulations and maintenance windows.
Connections per second (CPS) and concurrent session counts are displayed as planning indicators. Throughput is the primary sizing axis in this model, but high CPS and session counts can expose memory, table, and CPU limits during real deployments.
The risk rating is a simple composite signal derived from utilization and modeled overhead. Treat it as directional guidance, not a certification. Confirm final sizing with vendor guidance and testing.
You can model a single cluster node or a virtual appliance: enter one node's rated throughput and set the redundancy mode appropriately. For virtual appliances, ensure your vCPU, crypto offload, and hypervisor constraints match the rated assumptions.
Important note: the calculators on this site are for educational purposes only, and we do not guarantee the accuracy of results. Consult additional sources before making sizing decisions.