Tune arbitration settings before tape-out. See utilization, latency, and fairness at a glance, then export results, validate assumptions, and iterate with confidence.
| Scenario | Masters | Scheme | Bus (MHz / bits) | Burst (bytes) | Overhead+Turn (cycles) | Utilization (%) | Avg grant (ns) |
|---|---|---|---|---|---|---|---|
| Balanced traffic | 4 | Round-robin | 200 / 64 | 64 | 3 | 55 | ~30 |
| Priority favored | 6 | Fixed priority | 150 / 32 | 128 | 4 | 78 | ~120 |
| Noisy contenders | 8 | Lottery | 100 / 128 | 256 | 2 | 62 | ~95 |
Bus arbitration converts many simultaneous requests into one safe transfer stream. Designers target three outcomes: predictable latency, high usable bandwidth, and fair access. The calculator estimates grant delay from the number of masters, arbitration overhead, and turnaround, then relates that delay to the clock period so teams can compare architectures on a common time scale. It also reports each master's bandwidth share to aid sizing.
Service time is dominated by payload cycles when bursts are large, while control costs dominate for small bursts. Data cycles equal ceil(burst bytes ÷ bytes per cycle); total service cycles add arbitration overhead and turnaround. Increasing bus width reduces data cycles, while increasing frequency reduces cycle time, improving both throughput and latency. If your protocol inserts wait states, fold them into overhead so the estimate tracks measured transactions.
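A minimal sketch of this arithmetic (the function and parameter names are illustrative, not the calculator's actual API):

```python
import math

def service_cycles(burst_bytes: int, bus_bits: int, overhead: int, turnaround: int) -> int:
    """Total bus cycles to complete one transaction.

    Data cycles = ceil(burst bytes / bytes moved per cycle); overhead and
    turnaround are fixed control costs (fold protocol wait states into overhead).
    """
    bytes_per_cycle = bus_bits // 8
    data_cycles = math.ceil(burst_bytes / bytes_per_cycle)
    return data_cycles + overhead + turnaround

# A 64-byte burst on a 64-bit bus moves 8 bytes per cycle, so 8 data cycles;
# with 2 overhead + 1 turnaround cycles the transaction takes 11 cycles total.
print(service_cycles(64, 64, 2, 1))  # 11
```

Note how doubling the bus width halves only the data-cycle term, which is why wide buses help large bursts far more than small ones.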
Offered load is approximated as masters × request rate × burst size. Dividing by bus capacity yields utilization. When utilization approaches 100%, small changes in request rate can create large queueing delays, even if average throughput remains close to capacity. The utilization panel helps spot risky operating points early. For healthy headroom, many teams target sustained utilization below about 70% and reserve the rest for bursts and interrupts.
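The offered-load approximation can be written directly (names here are illustrative assumptions, not the tool's API):

```python
def utilization(masters: int, req_per_s: float, burst_bytes: int,
                bus_mhz: float, bus_bits: int) -> float:
    """Offered load (bytes/s) divided by peak bus capacity (bytes/s)."""
    offered = masters * req_per_s * burst_bytes        # bytes per second
    capacity = bus_mhz * 1e6 * (bus_bits / 8)          # peak bytes per second
    return offered / capacity

# 4 masters, each issuing 2,000,000 requests/s of 64-byte bursts,
# on a 200 MHz / 64-bit bus (1.6 GB/s peak):
print(utilization(4, 2_000_000, 64, 200, 64))  # 0.32
```

Values above roughly 0.7 would trip the headroom guideline mentioned above; values near 1.0 signal queueing-delay risk even if throughput still looks fine.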
Round-robin and lottery policies tend to equalize long-run bandwidth shares. Fixed priority improves response for urgent masters but can starve low-priority agents during heavy contention. The modeled Jain fairness index summarizes equality, while the starvation risk flag highlights configurations where skew and high load combine to create repeated deferrals. If you must use priority, consider adding aging or a cap on consecutive grants to keep tail latency bounded.
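Jain's index is standard; the starvation flag below is a plausible heuristic sketch, not necessarily the thresholds this calculator uses:

```python
def jain_index(shares: list[float]) -> float:
    """Jain's fairness index: 1.0 for perfectly equal shares, 1/n at worst."""
    n = len(shares)
    return sum(shares) ** 2 / (n * sum(s * s for s in shares))

def starvation_risk(shares: list[float], util: float) -> bool:
    """Assumed heuristic: flag when share skew and high load coincide."""
    return jain_index(shares) < 0.8 and util > 0.7

print(jain_index([1, 1, 1, 1]))      # 1.0 (perfectly fair)
print(jain_index([1, 0, 0, 0]))      # 0.25 (one master hogs the bus)
print(starvation_risk([4, 3, 1, 1], 0.85))
```

A low index by itself is not fatal at light load; it becomes a problem when utilization leaves no slack for the deferred masters to catch up.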
Start with measured overhead and turnaround from your bus protocol, then sweep burst sizes drawn from trace statistics. Compare average and worst-case grant latency across schemes, and verify that throughput at the expected load leaves adequate margin. Export CSV for spreadsheet plots and PDF for review packets, keeping assumptions visible for every stakeholder. Finally, rerun with peak request rates from stress tests to confirm that low-priority paths still meet forward-progress and watchdog limits.
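A burst-size sweep like the one described can be scripted in a few lines. The worst-case wait model here (every other master completes one transaction first, as under round-robin) is an assumption for illustration; the calculator's internal model may differ:

```python
import math

def service_cycles(burst_bytes: int, bus_bits: int, overhead_and_turn: int) -> int:
    """Data cycles plus fixed arbitration overhead and turnaround."""
    return math.ceil(burst_bytes / (bus_bits // 8)) + overhead_and_turn

def worst_grant_ns(masters: int, burst_bytes: int, bus_mhz: float,
                   bus_bits: int, overhead_and_turn: int) -> float:
    """Assumed worst case: wait for masters-1 full transactions ahead of you."""
    period_ns = 1e3 / bus_mhz
    return (masters - 1) * service_cycles(burst_bytes, bus_bits, overhead_and_turn) * period_ns

# Sweep burst sizes for 4 masters on a 200 MHz / 64-bit bus, 3 fixed cycles:
for burst in (32, 64, 128, 256):
    print(burst, worst_grant_ns(4, burst, 200, 64, 3))  # 64 bytes -> 165.0 ns
```

Repeating the sweep with peak request rates from stress traces shows whether the low-priority worst case stays inside watchdog limits.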
**What is grant latency?**
Grant latency is the time from a request becoming eligible until the master receives ownership. It is modeled in cycles and converted to nanoseconds using the bus clock period.
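The cycles-to-nanoseconds conversion is just the clock period (names are illustrative):

```python
def cycles_to_ns(cycles: int, bus_mhz: float) -> float:
    """Convert a latency in bus cycles to nanoseconds; period = 1000 / f_MHz."""
    return cycles * 1000.0 / bus_mhz

# 6 cycles at 200 MHz (5 ns period):
print(cycles_to_ns(6, 200))  # 30.0 ns
```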
**Why do overhead and turnaround cycles matter?**
They consume cycles without moving payload data. When bursts are small, these fixed costs can dominate service time, reducing effective bandwidth and increasing per-transaction delay.
**How accurate are the estimates?**
They are sizing approximations meant for early trade studies. Actual latency depends on traffic correlations, protocol rules, and buffering; validate final numbers with simulation or hardware measurements.
**When should I use fixed-priority arbitration?**
Use it when a specific master must meet tight deadlines and can preempt others. Add safeguards such as aging, quotas, or maximum consecutive grants to reduce starvation under load.
**What does a high Jain fairness index mean?**
It indicates bandwidth shares are nearly equal across masters over the measured window. Lower values suggest some masters receive consistently more service than others.
**How do I choose the request rate?**
Derive it from workload traces or bus monitors. Convert observed transactions per second into requests per millisecond per master, and include peak phases rather than only long-run averages.
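A sketch of that conversion, assuming the trace reports an aggregate transaction rate split evenly across masters (real traces should use per-master counts where available):

```python
def requests_per_ms(aggregate_txns_per_s: float, masters: int) -> float:
    """Convert an aggregate transactions/s figure into per-master requests/ms."""
    return aggregate_txns_per_s / masters / 1000.0

# 800,000 transactions/s across 4 masters:
print(requests_per_ms(800_000, 4))  # 200.0 requests/ms per master
```

Run the same conversion on the busiest trace window, not just the whole-run average, to capture peak phases.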
Important note: all calculators on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.