Intersection over Union Calculator

Compute IoU for bounding-box pairs with confidence. Validate many pairs at once and summarize metrics quickly. Download result tables, share insights, and improve your training loops.

Calculator

Bounding Box Inputs

Enter one or many GT/Prediction pairs. Add rows for batch evaluation.


If using (x, y, w, h), the (x, y) coordinate is the box's top-left corner.
Common thresholds: 0.50 for general detection, 0.75 for strict overlap.
Pairs table
Each row takes ground-truth and prediction boxes as (x1, y1, x2, y2), with a Remove control per row.
Coordinate note: for corner format, x2/y2 are the bottom-right corner. Any reversed corners are automatically swapped.
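The two conversions described above can be sketched in Python; the names `to_corners` and `normalize_corners` are illustrative assumptions, not the calculator's actual code:

```python
def to_corners(x, y, w, h):
    """Convert top-left (x, y, w, h) to corner format (x1, y1, x2, y2)."""
    return (x, y, x + w, y + h)

def normalize_corners(x1, y1, x2, y2):
    """Swap reversed corners so (x1, y1) is top-left and (x2, y2) is bottom-right."""
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```

For example, a (10, 10, 50, 50) width/height box becomes the corner box (10, 10, 60, 60), and reversed corners such as (60, 60, 10, 10) normalize to the same rectangle.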
Example data table

Sample pairs and expected interpretation

Case GT (x1,y1,x2,y2) Prediction (x1,y1,x2,y2) Typical outcome
Moderate overlap [10,10,60,60] [20,20,70,70] IoU ≈ 0.47; just under a 0.50 threshold.
High overlap [15,30,55,80] [10,25,50,75] IoU ≈ 0.65; a match at 0.50 but not at 0.75.
No overlap [0,0,40,40] [45,45,90,90] IoU equals 0; always a non-match.
Formula used

Intersection over Union

IoU measures overlap between a predicted box and a ground-truth box. It is defined as the area of intersection divided by the area of union.

IoU = Area( GT ∩ Pred ) / Area( GT ∪ Pred )
Union = Area(GT) + Area(Pred) − Area(Intersection)

This calculator uses axis-aligned rectangles. For rotated boxes, IoU requires polygon intersection.
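The formula above can be sketched for axis-aligned boxes in Python; the function name `iou` and the corner-tuple layout are assumptions for illustration:

```python
def iou(gt, pred):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corner tuples."""
    # Intersection rectangle: overlap of the two coordinate ranges.
    ix1, iy1 = max(gt[0], pred[0]), max(gt[1], pred[1])
    ix2, iy2 = min(gt[2], pred[2]), min(gt[3], pred[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when boxes do not overlap
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    area_pred = (pred[2] - pred[0]) * (pred[3] - pred[1])
    union = area_gt + area_pred - inter
    return inter / union if union > 0 else 0.0
```

On the example table's moderate-overlap pair, this returns 1600 / 3400 ≈ 0.47.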

How to use

Step-by-step

  1. Select your coordinate format: corner or width/height.
  2. Enter GT and prediction values for each pair row.
  3. Optionally set image width and height, then enable clamping.
  4. Set a threshold to classify each pair as a match.
  5. Click Calculate IoU to view the summary above.
  6. Download CSV for reporting, or export PDF for sharing.
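The batch flow in steps 2–5 might look like this in Python; `summarize` and its result keys are hypothetical names, and the IoU helper is inlined so the sketch is self-contained:

```python
import statistics

def iou(gt, pred):
    # Axis-aligned IoU of two (x1, y1, x2, y2) corner boxes.
    ix1, iy1 = max(gt[0], pred[0]), max(gt[1], pred[1])
    ix2, iy2 = min(gt[2], pred[2]), min(gt[3], pred[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((gt[2] - gt[0]) * (gt[3] - gt[1])
             + (pred[2] - pred[0]) * (pred[3] - pred[1]) - inter)
    return inter / union if union > 0 else 0.0

def summarize(pairs, threshold=0.50):
    """Mean/median IoU and match rate over (gt, pred) box pairs."""
    scores = [iou(gt, pred) for gt, pred in pairs]
    return {
        "mean": statistics.mean(scores),
        "median": statistics.median(scores),
        "match_rate": sum(s >= threshold for s in scores) / len(scores),
    }
```

Running this on the three example-table pairs gives a match rate of 1/3 at the default 0.50 threshold.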

Why IoU matters in detection scoring

Intersection over Union quantifies how closely a predicted box overlaps a labeled box. In object detection pipelines, IoU affects true positives, false positives, and the precision–recall curve. With a 0.50 threshold, many benchmarks accept moderate overlap, while 0.75 emphasizes tighter localization. Logging IoU per image helps you spot classes where localization lags even when confidence is high.

Thresholds and practical operating points

Common thresholds include 0.50 for general reporting and 0.75 for strict evaluation. Some workflows sweep 0.50:0.05:0.95 to approximate mAP-style behavior. If your model predicts small objects, a fixed 0.75 may be overly punitive; pairing a lower threshold with size-stratified analysis can reveal whether errors come from scale, crowding, or annotation noise.
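The 0.50:0.05:0.95 sweep mentioned above can be sketched as follows, assuming you already have a list of per-pair IoU scores (the function name `sweep_match_rates` is illustrative):

```python
def sweep_match_rates(scores, start=0.50, stop=0.95, step=0.05):
    """Match rate at each threshold in a start:step:stop sweep (inclusive)."""
    n = int(round((stop - start) / step)) + 1
    thresholds = [round(start + i * step, 2) for i in range(n)]
    return {t: sum(s >= t for s in scores) / len(scores) for t in thresholds}
```

For scores [0.47, 0.65, 0.0], the match rate is 1/3 at 0.50 and drops to 0 once the threshold passes 0.65.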

Batch review to reduce annotation drift

Evaluating multiple pairs at once is useful for label audits. If your median IoU drops below 0.60 for a stable model, review labeling guidelines and edge cases. Consistent low overlap on one class often indicates inconsistent box tightness. Exported CSV records IoU, intersection, union, and areas, so reviewers can prioritize problematic frames and quantify improvements after relabeling.

Clamping and coordinate sanity checks

Real data can include boxes that extend beyond image boundaries or have swapped corners. Clamping within image width and height prevents negative or inflated areas and keeps metrics comparable. This calculator auto-corrects reversed corners and supports corner or width/height inputs. Use clamping when your training data includes augmentations that may shift boxes outside the visible canvas.
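Clamping to the canvas is a simple clip of each coordinate; a minimal sketch (the name `clamp_box` is an assumption):

```python
def clamp_box(x1, y1, x2, y2, img_w, img_h):
    """Clip corner coordinates to the image canvas [0, img_w] x [0, img_h]."""
    return (
        min(max(x1, 0), img_w),
        min(max(y1, 0), img_h),
        min(max(x2, 0), img_w),
        min(max(y2, 0), img_h),
    )
```

A box like (-10, 5, 120, 90) on a 100x80 image becomes (0, 5, 100, 80), so its area reflects only the visible region.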

Interpreting the IoU distribution plot

The Plotly chart shows IoU per pair as bars, with a dashed threshold line. A bimodal shape—many near 0.0 and many above threshold—often signals missed detections plus good matches. A single broad hump around 0.40–0.55 suggests systematic misalignment, such as anchor mismatch or resizing artifacts. Aim to raise the entire distribution, not only the maximum, across experiments and datasets.
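The page renders this chart with Plotly; a dependency-free sketch of the underlying binning, useful for spotting the bimodal shape described above (the function name `iou_histogram` is illustrative):

```python
def iou_histogram(scores, bin_width=0.1):
    """Count IoU scores per bin; a spike at 0.0 plus a cluster above the
    threshold suggests missed detections mixed with good matches."""
    top_bin = int(1.0 / bin_width) - 1
    bins = {}
    for s in scores:
        b = min(int(s / bin_width), top_bin)  # fold IoU == 1.0 into the top bin
        bins[b] = bins.get(b, 0) + 1
    return {round(b * bin_width, 2): n for b, n in sorted(bins.items())}
```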

Reporting-ready outputs for teams

Use the PDF export for quick sharing in reviews and the CSV export for deeper analysis in spreadsheets or notebooks. Track mean and median IoU weekly, and compare the match rate at a fixed threshold to detect regressions. When you change image resolution, augmentation, or label policy, keep the threshold constant so trends remain interpretable across versions and datasets.

FAQs

1) What IoU value is considered “good”?

It depends on the task. Many detection reports use 0.50 as acceptable overlap, while fine localization work targets 0.75 or higher. Review the full distribution, not only the mean.

2) Why can IoU be zero even with nearby boxes?

IoU is based on overlap area. If rectangles do not intersect, the intersection area is zero, so IoU becomes zero, even if the boxes are close in position.

3) Does IoU handle rotated bounding boxes?

This calculator uses axis-aligned rectangles. Rotated boxes require polygon intersection and union, which needs different geometry handling than simple width–height overlap.

4) What is the impact of clamping to image bounds?

Clamping prevents boxes from extending outside the image, which can distort areas and comparisons. It is useful when augmentations or label tools produce coordinates beyond the canvas.

5) Should I use (x,y,w,h) or (x1,y1,x2,y2)?

Use whichever matches your dataset. The corner format is common for evaluation code, while width/height is common in some training pipelines. This tool supports both for faster validation.

6) How can I use CSV output for analysis?

Import the CSV into a spreadsheet or notebook, then group by class, size, or source. Track mean, median, and match rate over time to detect regressions after model changes.

Related Calculators

batch size calculator · stride calculator · image size calculator · image resolution calculator · pixel density calculator · image resize scale · anchor box size · feature map size · receptive field calculator · pooling output size

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.