AUC from Confusion Matrix Calculator

Convert confusion matrix counts into reliable model metrics, exports, and a clear ROC view instantly. Judge binary classifier quality with confidence and consistent interpretation.

Calculator

Enter confusion matrix counts below. The page keeps a single-column flow, while input fields adjust to three columns on large screens, two on medium screens, and one on mobile.

Reset

Example Data Table

Scenario TP FP TN FN Sensitivity Specificity Approx. AUC
Model A 92 14 136 18 83.6364% 90.6667% 0.8715
Model B 75 35 115 25 75.0000% 76.6667% 0.7583
Model C 48 10 182 20 70.5882% 94.7917% 0.8269
Model D 130 42 210 28 82.2785% 83.3333% 0.8281

Formula Used

Sensitivity / Recall (TPR): TP / (TP + FN)
Specificity (TNR): TN / (TN + FP)
False Positive Rate (FPR): FP / (FP + TN)
Balanced Accuracy: (TPR + TNR) / 2
Approximate AUC from one confusion matrix: (TPR + TNR) / 2
Equivalent trapezoid form: (1 + TPR - FPR) / 2
Accuracy: (TP + TN) / (TP + FP + TN + FN)
F1 Score: 2TP / (2TP + FP + FN)
MCC: (TP×TN - FP×FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN))
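The formulas above can be sketched in a few lines of Python (a minimal illustration for checking results by hand, not this site's actual implementation):

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Compute the metrics listed above from raw confusion matrix counts."""
    tpr = tp / (tp + fn)                        # Sensitivity / Recall
    tnr = tn / (tn + fp)                        # Specificity
    fpr = fp / (fp + tn)                        # False Positive Rate
    balanced_acc = (tpr + tnr) / 2              # Also the approximate AUC
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    mcc_denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_denom if mcc_denom else 0.0
    return {
        "sensitivity": tpr, "specificity": tnr, "fpr": fpr,
        "balanced_accuracy": balanced_acc, "approx_auc": balanced_acc,
        "accuracy": accuracy, "f1": f1, "mcc": mcc,
    }

# Model A from the example table: TP=92, FP=14, TN=136, FN=18
m = confusion_metrics(92, 14, 136, 18)
print(round(m["approx_auc"], 4))  # 0.8715
```

Running this on the other rows of the example table reproduces their sensitivity, specificity, and approximate AUC values.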

Important note: a true ROC AUC normally requires prediction scores or many thresholds. With only one confusion matrix, this calculator reports the common single-threshold approximation based on the ROC point defined by FPR and TPR.

How to Use This Calculator

  1. Enter the four confusion matrix counts: TP, FP, TN, and FN.
  2. Set the positive and negative class labels if you want custom names.
  3. Choose how many decimal places should appear in the output.
  4. Click Calculate AUC to show results below the header and above the form.
  5. Review the approximate AUC, sensitivity, specificity, F1 score, MCC, and related diagnostics.
  6. Inspect the Plotly ROC-style graph to see the operating point and approximation path.
  7. Use Download CSV or Download PDF to export the current analysis.
  8. Compare your result with the example table to benchmark model behavior quickly.
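A CSV export like the one in step 7 can be approximated with the standard library (a hypothetical sketch; the column names and layout of the site's actual export may differ):

```python
import csv

# Hypothetical rows mirroring the example table above; the real export
# may use different column names.
rows = [
    {"scenario": "Model A", "tp": 92, "fp": 14, "tn": 136, "fn": 18},
    {"scenario": "Model B", "tp": 75, "fp": 35, "tn": 115, "fn": 25},
]
with open("auc_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["scenario", "tp", "fp", "tn", "fn", "approx_auc"]
    )
    writer.writeheader()
    for r in rows:
        tpr = r["tp"] / (r["tp"] + r["fn"])   # sensitivity
        tnr = r["tn"] / (r["tn"] + r["fp"])   # specificity
        writer.writerow({**r, "approx_auc": round((tpr + tnr) / 2, 4)})
```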

FAQs

1) What does this calculator estimate?

It estimates a single-threshold AUC approximation from TP, FP, TN, and FN. It also reports sensitivity, specificity, balanced accuracy, F1 score, MCC, and several related classification metrics.

2) Is this the same as true ROC AUC?

No. True ROC AUC usually needs prediction probabilities or scores across many thresholds. A single confusion matrix only supports an approximation built from one operating point.

3) Why does the formula use balanced accuracy?

For one ROC point, the trapezoid through (0,0), (FPR,TPR), and (1,1) has area equal to (TPR + TNR) / 2. That is also balanced accuracy.
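The equivalence can be verified numerically: integrating the piecewise-linear curve through those three points gives the same number as the closed form (a small pure-Python check, not part of the calculator itself):

```python
def trapezoid_auc(points):
    """Area under a piecewise-linear curve given (x, y) points sorted by x."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (y0 + y1) / 2 * (x1 - x0)
    return area

# Model A's operating point: FPR = 14/150, TPR = 92/110
fpr, tpr = 14 / 150, 92 / 110
area = trapezoid_auc([(0.0, 0.0), (fpr, tpr), (1.0, 1.0)])

print(round(area, 4))                   # 0.8715
print(round((1 + tpr - fpr) / 2, 4))    # 0.8715 -- the trapezoid closed form
```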

4) When is this approximation useful?

It is useful for quick threshold reviews, dashboard summaries, and side-by-side model checks when only the confusion matrix is available.

5) Can I use precision and recall alone?

Not reliably. You need the original confusion matrix counts, or enough information to rebuild them, before computing this AUC approximation and the other metrics.
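If you do know the number of positives and the total sample size in addition to precision and recall, the counts can be rebuilt. A sketch of that reconstruction (the function name and rounding choice here are illustrative assumptions):

```python
def counts_from_precision_recall(precision, recall, n_pos, n_total):
    """Rebuild TP/FP/TN/FN from precision, recall, and the class sizes.
    Precision and recall alone are underdetermined -- n_pos and n_total
    are required to recover the counts."""
    tp = recall * n_pos
    fp = tp * (1 - precision) / precision
    fn = n_pos - tp
    tn = n_total - tp - fp - fn
    return round(tp), round(fp), round(tn), round(fn)

# Model A: precision = 92/106, recall = 92/110, 110 positives, 260 samples
print(counts_from_precision_recall(92 / 106, 92 / 110, 110, 260))
# (92, 14, 136, 18)
```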

6) Does this help with imbalanced datasets?

Yes. Metrics like balanced accuracy, sensitivity, specificity, and MCC often tell a better story than plain accuracy when the positive class is rare.

7) What approximate AUC values are usually considered good?

Many teams read 0.90+ as excellent, 0.80–0.89 as good, 0.70–0.79 as fair, and below 0.70 as weak. Context still matters.

8) Do class names change the calculation?

The numeric result stays the same if counts stay consistent. However, changing which class is considered positive changes how sensitivity and specificity are interpreted.

Related Calculators

Precision Recall AUC
Sensitivity Specificity AUC
Cross Validation AUC

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.