Calculator
Enter the confusion matrix counts below. The page keeps a single-column flow, while the input fields adjust to three columns on large screens, two on medium screens, and one on mobile.
Example Data Table
| Scenario | TP | FP | TN | FN | Sensitivity | Specificity | Approx. AUC |
|---|---|---|---|---|---|---|---|
| Model A | 92 | 14 | 136 | 18 | 83.6364% | 90.6667% | 0.8715 |
| Model B | 75 | 35 | 115 | 25 | 75.0000% | 76.6667% | 0.7583 |
| Model C | 48 | 10 | 182 | 20 | 70.5882% | 94.7917% | 0.8269 |
| Model D | 130 | 42 | 210 | 28 | 82.2785% | 83.3333% | 0.8281 |
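The table values can be reproduced with a few lines of arithmetic. Here is a minimal sketch; the helper name `auc_approx` is illustrative, not part of the calculator's actual code:

```python
def auc_approx(tp, fp, tn, fn):
    """Single-threshold AUC approximation: (TPR + TNR) / 2."""
    tpr = tp / (tp + fn)   # sensitivity
    tnr = tn / (tn + fp)   # specificity
    return (tpr + tnr) / 2

# Model A from the table above
print(round(auc_approx(92, 14, 136, 18), 4))  # 0.8715
```

Running the same call with each row's counts reproduces the Approx. AUC column.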
Formula Used
- Sensitivity (TPR) = TP / (TP + FN)
- Specificity (TNR) = TN / (TN + FP)
- False positive rate (FPR) = FP / (FP + TN)
- Balanced accuracy = (TPR + TNR) / 2
- Approximate AUC = (TPR + TNR) / 2 = (1 + TPR − FPR) / 2
- Accuracy = (TP + TN) / (TP + FP + TN + FN)
- F1 score = 2TP / (2TP + FP + FN)
- MCC = (TP×TN − FP×FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN))

Important note: a true ROC AUC normally requires prediction scores or many thresholds. With only one confusion matrix, this calculator reports the common single-threshold approximation based on the ROC point defined by FPR and TPR.
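The formulas above translate directly into code. The sketch below is a hypothetical reference implementation (the function name and dictionary keys are assumptions, not the calculator's internals):

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Compute the metrics this page reports from one confusion matrix."""
    tpr = tp / (tp + fn)                       # sensitivity / recall
    tnr = tn / (tn + fp)                       # specificity
    fpr = fp / (fp + tn)                       # false positive rate
    return {
        "sensitivity": tpr,
        "specificity": tnr,
        "fpr": fpr,
        "balanced_accuracy": (tpr + tnr) / 2,
        "auc_approx": (1 + tpr - fpr) / 2,     # equals (TPR + TNR) / 2
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
        ),
    }
```

Note that `auc_approx` and `balanced_accuracy` coincide, which is exactly the point of the single-threshold approximation.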
How to Use This Calculator
- Enter the four confusion matrix counts: TP, FP, TN, and FN.
- Set the positive and negative class labels if you want custom names.
- Choose how many decimal places should appear in the output.
- Click Calculate AUC to show results below the header and above the form.
- Review the approximate AUC, sensitivity, specificity, F1 score, MCC, and related diagnostics.
- Inspect the Plotly ROC-style graph to see the operating point and approximation path.
- Use Download CSV or Download PDF to export the current analysis.
- Compare your result with the example table to benchmark model behavior quickly.
FAQs
1) What does this calculator estimate?
It estimates a single-threshold AUC approximation from TP, FP, TN, and FN. It also reports sensitivity, specificity, balanced accuracy, F1 score, MCC, and several related classification metrics.
2) Is this the same as true ROC AUC?
No. True ROC AUC usually needs prediction probabilities or scores across many thresholds. A single confusion matrix only supports an approximation built from one operating point.
3) Why does the formula use balanced accuracy?
For one ROC point, the trapezoid through (0,0), (FPR,TPR), and (1,1) has area equal to (TPR + TNR) / 2. That is also balanced accuracy.
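This equality is easy to check numerically. The sketch below applies the trapezoid rule to the two-segment path through the single ROC point (the helper name is illustrative):

```python
def trapezoid_auc(fpr, tpr):
    # Area under the two-segment path (0,0) -> (fpr,tpr) -> (1,1)
    xs = [0.0, fpr, 1.0]
    ys = [0.0, tpr, 1.0]
    area = 0.0
    for i in range(1, len(xs)):
        area += (xs[i] - xs[i - 1]) * (ys[i] + ys[i - 1]) / 2
    return area

tpr, fpr = 92 / 110, 14 / 150       # Model A's operating point
tnr = 1 - fpr
assert abs(trapezoid_auc(fpr, tpr) - (tpr + tnr) / 2) < 1e-12
```

The assertion holds for any operating point, since the trapezoid area simplifies algebraically to (TPR + TNR) / 2.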
4) When is this approximation useful?
It is useful for quick threshold reviews, dashboard summaries, and side-by-side model checks when only the confusion matrix is available.
5) Can I use precision and recall alone?
Not reliably. You need the original confusion matrix counts, or enough information to rebuild them, before computing this AUC approximation and the other metrics.
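If you do have precision, recall, the number of actual positives, and the dataset size, the counts can be rebuilt. This is a hypothetical helper, not a feature of the calculator:

```python
def rebuild_counts(precision, recall, positives, total):
    """Recover TP, FP, TN, FN from precision, recall, actual-positive
    count, and total sample count."""
    tp = recall * positives          # recall = TP / (TP + FN)
    fn = positives - tp
    fp = tp / precision - tp         # precision = TP / (TP + FP)
    tn = total - tp - fp - fn
    return round(tp), round(fp), round(tn), round(fn)
```

With fewer than all four of those quantities, the system is underdetermined and the counts cannot be recovered.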
6) Does this help with imbalanced datasets?
Yes. Metrics like balanced accuracy, sensitivity, specificity, and MCC often tell a better story than plain accuracy when the positive class is rare.
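A small invented example makes the gap concrete. With 10 positives against 990 negatives, a model that misses half the positives can still post high plain accuracy:

```python
# Hypothetical imbalanced case: 10 actual positives, 990 actual negatives.
tp, fn, tn, fp = 5, 5, 980, 10

accuracy = (tp + tn) / (tp + fp + tn + fn)          # 0.985, looks strong
balanced = (tp / (tp + fn) + tn / (tn + fp)) / 2    # ~0.745, tells the truth
```

Plain accuracy is dominated by the abundant negative class, while balanced accuracy weights each class equally and exposes the 50% miss rate on positives.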
7) What approximate AUC values are usually considered good?
Many teams read 0.90+ as excellent, 0.80–0.89 as good, 0.70–0.79 as fair, and below 0.70 as weak. Context still matters.
8) Do class names change the calculation?
The numeric result stays the same if counts stay consistent. However, changing which class is considered positive changes how sensitivity and specificity are interpreted.
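The swap is mechanical: relabeling the classes exchanges TP with TN and FP with FN, so sensitivity and specificity trade places while their mean, the AUC approximation, is unchanged. A quick sketch with Model A's counts:

```python
def sens_spec(tp, fp, tn, fn):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(92, 14, 136, 18)            # Model A as given
# Relabel: the negative class becomes "positive" (TP<->TN, FP<->FN).
sens2, spec2 = sens_spec(136, 18, 92, 14)

assert (sens2, spec2) == (spec, sens)              # roles swap
assert (sens + spec) / 2 == (sens2 + spec2) / 2    # AUC approx unchanged
```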