ROC Plot
Calculator Inputs
Enter confusion matrix counts. Optionally add threshold, sensitivity, and specificity rows to estimate AUC from ROC points.
Example Data Table
Use the sample counts and threshold rows below to validate the calculator or explain model performance in audits, presentations, and comparison reports.
| Metric Input | Example Value | Meaning |
|---|---|---|
| True Positives | 86 | Correctly predicted positives. |
| False Positives | 14 | Negatives predicted as positives. |
| True Negatives | 126 | Correctly predicted negatives. |
| False Negatives | 24 | Positives predicted as negatives. |
| Total Records | 250 | Total evaluated observations. |
Example ROC threshold rows:

| Threshold | Sensitivity | Specificity | False Positive Rate |
|---|---|---|---|
| 0.90 | 0.48 | 0.99 | 0.01 |
| 0.70 | 0.69 | 0.95 | 0.05 |
| 0.50 | 0.78 | 0.90 | 0.10 |
| 0.30 | 0.90 | 0.72 | 0.28 |
| 0.10 | 0.98 | 0.40 | 0.60 |
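The AUC the calculator reports from rows like these can be reproduced with the trapezoidal rule. A minimal Python sketch using the sample rows above; anchoring the curve at (0, 0) and (1, 1) is an assumption about how the endpoints are closed, not a documented behavior of the calculator:

```python
# (FPR, TPR) pairs from the sample threshold rows above,
# where TPR = sensitivity and FPR = 1 - specificity
roc_points = [(0.01, 0.48), (0.05, 0.69), (0.10, 0.78),
              (0.28, 0.90), (0.60, 0.98)]

# Assumed anchoring: close the curve at (0, 0) and (1, 1), sort by FPR
points = sorted([(0.0, 0.0)] + roc_points + [(1.0, 1.0)])

# Trapezoidal rule: sum the area of each segment under the curve
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))

print(f"AUC = {auc:.4f}")
```

For these sample rows the trapezoidal estimate works out to just over 0.91, which is why adding more threshold rows (rather than a single confusion matrix) is what unlocks the AUC output.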
Formulas Used
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
F1 Score = 2 × Precision × Sensitivity / (Precision + Sensitivity)
Balanced Accuracy = (Sensitivity + Specificity) / 2
False Positive Rate = FP / (FP + TN) = 1 - Specificity
Youden Index = Sensitivity + Specificity - 1
LR+ = Sensitivity / (1 - Specificity)
LR- = (1 - Sensitivity) / Specificity
MCC = ((TP×TN) - (FP×FN)) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN))
AUC = Trapezoidal area under ROC curve
AUC requires multiple threshold points. Each row contributes one ROC coordinate where TPR = Sensitivity and FPR = 1 - Specificity.
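The formulas above can be checked against the sample counts from the example table. A short Python sketch (the counts are the sample values, and the variable names are illustrative, not the calculator's internals):

```python
import math

# Sample confusion-matrix counts from the example table
TP, FP, TN, FN = 86, 14, 126, 24

sensitivity = TP / (TP + FN)                    # 86 / 110
specificity = TN / (TN + FP)                    # 126 / 140
precision = TP / (TP + FP)
accuracy = (TP + TN) / (TP + TN + FP + FN)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
balanced_accuracy = (sensitivity + specificity) / 2
fpr = FP / (FP + TN)                            # = 1 - specificity
youden = sensitivity + specificity - 1
lr_pos = sensitivity / (1 - specificity)
lr_neg = (1 - sensitivity) / specificity
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)

print(f"Sensitivity: {sensitivity:.3f}")  # 0.782
print(f"Specificity: {specificity:.3f}")  # 0.900
print(f"Accuracy:    {accuracy:.3f}")     # 0.848
print(f"MCC:         {mcc:.3f}")          # 0.691
```

Note that the computed sensitivity (≈0.78) and specificity (0.90) match the 0.50 row of the threshold table, which is consistent with the sample counts describing that single cutoff.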
How to Use This Calculator
- Enter your confusion matrix counts for true positives, false positives, true negatives, and false negatives.
- Set decimal places and an export label if you want cleaner report outputs.
- Add optional threshold rows in the format `threshold,sensitivity,specificity`.
- Click Calculate Metrics to display results above the form under the page header.
- Review the ROC plot, summary interpretation, and advanced metrics for model evaluation.
- Use Download CSV or Download PDF to export the current results.
FAQs
1. Why is AUC unavailable from counts alone?
A single confusion matrix describes one threshold only. AUC needs multiple threshold points or prediction scores to trace the ROC curve.
2. What does high sensitivity mean?
High sensitivity means the model catches most actual positives. It is useful when missing a positive case is expensive or risky.
3. What does high specificity mean?
High specificity means the model rejects most actual negatives correctly. It matters when false alarms create cost, friction, or unnecessary action.
4. When should I enter threshold rows?
Enter threshold rows when you have validation results across several cutoffs. That lets the calculator estimate AUC and plot a real ROC curve.
5. Is a higher AUC always better?
Usually yes for discrimination, but not always for deployment. Threshold choice, class imbalance, costs, and calibration still matter in practice.
6. Can accuracy be misleading?
Yes. Accuracy can look strong on imbalanced datasets while the model misses many positives. Sensitivity, specificity, and MCC often reveal more.
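A quick illustration of the imbalance trap, using made-up counts (950 negatives, 50 positives, and a model that predicts everything negative):

```python
# Hypothetical imbalanced test set: the model always predicts "negative",
# so it never produces a true positive or a false positive
TP, FP, TN, FN = 0, 0, 950, 50

accuracy = (TP + TN) / (TP + TN + FP + FN)   # looks strong: 0.95
sensitivity = TP / (TP + FN)                 # catches no positives: 0.0

print(accuracy, sensitivity)  # 0.95 0.0
```

Here MCC is undefined (its denominator contains TP + FP = 0), which is itself a warning sign that the classifier is degenerate.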
7. What is the Youden index?
The Youden index equals sensitivity plus specificity minus one. It summarizes separation quality and helps compare thresholds objectively.
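Applied to the sample threshold rows from the example table, the Youden index singles out the 0.50 cutoff. A small sketch (the rows are the sample values, not real validation data):

```python
# (threshold, sensitivity, specificity) from the sample rows
rows = [(0.90, 0.48, 0.99), (0.70, 0.69, 0.95), (0.50, 0.78, 0.90),
        (0.30, 0.90, 0.72), (0.10, 0.98, 0.40)]

# Youden index J = sensitivity + specificity - 1 for each cutoff
best = max(rows, key=lambda r: r[1] + r[2] - 1)
print(f"Best threshold: {best[0]}, J = {best[1] + best[2] - 1:.2f}")
```

The 0.50 row scores J = 0.68, ahead of its neighbors (0.64 at 0.70 and 0.62 at 0.30), so it would be the objective pick under this criterion.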
8. Which metric should I optimize first?
Optimize the metric that matches business cost. For screening, prioritize sensitivity. For strict confirmation, prioritize specificity. For ranking, review AUC.