Calculator Inputs
Enter confusion matrix counts to evaluate binary classifier quality with a balanced correlation metric.
Matthews Correlation Coefficient Formula
MCC measures the correlation between predicted and actual binary classes. Unlike accuracy, it remains useful on imbalanced datasets because it incorporates all four confusion matrix outcomes: true positives, true negatives, false positives, and false negatives.
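For reference, the standard formula is MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). A minimal Python sketch of that formula (an illustration, not the calculator's own implementation):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from the four confusion matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator

print(round(mcc(86, 140, 12, 18), 4))  # prints 0.7558
```

Because the numerator subtracts the product of the error cells from the product of the correct cells, a classifier must do well on both classes to score high.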
The calculator also derives supporting metrics such as precision, recall, specificity, balanced accuracy, F1 score, informedness, markedness, prevalence, and error rates for broader model evaluation.
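The supporting metrics are all simple ratios of the same four counts. A sketch of how they could be derived (the function name is hypothetical, not the calculator's API):

```python
def supporting_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Derived binary-classification metrics from confusion matrix counts."""
    total = tp + tn + fp + fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity / true positive rate
    specificity = tn / (tn + fp)       # true negative rate
    npv = tn / (tn + fn)               # negative predictive value
    return {
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "balanced_accuracy": (recall + specificity) / 2,
        "f1": 2 * precision * recall / (precision + recall),
        "informedness": recall + specificity - 1,
        "markedness": precision + npv - 1,
        "prevalence": (tp + fn) / total,
        "error_rate": (fp + fn) / total,
    }

m = supporting_metrics(86, 140, 12, 18)
print(round(m["f1"], 4))  # prints 0.8515
```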
How to use
- Enter the four confusion matrix counts: TP, TN, FP, and FN.
- Choose how many decimal places you want in the output.
- Click Calculate MCC to show the results above the form.
- Review MCC first, then compare precision, recall, specificity, and balanced accuracy.
- Use CSV or PDF export to save model evaluation records for reporting or benchmarking.
Example confusion matrix and output
| TP | TN | FP | FN | Total | Accuracy | Precision | Recall | Specificity | F1 | Balanced Accuracy | MCC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 86 | 140 | 12 | 18 | 256 | 88.28% | 87.76% | 82.69% | 92.11% | 85.15% | 87.40% | 0.7558 |
This sample shows strong overall classifier agreement with a positive MCC and well-balanced sensitivity and specificity.
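The sample row can be reproduced with plain arithmetic, which is a quick way to sanity-check any result from the form:

```python
import math

# Counts from the example table on this page
tp, tn, fp, fn = 86, 140, 12, 18

total = tp + tn + fp + fn
accuracy = (tp + tn) / total
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)
print(total, f"{accuracy:.2%}", round(mcc, 4))  # prints: 256 88.28% 0.7558
```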
Frequently asked questions
1. What does MCC measure?
MCC measures how strongly predicted classes align with actual classes. It uses all confusion matrix cells, making it more balanced than accuracy alone.
2. Why is MCC useful for imbalanced data?
Imbalanced datasets can make accuracy look strong even when minority detection is poor. MCC reduces that distortion by accounting for TP, TN, FP, and FN together.
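A small illustration of that distortion, using a made-up 100-sample dataset with only 5 positives:

```python
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom

# 100 samples, 5 positives; the model finds just 1 of them
tp, tn, fp, fn = 1, 94, 1, 4
accuracy = (tp + tn) / 100
score = mcc(tp, tn, fp, fn)
print(accuracy, round(score, 4))  # accuracy 0.95 looks strong; MCC is only ~0.29
```

Accuracy reads 95% while MCC stays near 0.29, flagging that minority-class detection is weak.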
3. What is a good MCC value?
MCC ranges from -1 to +1. Values near 1 indicate strong positive agreement, values near 0 suggest random-like behavior, and negative values mean predictions tend to be opposite to the actual labels.
4. Can MCC be undefined?
Yes. MCC is undefined when any of the four sums in its denominator (TP+FP, TP+FN, TN+FP, TN+FN) is zero. This usually happens when a class or prediction direction is completely missing, for example when the model never predicts the positive class.
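In code, the undefined case is easy to guard. One common convention is to return a sentinel (or 0) when the denominator vanishes; this sketch uses `None`:

```python
import math

def safe_mcc(tp, tn, fp, fn):
    """Return MCC, or None when the denominator is zero (MCC undefined)."""
    denom_sq = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    if denom_sq == 0:
        return None  # e.g. the model never predicted the positive class
    return (tp * tn - fp * fn) / math.sqrt(denom_sq)

print(safe_mcc(0, 95, 0, 5))  # all-negative predictions -> None
```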
5. Is MCC better than F1 score?
They answer different questions. F1 balances precision and recall but ignores true negatives entirely, while MCC incorporates all four counts. MCC therefore usually gives a more complete view of binary performance.
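The difference is easy to demonstrate: two confusion matrices with identical TP, FP, and FN but very different TN produce the same F1 yet very different MCC values.

```python
import math

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def mcc(tp, tn, fp, fn):
    return (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

# Same TP/FP/FN, different TN: F1 is identical, MCC is not.
print(f1(10, 10, 10), round(mcc(10, 10, 10, 10), 4))    # 0.5 and 0.0
print(f1(10, 10, 10), round(mcc(10, 1000, 10, 10), 4))  # 0.5 and 0.4901
```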
6. Does this calculator support multiclass models?
This page is designed for binary classification confusion matrices. Multiclass MCC needs an expanded formulation or one-vs-rest evaluation per class.
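One-vs-rest evaluation can be sketched as follows: each class is treated as "positive" in turn and binary MCC is computed against all other classes combined (this is an illustration of the approach, not a feature of this calculator):

```python
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0  # 0 when undefined

def one_vs_rest_mcc(y_true, y_pred, labels):
    """Per-class MCC, treating each label as the positive class in turn."""
    scores = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        tn = sum(t != c and p != c for t, p in zip(y_true, y_pred))
        scores[c] = mcc(tp, tn, fp, fn)
    return scores

y_true = ["a", "a", "b", "b", "c", "c"]
y_pred = ["a", "b", "b", "b", "c", "a"]
scores = one_vs_rest_mcc(y_true, y_pred, ["a", "b", "c"])
print(scores)
```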
7. What inputs do I need?
You only need true positives, true negatives, false positives, and false negatives. These values usually come from validation, test, or thresholded prediction results.
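If you have raw scores rather than counts, the four values fall out of a simple threshold pass over labeled predictions (a sketch with made-up data; the function name is hypothetical):

```python
def confusion_counts(y_true, y_score, threshold=0.5):
    """Tally TP, TN, FP, FN from labels and scores at a decision threshold."""
    tp = tn = fp = fn = 0
    for truth, score in zip(y_true, y_score):
        pred = score >= threshold
        if truth and pred:
            tp += 1
        elif truth and not pred:
            fn += 1
        elif not truth and pred:
            fp += 1
        else:
            tn += 1
    return tp, tn, fp, fn

y_true = [1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.4, 0.2, 0.7, 0.8, 0.1]
print(confusion_counts(y_true, y_score))  # prints (2, 2, 1, 1)
```

Feed the resulting four counts into the calculator form to get MCC and the supporting metrics.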
8. Can I save the result for reports?
Yes. Use the CSV export for spreadsheets and the PDF export for printable summaries, benchmarking packets, or model governance documentation.