Calculator Inputs
Enter the four confusion matrix counts (TP, TN, FP, FN) and optional class labels.
Example Data Table
| Scenario | TP | TN | FP | FN | MCC | Accuracy |
|---|---|---|---|---|---|---|
| Perfect model | 48 | 50 | 0 | 0 | 1.0000 | 100.00% |
| Balanced strong model | 42 | 40 | 8 | 10 | 0.6405 | 82.00% |
| Moderate model | 30 | 35 | 15 | 20 | 0.3015 | 65.00% |
| Weak model | 18 | 22 | 20 | 25 | -0.0579 | 47.06% |
Formula Used
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
The Matthews correlation coefficient summarizes binary classification quality in one balanced score. It ranges from -1 to +1. A value near +1 shows strong agreement. A value near 0 suggests random-like behavior. A negative value signals systematic disagreement.
MCC is undefined whenever any marginal sum in the denominator is zero. That happens when a model predicts only one class, or when the observed data contains only one class.
How to Use This Calculator
- Enter the model name for reporting clarity.
- Set the positive and negative class labels.
- Fill in TP, TN, FP, and FN counts.
- Press Calculate MCC to view the full results.
- Review MCC beside precision, recall, specificity, and F1 score.
- Use the graph to compare metric balance quickly.
- Export the summary as CSV or PDF when needed.
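The companion metrics shown beside MCC follow from the same four counts. A minimal sketch (the `summarize` helper is an assumption for illustration, not the calculator's internal code):

```python
import math

def summarize(tp, tn, fp, fn):
    """Compute the metrics reported beside MCC from the four counts."""
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    denom = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(denom) if denom else None
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "mcc": mcc}

for name, value in summarize(42, 40, 8, 10).items():
    print(f"{name}: {value:.4f}")
```

Running this on the "Balanced strong model" row reproduces the MCC of 0.6405 from the table.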
Frequently Asked Questions
1. What does MCC measure?
MCC measures agreement between predicted and actual binary classes. It uses all four confusion matrix cells. That makes it more balanced than accuracy when classes are uneven.
2. Why is MCC better than accuracy sometimes?
Accuracy can look strong on imbalanced data. MCC punishes models that ignore minority cases. It reveals whether predictions are balanced across both positive and negative outcomes.
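A quick hypothetical shows the gap: on a 95:5 imbalanced set, a model that finds only 1 of 5 positives still reaches 95% accuracy, while its MCC stays low:

```python
import math

def mcc(tp, tn, fp, fn):
    denom = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    return (tp * tn - fp * fn) / math.sqrt(denom) if denom else None

# Hypothetical imbalanced set: 5 positives, 95 negatives;
# the model recovers only one true positive.
tp, tn, fp, fn = 1, 94, 1, 4
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy = {accuracy:.2%}")             # 95.00%, looks strong
print(f"mcc      = {mcc(tp, tn, fp, fn):.4f}")  # 0.2950, reveals the weakness
```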
3. What is a good MCC value?
Values near 1 are excellent. Values around 0 show weak or random-like performance. Negative values mean the model tends to disagree with the actual labels.
4. Can MCC be negative?
Yes. A negative MCC means the prediction pattern is worse than random alignment. It often suggests reversed decision logic, poor thresholds, or severely misleading signals.
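Reversed decision logic can be demonstrated directly: flipping every prediction swaps TP with FN and TN with FP, which leaves the denominator unchanged and negates the numerator, so MCC flips sign exactly. A sketch using the "Weak model" row from the table:

```python
import math

def mcc(tp, tn, fp, fn):
    denom = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    return (tp * tn - fp * fn) / math.sqrt(denom) if denom else None

tp, tn, fp, fn = 18, 22, 20, 25           # weak model from the table
print(round(mcc(tp, tn, fp, fn), 4))      # -0.0579
print(round(mcc(fn, fp, tn, tp), 4))      # reversed predictions: +0.0579
```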
5. When is MCC undefined?
MCC becomes undefined when any marginal sum in the denominator equals zero. This usually happens when predictions contain only one class, or the observed data contains only one class.
6. Does MCC work with imbalanced datasets?
Yes. MCC is widely preferred for imbalanced binary classification. It combines TP, TN, FP, and FN into one score, so it reflects both classes fairly.
7. Should I use MCC with F1 score?
Yes. F1 focuses on positive class precision and recall. MCC adds a fuller view because it also includes true negatives and false positives in one balanced measure.
8. What inputs do I need for this calculator?
You need four confusion matrix counts: true positives, true negatives, false positives, and false negatives. Optional class labels make the output easier to read.