Matthews Correlation Coefficient Calculator

Measure balanced quality from confusion matrix counts. See MCC, accuracy, precision, recall, specificity, and F1. Export clear summaries for audits, benchmarks, and model reviews.

Calculator Inputs

Enter confusion matrix counts to evaluate binary classifier quality with a balanced correlation metric.

Formula used

Matthews Correlation Coefficient Formula

MCC = (TP × TN − FP × FN) / √[(TP + FP)(TP + FN)(TN + FP)(TN + FN)]

MCC measures the correlation between predicted and actual binary classes. Unlike accuracy, it remains useful on imbalanced datasets because it incorporates all four confusion matrix outcomes: true positives, true negatives, false positives, and false negatives.

The calculator also derives supporting metrics such as precision, recall, specificity, balanced accuracy, F1 score, informedness, markedness, prevalence, and error rates for broader model evaluation.
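The formula and the supporting metrics described above can be sketched in Python. This is a minimal illustration of the definitions, not the calculator's actual implementation; the zero-denominator convention (returning 0) is an assumption noted in the comments.

```python
from math import sqrt

def mcc_metrics(tp, tn, fp, fn):
    """Compute MCC and common supporting metrics from confusion matrix counts."""
    total = tp + tn + fp + fn
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0  # convention: 0 when undefined
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # a.k.a. sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    npv = tn / (tn + fn) if tn + fn else 0.0             # negative predictive value
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "balanced_accuracy": (recall + specificity) / 2,
        "f1": f1,
        "informedness": recall + specificity - 1,        # Youden's J
        "markedness": precision + npv - 1,
        "prevalence": (tp + fn) / total,
        "error_rate": (fp + fn) / total,
        "mcc": mcc,
    }
```

Each metric follows directly from the four counts, so a single pass over the confusion matrix is enough to produce the full report.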

How to use this calculator

How to use

  1. Enter the four confusion matrix counts: TP, TN, FP, and FN.
  2. Choose how many decimal places you want in the output.
  3. Click Calculate MCC to show the results above the form.
  4. Review MCC first, then compare precision, recall, specificity, and balanced accuracy.
  5. Use CSV or PDF export to save model evaluation records for reporting or benchmarking.
Example data table

Example confusion matrix and output

Inputs:   TP = 86   TN = 140   FP = 12   FN = 18   Total = 256

Outputs:  Accuracy  Precision  Recall   Specificity  F1      Balanced Accuracy  MCC
          88.28%    87.76%     82.69%   92.11%       85.15%  87.40%             0.7558

This sample shows strong overall classifier agreement with a positive MCC and well-balanced sensitivity and specificity.
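The sample row can be reproduced with a short, self-contained script using the MCC formula from this page:

```python
from math import sqrt

# Sample confusion matrix counts from the example table above.
tp, tn, fp, fn = 86, 140, 12, 18

total = tp + tn + fp + fn
accuracy = (tp + tn) / total
mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"Total={total}  Accuracy={accuracy:.2%}  MCC={mcc:.4f}")
# Total=256  Accuracy=88.28%  MCC=0.7558
```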

FAQs

Frequently asked questions

1. What does MCC measure?

MCC measures how strongly predicted classes align with actual classes. It uses all confusion matrix cells, making it more balanced than accuracy alone.

2. Why is MCC useful for imbalanced data?

Imbalanced datasets can make accuracy look strong even when minority detection is poor. MCC reduces that distortion by accounting for TP, TN, FP, and FN together.
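As an illustration, consider hypothetical counts for a test set that is 95% negative. The counts below are invented for this example only:

```python
from math import sqrt

# Hypothetical imbalanced test set: 95 negatives, 5 positives.
# The model finds only 1 of the 5 positives, yet accuracy still looks strong.
tp, tn, fp, fn = 1, 94, 1, 4

accuracy = (tp + tn) / (tp + tn + fp + fn)
mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"accuracy={accuracy:.2f}  mcc={mcc:.4f}")  # high accuracy, weak MCC
```

Here accuracy reads 0.95 while MCC comes out near 0.3, flagging the weak minority-class detection that accuracy hides.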

3. What is a good MCC value?

Values near 1 indicate strong positive agreement. Values near 0 suggest random-like behavior. Negative values mean predictions are often opposite to the real labels.

4. Can MCC be undefined?

Yes. MCC becomes undefined when the denominator contains a zero term. This usually happens when a class or prediction direction is completely missing.
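A sketch of one way to handle the undefined case, returning None rather than dividing by zero (some tools instead return 0 by convention):

```python
from math import sqrt

def safe_mcc(tp, tn, fp, fn):
    """Return MCC, or None when the formula is undefined (zero denominator)."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return None  # a common alternative convention is to return 0 here
    return (tp * tn - fp * fn) / denom

# A model that never predicts positive: TP + FP == 0, so MCC is undefined.
print(safe_mcc(0, 95, 0, 5))  # None
```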

5. Is MCC better than F1 score?

They answer different questions. F1 focuses on precision and recall, while MCC includes true negatives too. MCC usually gives a more complete view of binary performance.
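The difference shows up clearly on skewed data. The counts below are hypothetical, chosen to illustrate a model that labels almost everything positive on a positive-heavy set:

```python
from math import sqrt

# Hypothetical run on a positive-heavy set (95 positives, 5 negatives).
tp, tn, fp, fn = 90, 1, 4, 5

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"F1={f1:.3f}  MCC={mcc:.3f}")  # F1 looks excellent; MCC is near zero
```

Because F1 ignores true negatives, it rewards the blanket-positive strategy; MCC, which weighs all four cells, stays close to zero.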

6. Does this calculator support multiclass models?

This page is designed for binary classification confusion matrices. Multiclass MCC needs an expanded formulation or one-vs-rest evaluation per class.
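The one-vs-rest approach mentioned above can be sketched in pure Python: each class is treated as "positive" in turn and scored with the binary formula. This is an illustrative sketch, not a feature of this calculator:

```python
from math import sqrt

def binary_mcc(tp, tn, fp, fn):
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def one_vs_rest_mcc(y_true, y_pred):
    """Per-class MCC: treat each class as 'positive' and the rest as 'negative'."""
    scores = {}
    for cls in set(y_true) | set(y_pred):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        scores[cls] = binary_mcc(tp, tn, fp, fn)
    return scores

print(one_vs_rest_mcc(["a", "a", "b", "b", "c", "c"],
                      ["a", "a", "b", "c", "c", "b"]))
```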

7. What inputs do I need?

You only need true positives, true negatives, false positives, and false negatives. These values usually come from validation, test, or thresholded prediction results.

8. Can I save the result for reports?

Yes. Use the CSV export for spreadsheets and the PDF export for printable summaries, benchmarking packets, or model governance documentation.

Related Calculators

Area Under PR Curve Calculator
Kolmogorov–Smirnov Statistic Calculator

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of the results. Please consult other sources as well.