Analyze per-class precision across imbalanced machine learning labels. Compare per-class values before averaging the results. Download tables quickly for validation, review, and documentation workflows.
| Class | TP | FP | Precision |
|---|---|---|---|
| Cat | 42 | 8 | 0.8400 |
| Dog | 35 | 10 | 0.7778 |
| Bird | 18 | 6 | 0.7500 |
Example macro average precision = (0.8400 + 0.7778 + 0.7500) / 3 = 0.7893
Precision for each class: Precision_i = TP_i / (TP_i + FP_i)
Macro average precision: Macro Precision = (P_1 + P_2 + ... + P_k) / k, where P_i is the precision of class i and k is the number of classes
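The per-class values and the macro average above can be verified with a few lines of Python. This is a minimal sketch using the example counts from the table; the class names and numbers are just the sample values.

```python
# Per-class precision and macro average, using the counts
# from the example table above.
counts = {
    "Cat":  {"tp": 42, "fp": 8},
    "Dog":  {"tp": 35, "fp": 10},
    "Bird": {"tp": 18, "fp": 6},
}

# Precision_i = TP_i / (TP_i + FP_i)
precisions = {label: c["tp"] / (c["tp"] + c["fp"]) for label, c in counts.items()}

# Macro Precision = (P_1 + P_2 + ... + P_k) / k
macro = sum(precisions.values()) / len(precisions)

for label, p in precisions.items():
    print(f"{label}: {p:.4f}")  # Cat: 0.8400, Dog: 0.7778, Bird: 0.7500
print(f"Macro average precision: {macro:.4f}")  # 0.7893
```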
This method gives each class equal weight. It does not let large classes dominate the final average.
Macro average precision is a useful metric for multiclass classification. It measures how precise a model is for every class. Then it averages those values equally. This makes it helpful when class sizes are uneven. A large class cannot hide weak precision in a small class.
Precision focuses on prediction quality. It answers a simple question: when the model predicts a class, how often is that prediction correct? For each class, you divide true positives by predicted positives, where predicted positives are the sum of true positives and false positives. After that, you average the per-class precision values.
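If you work from raw label arrays rather than aggregated counts, scikit-learn's precision_score computes the same quantities. The label arrays below are hypothetical examples, not output from this calculator.

```python
from sklearn.metrics import precision_score

# Hypothetical true and predicted labels for a three-class problem.
y_true = ["cat", "cat", "dog", "dog", "bird", "bird", "cat", "dog"]
y_pred = ["cat", "dog", "dog", "dog", "bird", "cat", "cat", "bird"]

# average=None returns one precision value per class (alphabetical order here);
# average="macro" takes the unweighted mean of those per-class values.
per_class = precision_score(y_true, y_pred, average=None)
macro = precision_score(y_true, y_pred, average="macro")
print(per_class)  # approximately [0.5, 0.6667, 0.6667] for bird, cat, dog
print(macro)      # approximately 0.6111
```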
Macro averaging treats every class the same. That is important in AI and machine learning projects with imbalance. Fraud detection, medical labeling, document routing, and intent classification often have rare classes. A model may look strong overall but still perform poorly on important minority classes. Macro average precision helps expose that issue.
This calculator lets you enter class-wise true positives and false positives. It then calculates precision for each label and the final macro average precision. You can also control how zero division is handled, which matters when a class has no predicted positives. Some teams count it as zero; others exclude it from reporting.
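The sketch below shows one way those zero-division options could work. The macro_precision helper and the never-predicted Fish class are hypothetical illustrations, not the calculator's actual code.

```python
def macro_precision(counts, zero_division="zero"):
    """Macro average precision with configurable zero-division handling.

    counts: dict mapping label -> (tp, fp).
    zero_division: "zero" scores a never-predicted class as 0.0,
                   "one" scores it as 1.0,
                   "exclude" drops it from the average.
    """
    values = []
    for tp, fp in counts.values():
        predicted = tp + fp
        if predicted == 0:  # class was never predicted, so precision is undefined
            if zero_division == "exclude":
                continue
            values.append(1.0 if zero_division == "one" else 0.0)
        else:
            values.append(tp / predicted)
    return sum(values) / len(values)

# Hypothetical "Fish" class that the model never predicted.
counts = {"Cat": (42, 8), "Dog": (35, 10), "Bird": (18, 6), "Fish": (0, 0)}
print(round(macro_precision(counts, "zero"), 4))     # 0.5919, undefined class counted as 0
print(round(macro_precision(counts, "exclude"), 4))  # 0.7893, same as the three-class example
```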
Use this metric during model evaluation, threshold tuning, and error analysis. It works well alongside recall, F1 score, and confusion matrix review. It is especially valuable when stakeholders care about fairness across classes, and it helps when comparing models that behave differently on rare labels.
A higher macro average precision means cleaner predictions across classes. Still, it should not be used alone. Always check recall and class support too. A complete evaluation gives better model insight and better deployment decisions.
Macro average precision measures the average of class-wise precision values in a multiclass problem. Every class contributes equally, even when some classes have many more samples than others.
Macro precision averages class precision scores equally. Micro precision combines all true positives and false positives first. Macro highlights minority class behavior, while micro is influenced more by larger classes.
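The difference is easy to see in code. This short sketch reuses the example counts from the table above.

```python
# Macro vs. micro precision on the example counts above.
tp = {"Cat": 42, "Dog": 35, "Bird": 18}
fp = {"Cat": 8, "Dog": 10, "Bird": 6}

# Macro: average the per-class precision values (every class counts equally).
macro = sum(tp[c] / (tp[c] + fp[c]) for c in tp) / len(tp)

# Micro: pool all true and false positives before dividing (large classes dominate).
micro = sum(tp.values()) / (sum(tp.values()) + sum(fp.values()))

print(f"macro={macro:.4f}  micro={micro:.4f}")  # macro=0.7893  micro=0.7983
```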
Macro averaging prevents large classes from dominating the final score. That makes it easier to see whether the model is still precise on rare or underrepresented labels.
A zero-division case means the model never predicted that class. This calculator lets you count that case as zero, count it as one, or exclude it from the average.
Higher values usually mean cleaner positive predictions across classes. Still, you should also review recall, F1 score, support, and confusion patterns before making a final model decision.
The calculator accepts decimal values. That can help when you work with weighted counts, averaged folds, or summarized evaluation outputs from multiple experiments.
Export the table when you need audit records, experiment logs, stakeholder reports, or validation notes. It keeps the class-wise precision breakdown easy to review later.
Macro average precision should not be used on its own. It is best paired with recall, F1 score, class support, and confusion matrix analysis. Multiple metrics provide a more reliable view of real model behavior.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.