Calculate Binary Cross Entropy Online

Analyze binary classification error using single values or batches. Export reports and compare prediction behavior. Understand every entry before tuning thresholds or retraining models.

Binary Cross Entropy Calculator

This calculator measures how well predicted probabilities match binary labels. It accepts single values or full batches. You can evaluate average loss, weighted loss, and classification accuracy from one clean page.

Enter labels as 0 or 1 and predicted probabilities between 0 and 1. Optional sample weights let you emphasize specific rows, and a positive class weight helps when one class matters more.

Formula Used

For each row, the binary cross entropy loss is:

L = -[pos_weight × y × log(p) + (1 - y) × log(1 - p)]

Here, y is the true label, p is the predicted probability after epsilon clamping, and pos_weight scales the positive class term.

If sample weights are provided, the weighted row loss is:

Weighted Row Loss = sample_weight × L

The weighted mean reported by this page is:

Weighted Mean BCE = Σ(sample_weight × L) / Σ(sample_weight)
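The formulas above can be sketched in a few lines of Python. This is an illustrative implementation, not the page's own code; the function and parameter names (`bce_rows`, `weighted_mean_bce`, `pos_weight`, `eps`) are our own.

```python
import math

def bce_rows(labels, probs, sample_weights=None, pos_weight=1.0, eps=1e-12):
    """Per-row binary cross entropy with epsilon clamping and optional weights."""
    if sample_weights is None:
        sample_weights = [1.0] * len(labels)
    rows = []
    for y, p, w in zip(labels, probs, sample_weights):
        p = min(max(p, eps), 1.0 - eps)   # clamp so log(0) never occurs
        loss = -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
        rows.append(w * loss)             # weighted row loss
    return rows

def weighted_mean_bce(labels, probs, sample_weights=None, pos_weight=1.0):
    sw = sample_weights or [1.0] * len(labels)
    return sum(bce_rows(labels, probs, sw, pos_weight)) / sum(sw)
```

With unit sample weights and pos_weight = 1, this reduces to the standard (unweighted) mean binary cross entropy.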

How to Use This Calculator

  1. Enter true labels as 0 and 1 values.
  2. Enter predicted probabilities in the same order.
  3. Add sample weights if you want weighted evaluation.
  4. Set threshold, epsilon, positive class weight, and log base.
  5. Choose the primary reported result type.
  6. Click the calculate button.
  7. Review the summary, detailed table, and graph.
  8. Download the result set as CSV or PDF when needed.

Example Data Table

#   True Label   Prediction   Sample Weight
1   1            0.91         1.00
2   0            0.14         1.00
3   1            0.77         1.50
4   1            0.63         1.00
5   0            0.22         0.80
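The example table can be worked through by hand (or in a short script). The sketch below assumes the natural log, a positive class weight of 1, and the epsilon clamp described earlier:

```python
import math

# Rows from the example table above
labels  = [1, 0, 1, 1, 0]
probs   = [0.91, 0.14, 0.77, 0.63, 0.22]
weights = [1.00, 1.00, 1.50, 1.00, 0.80]

eps = 1e-12
row_losses = []
for y, p in zip(labels, probs):
    p = min(max(p, eps), 1 - eps)   # clamp away from 0 and 1
    row_losses.append(-(y * math.log(p) + (1 - y) * math.log(1 - p)))

# Weighted mean = sum(weight * loss) / sum(weight)
weighted_mean = sum(w * l for w, l in zip(weights, row_losses)) / sum(weights)
print(round(weighted_mean, 4))   # 0.2449
```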

You can load these same values into the form by pressing the example button above.

Frequently Asked Questions

1. What does binary cross entropy measure?

It measures the difference between true binary labels and predicted probabilities. Lower values indicate better probability estimates and better model calibration for binary classification tasks.

2. Why must labels be only 0 or 1?

Standard binary cross entropy is designed for binary outcomes. This calculator follows that rule, so labels must represent negative and positive classes directly.

3. Why is epsilon clamping used?

Clamping prevents logarithms of 0, which are undefined. It keeps the calculation stable when predictions are exactly 0 or exactly 1.
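A minimal illustration of why the clamp matters (the `clamp` helper is our own name, not part of the calculator):

```python
import math

def clamp(p, eps=1e-12):
    # keep p inside (eps, 1 - eps) so log(p) and log(1 - p) stay finite
    return min(max(p, eps), 1.0 - eps)

# Without clamping, a prediction of exactly 0 would hit math.log(0.0),
# which raises ValueError. With clamping the loss stays large but finite:
loss = -math.log(clamp(0.0))   # ≈ 27.63 with eps = 1e-12
```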

4. What does positive class weight do?

It increases the penalty on mistakes involving positive labels. This is useful when positive outcomes are rarer or more important in your dataset.

5. What are sample weights for?

Sample weights let you give certain rows more influence. They are helpful when observations differ in reliability, importance, or frequency.

6. What does the threshold affect?

The threshold does not change the loss formula itself. It only changes the predicted class used for the accuracy value shown in the summary.
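As a sketch, the threshold only enters when converting probabilities to predicted classes for the accuracy figure (variable names here are illustrative):

```python
labels = [1, 0, 1, 1, 0]
probs  = [0.91, 0.14, 0.77, 0.63, 0.22]

threshold = 0.5
predicted = [1 if p >= threshold else 0 for p in probs]
accuracy = sum(yhat == y for yhat, y in zip(predicted, labels)) / len(labels)
print(accuracy)   # 1.0 — changing the threshold changes this, not the loss
```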

7. Which reported result should I choose?

Choose weighted mean for normalized comparison, weighted sum for total penalty, and per-sample table when you want to inspect each observation individually.

8. Can I use this for batch evaluation?

Yes. Paste lists of labels and probabilities separated by commas, spaces, or line breaks. The calculator processes all rows together and exports the results.
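Input tokenized this way can be sketched with a single regular expression split; this is an assumed parsing approach, not the calculator's actual code:

```python
import re

def parse_numbers(text):
    # split on any run of commas, spaces, tabs, or line breaks; drop empties
    return [float(tok) for tok in re.split(r"[,\s]+", text.strip()) if tok]

values = parse_numbers("0.91, 0.14 0.77\n0.63,0.22")
# values == [0.91, 0.14, 0.77, 0.63, 0.22]
```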

Related Calculators

joint probability calculator · huffman coding calculator · maximum likelihood estimate calculator · conditional probability calculator · information content calculator · log likelihood ratio calculator · shannon fano coding calculator · run length encoding calculator · golomb coding calculator · relative entropy calculator

Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of the results. Please consult other sources as well.