Calculator inputs
Enter a counts matrix or a joint probability matrix. The form uses a responsive layout that shifts from three columns to two, and then to one, as the screen narrows.
Example data table
This example uses counts for three X states and three Y states. The calculator converts the counts into a joint probability distribution before computing entropy; the sketch after the table shows this conversion in code.
| State | Y1 | Y2 | Y3 |
|---|---|---|---|
| X1 | 12 | 8 | 5 |
| X2 | 7 | 15 | 6 |
| X3 | 4 | 9 | 14 |
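For readers who want to reproduce the conversion, here is a minimal Python sketch of the counts-to-probabilities step using the table above. The variable names are illustrative, not the calculator's internals.

```python
# Counts from the example table; each row is one X state.
counts = [
    [12, 8, 5],   # X1 paired with Y1, Y2, Y3
    [7, 15, 6],   # X2
    [4, 9, 14],   # X3
]

total = sum(sum(row) for row in counts)               # 80 observations
joint = [[c / total for c in row] for row in counts]  # p(xi, yj) per cell

# The resulting grid is a valid joint distribution: it sums to 1.
assert abs(sum(sum(row) for row in joint) - 1.0) < 1e-9
```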
Formula used
Joint entropy measures uncertainty across paired outcomes. The calculator first builds a valid joint distribution, then evaluates the formulas below with your chosen log base.
Joint entropy: H(X,Y) = -Σᵢ Σⱼ p(xᵢ, yⱼ) log_b p(xᵢ, yⱼ)
Marginals: P(X = xᵢ) = Σⱼ p(xᵢ, yⱼ) and P(Y = yⱼ) = Σᵢ p(xᵢ, yⱼ)
Conditional entropies: H(X|Y) = H(X,Y) - H(Y) and H(Y|X) = H(X,Y) - H(X), where H(X) and H(Y) are the entropies of the marginal distributions
Mutual information: I(X;Y) = H(X) + H(Y) - H(X,Y)
Any cell with probability zero contributes zero to the total, because p log_b(p) approaches zero as p approaches zero; the calculator follows the standard convention that 0 log 0 = 0.
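The sketch below evaluates all of the formulas above on a joint matrix like the one built from the example counts. It illustrates the math rather than the calculator's source; the helper names are made up, and zero cells are skipped exactly as described.

```python
import math

def entropy(probs, base=2.0):
    """Shannon entropy of a list of probabilities; zero entries are skipped."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def joint_stats(joint, base=2.0):
    """Entropy quantities for a joint probability matrix (rows = X states)."""
    px = [sum(row) for row in joint]            # P(X = xi): row sums
    py = [sum(col) for col in zip(*joint)]      # P(Y = yj): column sums

    h_xy = entropy([p for row in joint for p in row], base)
    h_x, h_y = entropy(px, base), entropy(py, base)

    return {
        "H(X,Y)": h_xy,
        "H(X|Y)": h_xy - h_y,          # conditional entropy of X given Y
        "H(Y|X)": h_xy - h_x,          # conditional entropy of Y given X
        "I(X;Y)": h_x + h_y - h_xy,    # mutual information
    }
```

Calling joint_stats on the joint matrix from the earlier sketch, with your chosen base, should match what the calculator reports for the same inputs; the default here is base 2, i.e. bits.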
How to use this calculator
1. Choose the matrix size
Select how many X states and Y states you want to model. The input grid updates instantly when dimensions change.
2. Pick counts or probabilities
Use counts for raw observations or probabilities for a ready-made distribution. Probabilities must sum to 1 unless renormalization is enabled.
3. Set the log base
Base 2 returns results in bits, base e (the natural log) returns nats, and base 10 returns bans. A custom base is also available.
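Changing the base only rescales the result, since log_b(x) = ln(x) / ln(b). A quick sketch of the unit conversions:

```python
import math

h_bits = 3.0                       # an entropy measured in bits (base 2)
h_nats = h_bits * math.log(2)      # bits -> nats: multiply by ln 2   (≈ 2.079)
h_bans = h_bits * math.log10(2)    # bits -> bans: multiply by log10 2 (≈ 0.903)
```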
4. Enter matrix values
Fill every visible cell with non-negative values. Blank cells default to zero, which is useful for sparse distributions.
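One hypothetical way the blank-means-zero rule could be implemented; parse_cell is an invented name for illustration, not the calculator's actual code:

```python
def parse_cell(raw: str) -> float:
    """Parse one grid cell: blanks become 0, negatives are rejected."""
    value = float(raw) if raw.strip() else 0.0
    if value < 0:
        raise ValueError(f"cells must be non-negative, got {value}")
    return value

row = [parse_cell(s) for s in ["12", "", "5"]]   # -> [12.0, 0.0, 5.0]
```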
5. Calculate and review
Submit the form to display the result above the calculator. Review entropy, marginals, conditionals, mutual information, and the chart.
6. Export the output
Use the CSV button for structured data or the PDF button to save a clean report of the result section.
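As a rough sketch of what a CSV export like this can contain, the result quantities can be written as metric/value rows. The file name, column layout, and numbers below are placeholders, not actual calculator output.

```python
import csv

# Placeholder values for illustration only.
results = {"H(X,Y)": 3.05, "H(X|Y)": 1.48, "H(Y|X)": 1.50, "I(X;Y)": 0.07}

with open("joint_entropy_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["metric", "value"])
    for name, value in results.items():
        writer.writerow([name, f"{value:.4f}"])
```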
Frequently asked questions
1. What does joint entropy measure?
Joint entropy measures uncertainty across two variables at the same time. It becomes larger when outcomes are more spread out across many paired states.
2. What is the difference between counts and probabilities?
Counts are raw observations. Probabilities already represent normalized shares. When you enter counts, the calculator converts them into probabilities before evaluating entropy formulas.
3. Which log base should I use?
Use base 2 for bits, the natural log for nats, and base 10 for bans. Choose the base that matches your analysis standard.
4. Can the matrix contain zeros?
Yes. Zero entries are valid and simply add no entropy contribution. They are useful when certain paired outcomes are impossible or unobserved.
5. Why is mutual information shown too?
Mutual information reveals how much one variable tells you about the other. It complements joint entropy by highlighting dependence rather than raw uncertainty alone.
6. What does normalized joint entropy mean?
Normalized joint entropy divides observed joint entropy by the maximum possible entropy for the same matrix size. This produces a scale between zero and one.
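For an m-by-n grid the maximum is log_b(m·n), reached by the uniform joint distribution, so the normalization is a single division. A minimal sketch, with illustrative names:

```python
import math

def normalized_joint_entropy(h_xy, n_x, n_y, base=2.0):
    """H(X,Y) divided by its maximum, log_b(n_x * n_y)."""
    return h_xy / math.log(n_x * n_y, base)

# A uniform 3x3 joint distribution has H(X,Y) = log2(9) bits,
# so its normalized joint entropy is exactly 1.
print(normalized_joint_entropy(math.log2(9), 3, 3))   # -> 1.0
```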
7. Why would probability inputs need renormalization?
Manual entries can sum to slightly above or below one due to rounding or data entry errors. Renormalization rescales them into a valid probability distribution.
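Renormalization is the same rescaling applied to counts: divide every cell by the grid total. A minimal sketch:

```python
def renormalize(matrix):
    """Rescale non-negative entries so the grid sums to exactly 1."""
    total = sum(sum(row) for row in matrix)
    if total <= 0:
        raise ValueError("matrix needs at least one positive entry")
    return [[p / total for p in row] for row in matrix]

# Entries that drift to a 1.02 total through rounding become a valid distribution:
fixed = renormalize([[0.26, 0.25], [0.26, 0.25]])
```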
8. Does a larger matrix always mean larger entropy?
Not always. A larger matrix allows more possible states, but entropy only grows when probability mass is actually spread across those states.
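A quick numeric check of this point: a 3x3 matrix with most of its mass on one cell carries less entropy than a uniform 2x2, despite having more cells.

```python
import math

def h(probs, base=2.0):
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(h([0.25] * 4))              # uniform 2x2: 2.0 bits
print(h([0.92] + [0.01] * 8))     # peaked 3x3: ≈ 0.64 bits
```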