Enter a transition matrix to find its long-run behavior. The calculator uses both direct linear solving and iterative convergence checks, lets you download tables and share outputs, and verifies row sums quickly.
For a Markov chain with transition matrix P, the stationary distribution π is a probability row vector that satisfies:

π = πP, with Σ πᵢ = 1
This page computes π in two ways: (1) a linear system built from P, plus a sum constraint, and (2) power iteration πₖ₊₁ = πₖP until convergence.
This example uses three states. It is symmetric around state S2, so the long run probability peaks at S2.
| | To S1 | To S2 | To S3 |
|---|---|---|---|
| From S1 | 0.5 | 0.5 | 0 |
| From S2 | 0.25 | 0.5 | 0.25 |
| From S3 | 0 | 0.5 | 0.5 |
The stationary distribution for this matrix is exactly [0.25, 0.50, 0.25].
A finite Markov chain models state-to-state changes using probabilities. The stationary distribution describes the long run fraction of time the system spends in each state, after many transitions. In practical terms, it is the equilibrium probability profile of repeated random updates.
The input matrix must be row-stochastic: every entry is nonnegative, and each row sums to one. A row represents the probabilities of leaving a state and landing in each destination state. If a row sum is off, the calculator rejects the matrix to preserve its probabilistic interpretation.
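The row-stochastic check described above can be sketched in a few lines of NumPy; the function names here are illustrative, not the page's actual code.

```python
import numpy as np

def is_row_stochastic(P, tol=1e-9):
    """True if every entry is nonnegative and each row sums to one (within tol)."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

def normalize_rows(P):
    """Divide each row by its sum; a zero-sum row cannot be normalized."""
    P = np.asarray(P, dtype=float)
    sums = P.sum(axis=1, keepdims=True)
    if np.any(sums == 0):
        raise ValueError("a row sums to zero and cannot be normalized")
    return P / sums
```

For the example matrix above, `is_row_stochastic` returns `True`; a matrix with a row like `[0.3, 0.3, 0.3]` would be rejected or repaired by `normalize_rows`.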
The stationary vector π satisfies π = πP with Σπi=1. When the chain is irreducible and aperiodic, this solution is unique and represents the limiting distribution. Reducible chains can have multiple stationary solutions, depending on communicating classes.
The linear approach rewrites the condition as (Pᵀ − I)πᵀ = 0, then replaces one equation with the normalization constraint. This yields a solvable system of n equations. It is fast for small matrices and provides a direct solution up to rounding error.
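A minimal sketch of this linear solve, assuming NumPy; replacing the last equation with the normalization row is one common convention (any row works in exact arithmetic).

```python
import numpy as np

def stationary_linear(P):
    """Solve (P^T - I) pi^T = 0 with one equation replaced by sum(pi) = 1."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0          # replace the last equation with the normalization constraint
    b = np.zeros(n)
    b[-1] = 1.0             # right-hand side: sum of probabilities equals 1
    return np.linalg.solve(A, b)
```

For the three-state example above, this returns [0.25, 0.5, 0.25] up to rounding error.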
Power iteration updates a probability row vector using πk+1 = πkP. Starting from a uniform distribution, the sequence often converges to the stationary distribution for well-behaved chains. Slow mixing chains may need higher iteration limits or tighter tolerances.
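Power iteration can be sketched as follows (a NumPy illustration under the stated stopping rule, not the page's actual implementation): start uniform, multiply by P, and stop when the L1 change between successive vectors falls below the tolerance.

```python
import numpy as np

def stationary_power(P, tol=1e-10, max_iter=10_000):
    """Iterate pi_{k+1} = pi_k P from a uniform start until the L1 change is below tol."""
    P = np.asarray(P, dtype=float)
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform starting distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi  # may not have converged; check the residual before trusting it
```

Slow-mixing chains may exhaust `max_iter` before the tolerance is met, which is why the residual check below matters.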
The calculator reports an L1 residual measuring how close the output is to satisfying π=πP. A small residual indicates numerical consistency. If the iterative residual stays large, the chain may be periodic, nearly decomposable, or the tolerance may be too strict for the chosen iteration cap.
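The L1 residual is simply the sum of absolute differences between πP and π; a one-line sketch (illustrative, assuming NumPy):

```python
import numpy as np

def l1_residual(pi, P):
    """L1 norm of pi P - pi; a value near zero means pi is numerically stationary."""
    pi = np.asarray(pi, dtype=float)
    return float(np.abs(pi @ np.asarray(P, dtype=float) - pi).sum())
```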
Each probability can be read as a long-run occupancy rate. For example, π2=0.50 suggests the system spends about 50% of steps in state S2 in steady operation. Converting to percentages helps compare dominant states and identify bottlenecks or preferred modes.
Stationary Markov models appear across physics: random walks on lattices, hopping transport, discrete diffusion, Monte Carlo sampling, and coarse-grained energy-state transitions. In materials and statistical physics, the stationary distribution approximates equilibrium populations. In networked systems, it summarizes steady traffic through nodes or configurations under probabilistic rules.
Each row must represent a valid probability distribution. Use the “Normalize rows” button, then recheck for negative values. If a row sum is zero, revise the row because it cannot be normalized meaningfully.
Small differences come from rounding and stopping criteria. The linear solve is direct, while iteration stops when change falls below tolerance. If the chain is reducible or periodic, iteration may not settle cleanly.
It measures how close the computed vector is to satisfying π=πP. Values near zero indicate a consistent stationary solution. Larger values suggest insufficient iterations, numerical sensitivity, or a chain structure that complicates convergence.
This implementation supports 2 to 10 states for clarity and stability. For larger chains, use specialized numerical libraries or sparse methods, then validate results with a residual check like the one shown here.
Uniqueness is guaranteed when the chain is irreducible and aperiodic. In that case, the long-run distribution does not depend on the starting state. Reducible chains can have multiple stationary distributions.
Start with 1e-10 for stable chains. If convergence is slow, relax to 1e-8 or increase max iterations. Always compare the final residual: it is a better indicator than iteration count alone.
This calculator targets discrete-time chains. For continuous-time Markov processes, first build a discrete transition matrix from a time step, or solve the steady-state of a rate matrix with appropriate normalization.
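For the continuous-time case, solving the steady state of a rate matrix Q (rows summing to zero) follows the same replace-one-equation pattern, since πQ = 0 plays the role of π(P − I) = 0. A hedged sketch, assuming NumPy; `ctmc_stationary` is an illustrative name, not part of this calculator:

```python
import numpy as np

def ctmc_stationary(Q):
    """Steady state of a rate matrix Q (rows sum to 0): solve pi Q = 0 with sum(pi) = 1."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0          # replace one equation with the normalization constraint
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For a two-state chain with rates 1 (out of state 0) and 2 (out of state 1), this gives π = [2/3, 1/3], the expected balance between the two rates.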
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.