Calculator
Example data table
This sample is symmetric and positive definite. It commonly appears in discretized diffusion and Poisson-like models.
| Matrix A | Vector b | Expected solution x |
|---|---|---|
| 4 1 0<br>1 3 1<br>0 1 2 | 6<br>10<br>8 | 1<br>2<br>3 |
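The tabulated solution can be verified directly; a minimal NumPy sketch (the variable names are illustrative):

```python
import numpy as np

# Example system from the table above
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([6.0, 10.0, 8.0])
x = np.array([1.0, 2.0, 3.0])

# A is symmetric, and positive definiteness shows in its eigenvalues
assert np.allclose(A, A.T)
assert np.all(np.linalg.eigvalsh(A) > 0)

# The expected solution satisfies A x = b
assert np.allclose(A @ x, b)
```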
Formula used
The solver targets the linear system A x = b, where A is symmetric positive definite. It iteratively reduces the residual rₖ = b − A xₖ.
- Preconditioned residual: zₖ = M⁻¹ rₖ, where M = diag(A) for Jacobi.
- Search direction: pₖ = zₖ + βₖ pₖ₋₁, with p₀ = z₀ on the first iteration
- Step size: αₖ = (rₖᵀ zₖ) / (pₖᵀ A pₖ)
- Update: xₖ₊₁ = xₖ + αₖ pₖ
- Residual update: rₖ₊₁ = rₖ − αₖ A pₖ
- Direction scaling: βₖ = (rₖᵀ zₖ) / (rₖ₋₁ᵀ zₖ₋₁)
The energy column logs 0.5·xᵀAx − bᵀx, the quadratic objective that CG minimizes. In exact arithmetic it decreases monotonically for SPD systems, so it should trend downward for well-behaved inputs.
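The formulas above can be sketched as one compact preconditioned CG loop. This is a dense illustration of the method, not the calculator's internal code; the `jacobi` flag and the energy log mirror the options and columns described in this section:

```python
import numpy as np

def pcg(A, b, x0=None, tol=1e-8, max_iter=200, jacobi=True):
    """Preconditioned conjugate gradient for SPD A (dense sketch)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    M_inv = 1.0 / np.diag(A) if jacobi else np.ones(n)  # M = diag(A)
    r = b - A @ x
    z = M_inv * r                      # preconditioned residual
    p = z.copy()                       # first search direction
    rz = r @ z
    energy = [0.5 * x @ A @ x - b @ x]
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):  # relative stop
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step size
        x = x + alpha * p              # solution update
        r = r - alpha * Ap             # residual update
        z = M_inv * r                  # preconditioned residual
        rz_new = r @ z
        beta = rz_new / rz             # direction scaling
        p = z + beta * p               # next search direction
        rz = rz_new
        energy.append(0.5 * x @ A @ x - b @ x)
    return x, energy

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([6.0, 10.0, 8.0])
x, energy = pcg(A, b)
# Converges to the tabulated solution [1, 2, 3]
```

For a 3×3 system CG reaches the exact solution in at most three iterations (up to rounding), and the logged energy decreases at every step.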
How to use this calculator
- Enter the square matrix A using one row per line.
- Enter vector b with matching length n.
- Optional: set x₀, tolerance, and max iterations.
- Choose Jacobi preconditioning for tougher systems.
- Click Solve system to compute x and diagnostics.
- Download CSV or PDF after a successful run.
Professional notes on conjugate gradient workflows
1) Physics systems that match conjugate gradient
Conjugate gradient solves A x = b efficiently when A is symmetric and positive definite. In physics this covers Poisson pressure projection, diffusion and heat conduction, many linear elasticity steps, and implicit time integrators that produce SPD operators.
2) Scale and sparsity you see in real models
Grid and mesh discretizations create sparse matrices. A 256×256 2D grid has 65,536 unknowns, and a five-point Laplacian stores about 5 nonzeros per row. A 128³ 3D grid reaches 2,097,152 unknowns, often with 7–27 nonzeros per row depending on the stencil.
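The sparsity of a five-point Laplacian is easy to inspect; a sketch with SciPy, assembling the 2D operator from 1D pieces via Kronecker products (the grid size matches the 256×256 example above):

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    """Five-point Laplacian on an n x n grid (Dirichlet boundaries)."""
    main = 4.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    T = sp.diags([off, main, off], [-1, 0, 1])   # 1D tridiagonal block
    I = sp.identity(n)
    # Within-row couplings plus couplings to the rows above and below
    return sp.kron(I, T) + sp.kron(sp.diags([off, off], [-1, 1]), I)

A = laplacian_2d(256)
print(A.shape[0])           # 65536 unknowns
print(A.nnz / A.shape[0])   # just under 5 nonzeros per row (boundary rows have fewer)
```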
3) What one iteration costs
Each CG iteration is dominated by a matrix–vector multiply plus a few dot products and vector updates. For sparse operators, the work is O(nnz), and performance is usually bandwidth-limited rather than compute-limited. That makes CG attractive for CPU and GPU physics pipelines.
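A back-of-envelope operation count makes the O(nnz) claim concrete for the 256×256 grid mentioned above; the constants here are rough assumptions, not measurements:

```python
# Rough flop count for one CG iteration on a 256 x 256 five-point grid.
n = 256 * 256              # unknowns
nnz = 5 * n                # ~5 stored entries per row for the stencil
spmv = 2 * nnz             # one multiply and one add per stored entry
dots_and_axpys = 10 * n    # two dot products plus three axpy-style updates
total = spmv + dots_and_axpys
print(total)               # on the order of a million flops per iteration
```

Since each of those flops touches memory that rarely fits in cache, bandwidth rather than arithmetic throughput usually sets the iteration time.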
4) Conditioning explains most convergence behavior
Convergence speed depends on the condition number κ(A). As meshes refine, κ(A) often grows and iterations rise. A useful rule is that iterations scale with about √κ(A) for many SPD problems, so preconditioning becomes important as resolution and coefficient contrast increase.
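The growth of κ(A) under refinement is visible even on a tiny 1D Laplacian, whose condition number scales like O(n²); a small check (sizes chosen for illustration):

```python
import numpy as np

def lap1d(n):
    """Standard 1D Laplacian: 2 on the diagonal, -1 on the off-diagonals."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for n in (16, 32, 64):
    print(n, round(np.linalg.cond(lap1d(n)), 1))
# Doubling n roughly quadruples kappa, so CG iteration counts
# (which scale like sqrt(kappa)) roughly double per refinement.
```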
5) Jacobi preconditioning as a baseline
Jacobi uses M = diag(A) and applies M⁻¹ cheaply, costing O(n) per iteration. It helps when diagonal entries reflect local stiffness or material variation, and it is a safe first upgrade before moving to stronger methods.
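Jacobi's effect is easiest to see through the equivalent symmetric diagonal scaling: preconditioned CG behaves like plain CG on M⁻¹ᐟ² A M⁻¹ᐟ². A toy SPD system with large diagonal contrast (the matrix is illustrative, standing in for strong material variation):

```python
import numpy as np

# SPD matrix whose diagonal spans four orders of magnitude
A = np.diag([1.0, 100.0, 10000.0]) + 0.1 * np.ones((3, 3))

# Symmetric Jacobi scaling: entry (i, j) divided by sqrt(d_i * d_j)
d = np.diag(A)
scaled = A / np.sqrt(np.outer(d, d))

print(round(np.linalg.cond(A)))          # thousands: plain CG crawls
print(round(np.linalg.cond(scaled), 2))  # near 1: preconditioned CG flies
```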
6) Stopping rules that align with accuracy needs
Relative stopping (||r|| ≤ tol·||b||) is common because it adapts to problem scale. Tolerances near 1e-6 are typical for quick checks, while 1e-8 to 1e-10 are used for higher fidelity. Avoid over-solving when discretization error dominates your model uncertainty.
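The point of relative stopping is that the same absolute residual can mean convergence or not depending on the scale of b; a minimal sketch (`converged` is an illustrative helper, not part of the calculator):

```python
import numpy as np

def converged(r, b, tol=1e-6):
    """Relative stopping: ||r|| <= tol * ||b|| adapts to problem scale."""
    return np.linalg.norm(r) <= tol * np.linalg.norm(b)

# Identical residual, different right-hand-side scales:
r = np.array([1e-5, 0.0])
assert not converged(r, np.array([1.0, 0.0]))   # ||b|| = 1: keep iterating
assert converged(r, np.array([1e3, 0.0]))       # ||b|| = 1000: done
```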
7) Diagnostics that reveal solver health
The residual and relative residual show progress, while α and β reflect step geometry. The logged energy 0.5·xᵀAx − bᵀx often trends downward for clean SPD inputs. Persistent oscillations or stalls can indicate poor scaling, coefficient extremes, or a non-SPD matrix.
8) Exportable data supports reproducible science
Use CSV to plot convergence curves, compare preconditioners, and archive solver settings alongside experiments. The PDF snapshot is useful for sharing a run summary with collaborators. Recording A, b, tolerance, and iteration history makes numerical results easier to verify and repeat.
FAQs
1) What kinds of problems should use conjugate gradient?
Use it for symmetric positive definite systems, common in diffusion, Poisson, and many elasticity formulations. It is especially effective for sparse matrices from grid or mesh discretizations.
2) Why do symmetry and positive definiteness matter?
CG relies on orthogonality properties and energy minimization that hold for SPD matrices. If A is not SPD, convergence can slow, become unstable, or fail due to breakdown.
3) How should I choose a tolerance?
Start with 1e-6 for exploratory work and 1e-8 to 1e-10 for production accuracy. Use relative stopping when the right-hand side sets the problem scale.
4) When should I enable Jacobi preconditioning?
Enable it when diagonal dominance is present or residual reduction is slow. Jacobi is cheap and can meaningfully reduce iterations for many finite difference operators.
5) What does the “energy” column tell me?
It tracks 0.5 xᵀAx − bᵀx, a quadratic objective for SPD systems. A mostly decreasing trend suggests stable progress and consistent inputs.
6) Why do I see little improvement after many iterations?
Stalling can come from poor scaling, a high condition number, or a non-SPD matrix. Try Jacobi, re-scale variables, verify symmetry, or use a different solver class.
7) Can this tool handle very large sparse matrices?
This web form is best for small to moderate examples and validation. For large sparse systems, integrate CG with sparse storage and optimized operators in your simulation code.