Solve two-variable quadratic models using gradients, Hessians, and iterative updates. Review convergence paths, classify stationary points, and export polished results for deeper analysis and confident reporting.
Optimize the quadratic function f(x,y) = ax² + by² + cxy + dx + ey + f using gradient descent, momentum, or Newton updates.
| Function | Start Point | Method | Expected Stationary Point | Classification |
|---|---|---|---|---|
| f(x,y) = x² + y² - 6x + 4y + 5 (a=1, b=1, c=0, d=-6, e=4, f=5) | (8, -8) | Gradient Descent (α=0.10, tol=10⁻⁶) | (3, -2) | Local minimum (strictly convex) |
| f(x,y) = x² + y² - 6x + 4y + 5 | (8, -8) | Newton (damping factor 1) | (3, -2), reached in one step | Local minimum (strictly convex) |
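The gradient-descent row above can be reproduced with a few lines of plain Python. This is a minimal sketch, not the calculator's own implementation; the loop and variable names are illustrative:

```python
# Sketch: gradient descent on f(x, y) = x^2 + y^2 - 6x + 4y + 5
# from (8, -8) with alpha = 0.10 and gradient-norm tolerance 1e-6.
def grad(x, y):
    # Gradient of f: [2x - 6, 2y + 4]
    return (2 * x - 6, 2 * y + 4)

x, y = 8.0, -8.0
alpha, tol = 0.10, 1e-6
for _ in range(10_000):
    gx, gy = grad(x, y)
    if (gx * gx + gy * gy) ** 0.5 < tol:
        break
    x, y = x - alpha * gx, y - alpha * gy

print(round(x, 4), round(y, 4))  # converges to the stationary point (3, -2)
```

With this step size each gradient component shrinks by a factor 0.8 per iteration, so convergence takes on the order of a hundred steps rather than one.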
This calculator works with the quadratic objective
f(x,y) = ax² + by² + cxy + dx + ey + f.
The gradient is
∇f(x,y) = [2ax + cy + d, 2by + cx + e].
The Hessian matrix is
H = [[2a, c], [c, 2b]].
It stays constant for quadratic objectives.
The stationary point solves
H [x, y]^T = -[d, e]^T,
provided det(H) ≠ 0.
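Because H is 2x2, the linear system above has a closed-form solution via the explicit inverse of H. A minimal sketch (the function name `stationary_point` is illustrative):

```python
# Sketch: solve H [x, y]^T = -[d, e]^T for the quadratic
# f(x, y) = a x^2 + b y^2 + c x y + d x + e y + f, assuming det(H) != 0.
def stationary_point(a, b, c, d, e):
    det = 4 * a * b - c * c        # det(H) with H = [[2a, c], [c, 2b]]
    if det == 0:
        raise ValueError("Hessian is singular; no unique stationary point")
    # H^{-1} = (1/det) [[2b, -c], [-c, 2a]], applied to -[d, e]^T
    x = (-2 * b * d + c * e) / det
    y = (c * d - 2 * a * e) / det
    return x, y

print(stationary_point(1, 1, 0, -6, 4))  # (3.0, -2.0)
```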
The second-order classification uses the Hessian determinant:
if det(H) > 0 and 2a > 0, the point is a local minimum.
If det(H) > 0 and 2a < 0, it is a local maximum.
If det(H) < 0, the point is a saddle point. If det(H) = 0, the second-order test is inconclusive.
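The determinant test can be written directly as code. A sketch; the degenerate det(H) = 0 branch is reported as inconclusive:

```python
# Sketch: second-order classification of the stationary point
# using det(H) = 4ab - c^2 and the sign of 2a.
def classify(a, b, c):
    det = 4 * a * b - c * c
    if det > 0:
        return "local minimum" if 2 * a > 0 else "local maximum"
    if det < 0:
        return "saddle point"
    return "degenerate: det(H) = 0, second-order test is inconclusive"

print(classify(1, 1, 0))    # local minimum
print(classify(-1, -1, 0))  # local maximum
print(classify(1, -1, 0))   # saddle point
```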
Update rules:
- Gradient descent: (x, y) ← (x, y) - α ∇f(x, y)
- Momentum: v ← β v - α ∇f(x, y), then (x, y) ← (x, y) + v
- Newton: (x, y) ← (x, y) - γ H⁻¹ ∇f(x, y), with damping factor γ (γ = 1 gives the full step)
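Assuming the standard forms of the three methods (the momentum coefficient β = 0.9 below is an illustrative choice, not a value taken from the calculator), one step of each rule looks like:

```python
# Sketch: one step of each update rule for the quadratic objective.
def gradient(a, b, c, d, e, x, y):
    return (2 * a * x + c * y + d, 2 * b * y + c * x + e)

def gd_step(p, g, alpha):
    # (x, y) <- (x, y) - alpha * grad
    return (p[0] - alpha * g[0], p[1] - alpha * g[1])

def momentum_step(p, v, g, alpha, beta=0.9):
    # v <- beta * v - alpha * grad;  (x, y) <- (x, y) + v
    v = (beta * v[0] - alpha * g[0], beta * v[1] - alpha * g[1])
    return (p[0] + v[0], p[1] + v[1]), v

def newton_step(a, b, c, p, g, damping=1.0):
    # (x, y) <- (x, y) - damping * H^{-1} grad, H = [[2a, c], [c, 2b]]
    det = 4 * a * b - c * c
    dx = (2 * b * g[0] - c * g[1]) / det
    dy = (2 * a * g[1] - c * g[0]) / det
    return (p[0] - damping * dx, p[1] - damping * dy)

# One undamped Newton step reaches the example's minimum exactly:
p = (8.0, -8.0)
g = gradient(1, 1, 0, -6, 4, p[0], p[1])
print(newton_step(1, 1, 0, p, g))  # (3.0, -2.0)
```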
The calculator optimizes a two-variable quadratic objective function. The page computes gradients, Hessian properties, stationary points, convergence history, and the final numerical estimate from your chosen method.
The Hessian measures curvature. Its determinant, trace, and eigenvalues help classify the stationary point and reveal whether the surface is convex, concave, saddle-shaped, or ill-conditioned.
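For a 2x2 symmetric Hessian, the determinant, trace, and eigenvalues all have closed forms. A sketch (the function name `hessian_summary` is illustrative):

```python
# Sketch: curvature summaries of H = [[2a, c], [c, 2b]].
import math

def hessian_summary(a, b, c):
    det = 4 * a * b - c * c
    trace = 2 * a + 2 * b
    # Eigenvalues of a symmetric 2x2 matrix: (tr +/- sqrt(tr^2 - 4 det)) / 2;
    # the discriminant equals (2a - 2b)^2 + 4c^2, so it is never negative.
    disc = math.sqrt(trace * trace - 4 * det)
    lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
    return det, trace, (lam1, lam2)

print(hessian_summary(1, 1, 0))  # (4, 4, (2.0, 2.0))
```

Both eigenvalues positive means convex (local minimum), both negative means concave (local maximum), and mixed signs mean a saddle.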
Use gradient descent when you want a simple, controlled iterative path. It is easy to tune, but it may require many iterations when the curvature is much steeper in one direction than another.
Momentum carries part of the previous step forward. This can reduce zigzagging and often speeds movement across narrow valleys, especially when the objective is elongated.
Newton uses second-order curvature information. For a well-behaved quadratic with an invertible Hessian, it can reach the stationary point in very few steps, sometimes one.
The condition number estimates sensitivity. A large value means the surface is stretched or nearly singular, which can make iterative progress slower and numerical updates less stable.
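One common estimate treats the condition number as the ratio of the Hessian's largest to smallest eigenvalue magnitudes. A sketch under that assumption:

```python
# Sketch: condition number of H = [[2a, c], [c, 2b]] as
# |lambda_max| / |lambda_min|.
import math

def condition_number(a, b, c):
    trace = 2 * a + 2 * b
    det = 4 * a * b - c * c
    disc = math.sqrt(trace * trace - 4 * det)  # >= 0 for symmetric H
    lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
    if min(abs(lam1), abs(lam2)) == 0:
        return float("inf")  # singular Hessian
    return max(abs(lam1), abs(lam2)) / min(abs(lam1), abs(lam2))

print(condition_number(1, 1, 0))   # 1.0  (perfectly round bowl)
print(condition_number(50, 1, 0))  # 50.0 (strongly elongated valley)
```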
Divergence usually means the learning rate is too large, the Hessian is singular for Newton updates, or the chosen method is unstable for the supplied curvature and starting point.
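A toy illustration of the too-large learning rate: for f(x, y) = x² + y², the largest Hessian eigenvalue is 2, and gradient descent diverges once α exceeds 2/λ_max = 1. The values below are illustrative:

```python
# Sketch: with alpha > 2 / lambda_max, each gradient-descent step
# multiplies the iterate by (1 - 2 * alpha), whose magnitude exceeds 1.
def step(x, y, alpha):
    # Gradient of x^2 + y^2 is (2x, 2y)
    return x - alpha * 2 * x, y - alpha * 2 * y

x, y = 1.0, 1.0
for _ in range(20):
    x, y = step(x, y, alpha=1.1)  # 1.1 > 2 / 2 = 1, so the iterate grows

print(abs(x) > 1.0)  # True: moving away from the minimum at (0, 0)
```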
This implementation is designed for quadratic unconstrained optimization in two variables only. That restriction allows exact Hessian analysis, stationary-point classification, and a clear contour plot.
Important Note: All the calculators listed on this site are for educational purposes only and we do not guarantee the accuracy of results. Please consult other sources as well.