1) What does simulated annealing solve?
It searches for a low objective value when a function has many local minima. The method balances random exploration with gradual cooling to improve the chance of finding a near-global minimum.
Tune parameters, inspect convergence, and study search paths. Visualize temperature decay and objective improvement clearly. Solve hard landscapes with flexible controls and exportable reports.
This solver minimizes benchmark objective functions using probabilistic uphill acceptance and multiple cooling schedules, with iteration tracking, CSV and PDF export, and Plotly visualization.
Use benchmark functions, choose bounds, set thermal controls, and run a configurable simulated annealing search.
Sample optimization history for a two-variable Rastrigin search. Values below are illustrative and help users understand output structure.
| Iteration | Temperature | Current Objective | Best Objective | Accepted | x1 | x2 |
|---|---|---|---|---|---|---|
| 1 | 100.0000 | 47.2861 | 47.2861 | Yes | 4.2000 | -3.5000 |
| 25 | 92.0000 | 21.6418 | 18.5924 | Yes | 2.1700 | -1.1900 |
| 70 | 77.8707 | 9.5532 | 7.1089 | No | 1.0050 | -0.8800 |
| 160 | 55.2212 | 2.7413 | 1.9986 | Yes | 0.3100 | -0.0700 |
| 310 | 28.3886 | 0.6930 | 0.4204 | Yes | 0.0580 | -0.0210 |
| 480 | 13.4124 | 0.1288 | 0.0261 | Yes | 0.0100 | -0.0040 |
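A history like the table above can be produced by a compact annealing loop. The sketch below is illustrative rather than the calculator's actual code (the function names, defaults, and uniform proposal offsets are assumptions); it minimizes the two-variable Rastrigin function with exponential cooling:

```python
import math
import random

def rastrigin(x):
    # Classic multimodal benchmark: global minimum 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def anneal(f, x0, bounds, t0=100.0, alpha=0.97, step=0.5, iters=500, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    history = []
    for k in range(1, iters + 1):
        # Propose a nearby candidate, clamped into the bounds.
        cand = [min(max(xi + rng.uniform(-step, step), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        delta = fc - fx
        # Always accept improvements; accept worse moves with prob. exp(-delta/T).
        accepted = delta <= 0 or rng.random() < math.exp(-delta / t)
        if accepted:
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        history.append((k, t, fx, fbest, accepted))
        t *= alpha  # exponential cooling
    return best, fbest, history

best, fbest, hist = anneal(rastrigin, [4.2, -3.5], [(-5.12, 5.12)] * 2)
```

Each tuple in `history` mirrors one table row: iteration, temperature, current objective, best objective, and whether the proposal was accepted.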
For each variable, the calculator proposes a nearby candidate using:
x′ᵢ = clamp(xᵢ + uᵢ, lower, upper)
where each random offset uᵢ is sampled from [−step size, +step size].
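A minimal sketch of this proposal step, assuming the offsets are uniform (the name `propose` and the use of `random.Random` are illustrative):

```python
import random

def propose(x, step, bounds, rng):
    # One uniform offset per coordinate, clamped into the box constraints.
    return [min(max(xi + rng.uniform(-step, step), lo), hi)
            for xi, (lo, hi) in zip(x, bounds)]

cand = propose([4.9, -4.9], 0.5, [(-5.12, 5.12)] * 2, random.Random(0))
```

Because of the clamp, a candidate near the boundary stays feasible instead of leaving the search box.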
The objective function value acts like energy in annealing:
Δ = f(x′) − f(x)
If Δ ≤ 0, the new point is always accepted because it improves or matches the current solution.
When the candidate is worse, the method may still accept it to escape local minima:
P(accept) = exp(−Δ / T)
Here, T is the current temperature. Higher temperatures allow more uphill moves.
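The acceptance rule above (the Metropolis criterion) fits in a few lines; this sketch assumes the same Δ and T as in the formulas:

```python
import math
import random

def accept(delta, t, rng):
    # Improving or equal moves always pass; worse moves pass with
    # probability exp(-delta / t), which grows with temperature.
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / t)
```

At T = 100 a move that is worse by 1 is accepted with probability exp(−0.01) ≈ 0.99; at T = 0.01 the same move is accepted with probability exp(−100), i.e. essentially never.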
The temperature decreases according to one of three cooling schedules, where α is the cooling parameter:
Exponential: Tₖ₊₁ = α·Tₖ
Linear: Tₖ₊₁ = max(Tₖ − α, Tₘᵢₙ)
Logarithmic: Tₖ = T₀ / ln(2 + αk)
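Closed forms of the three schedules can be sketched as follows (function names are illustrative, and α plays a different role in each schedule):

```python
import math

def exponential(t0, alpha, k):
    # T_{k+1} = alpha * T_k  =>  T_k = t0 * alpha**k
    return t0 * alpha ** k

def linear(t0, alpha, k, t_min=1e-3):
    # T_{k+1} = max(T_k - alpha, T_min)  =>  T_k = max(t0 - alpha*k, t_min)
    return max(t0 - alpha * k, t_min)

def logarithmic(t0, alpha, k):
    # Direct formula: T_k = t0 / ln(2 + alpha*k)
    return t0 / math.log(2 + alpha * k)
```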
Accepting some worse moves prevents the search from getting trapped too early. At higher temperatures, uphill moves are more likely, which improves exploration across rough landscapes.
A higher initial temperature increases exploration and acceptance of uphill moves. Start large enough to allow movement, then reduce gradually until the algorithm settles around strong candidates.
Large steps explore widely but may overshoot good areas. Small steps refine local neighborhoods but can slow global exploration. Good performance usually needs a balanced step size.
Exponential cooling is a practical default because it is simple and stable. It usually works well when you want smooth, predictable temperature decay with limited tuning effort.
The stall limit stops the run after many non-improving iterations. It saves time when the search has effectively converged or when the chosen settings no longer generate useful progress.
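The stall check amounts to a counter that resets on improvement; `run_with_stall_limit` below is a hypothetical helper, not the calculator's actual code:

```python
def run_with_stall_limit(objective_values, stall_limit):
    # Stop once `stall_limit` consecutive evaluations fail to improve
    # on the best objective seen so far.
    best = float("inf")
    stalled = 0
    for i, f in enumerate(objective_values):
        if f < best:
            best, stalled = f, 0
        else:
            stalled += 1
            if stalled >= stall_limit:
                return i, best  # iteration index where the run stopped
    return len(objective_values) - 1, best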
Runs are reproducible: enter the same random seed and keep all other settings unchanged. The pseudo-random sequence then repeats, so the optimization follows the identical path.
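This is standard behavior for seeded pseudo-random generators; a minimal demonstration (the function name is illustrative):

```python
import random

def search_path(seed, steps=5):
    # The same seed replays the identical pseudo-random proposal
    # sequence, so the whole run is reproducible.
    rng = random.Random(seed)
    return [round(rng.uniform(-1, 1), 6) for _ in range(steps)]
```

Calling `search_path(42)` twice returns the same list, while a different seed yields a different path.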
For two-variable problems, the Plotly visualization plots the objective surface and overlays the search path. This helps you see where the algorithm wandered, improved, and finally settled.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.