Fit data with least squares to obtain best-estimate coefficients. Choose a polynomial degree and review the residual metrics. Export results as CSV or PDF for quick reporting.
This sample is slightly noisy, so the fit is not perfect.
| x | y | w (weight) |
|---|---|---|
| 0 | 1.00 | 1 |
| 1 | 2.00 | 1 |
| 2 | 2.80 | 1 |
| 3 | 4.10 | 1 |
| 4 | 5.20 | 1 |
We fit a polynomial model of degree d:
y ≈ c₀ + c₁x + c₂x² + … + c_d x^d
Let A be the design matrix and c the coefficient vector. Least squares minimizes:
min ‖W^(1/2)(Ac − y)‖²
This yields the normal equations:
(AᵀWA)c = AᵀWy
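As an illustration, a minimal NumPy sketch of these normal equations applied to the sample table above might look like the following; the use of NumPy and the variable names are assumptions of this example, not the calculator's own code.

```python
# Minimal sketch: form and solve the weighted normal equations for the
# sample table above (illustrative only, not the calculator's implementation).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.00, 2.00, 2.80, 4.10, 5.20])
w = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
d = 1  # polynomial degree

A = np.vander(x, d + 1, increasing=True)   # columns: x^0, x^1, ..., x^d
W = np.diag(w)

AtWA = A.T @ W @ A                         # AᵀWA
AtWy = A.T @ W @ y                         # AᵀWy
c = np.linalg.solve(AtWA, AtWy)            # coefficients [c0, c1, ...]
print(c)
```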
Least squares begins with paired observations of x and y, representing measured inputs and outcomes. The calculator accepts many rows, supports quick paste, and validates numeric entries; empty rows are ignored. When weights are enabled, each point receives a positive w, so high-confidence measurements, such as sensor readings with known variance, influence the fit more strongly than noisier samples.
Choose a polynomial degree that matches the signal complexity. Degree 1 fits a straight line, degree 2 adds curvature, and higher degrees can chase small fluctuations in the noise. A practical workflow is to start low, review RMSE and the residual pattern, and increase the degree only if the residuals remain structured. The calculator allows degrees 1 through 10 and requires at least d + 1 valid points to avoid an underdetermined system.
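To make that workflow concrete, here is a small sketch that fits the sample data at degrees 1 through 3 and prints the RMSE for each; the loop and the NumPy calls are illustrative assumptions, not part of the calculator.

```python
# Illustrative degree-selection loop: fit the sample data at degrees 1..3
# and compare RMSE before committing to a degree.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.00, 2.00, 2.80, 4.10, 5.20])

for d in range(1, 4):
    A = np.vander(x, d + 1, increasing=True)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ c - y) ** 2))
    print(f"degree {d}: RMSE = {rmse:.4f}")
```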
The solver builds a Vandermonde design matrix A and finds coefficients c that minimize squared error. With weights, it minimizes ‖W^(1/2)(Ac − y)‖², leading to (AᵀWA)c = AᵀWy. Internally, weights are applied by scaling each row by √w. The linear system is solved using Gaussian elimination with partial pivoting, which improves robustness. Still, repeated x values, narrow x ranges, or very high degree can create ill conditioning and unstable coefficients.
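The sketch below mirrors that description under stated assumptions: it scales each row by √w, forms the normal equations, and solves them with a small Gaussian elimination routine using partial pivoting. It illustrates the approach only; it is not the calculator's actual source, and the function and variable names are invented for the example.

```python
# Sketch of the solve path described above: scale each row by sqrt(w), form
# the normal equations, and solve with Gaussian elimination and partial pivoting.
import numpy as np

def solve_pivoted(M, b):
    """Solve M c = b by Gaussian elimination with partial pivoting."""
    M = M.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        # Partial pivoting: swap in the row with the largest |pivot|.
        p = k + int(np.argmax(np.abs(M[k:, k])))
        M[[k, p]], b[[k, p]] = M[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            f = M[i, k] / M[k, k]
            M[i, k:] -= f * M[k, k:]
            b[i] -= f * b[k]
    # Back substitution.
    c = np.zeros(n)
    for k in range(n - 1, -1, -1):
        c[k] = (b[k] - M[k, k + 1:] @ c[k + 1:]) / M[k, k]
    return c

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.00, 2.00, 2.80, 4.10, 5.20])
w = np.ones_like(x)                       # weights from the table
d = 1                                     # polynomial degree

A = np.vander(x, d + 1, increasing=True)  # Vandermonde design matrix
sw = np.sqrt(w)
Aw, yw = A * sw[:, None], y * sw          # scale each row by sqrt(w)
coeffs = solve_pivoted(Aw.T @ Aw, Aw.T @ yw)
print(coeffs)
```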
After solving, the table shows predicted ŷ and residuals y − ŷ for every row, enabling fast diagnostics. SSE summarizes total squared error, RMSE reports typical error magnitude in y units, and weighted SSE emphasizes points with higher w. R² indicates how much variance is explained when y varies meaningfully; if y is nearly constant, R² can be uninformative. Inspect large residuals as potential outliers and compare signs to detect systematic bias or missing nonlinear structure.
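For reference, these diagnostics can be reproduced outside the calculator with a few lines. The snippet below computes residuals, SSE, RMSE, weighted SSE, and R² for a degree-1 fit of the sample data; the NumPy usage and variable names are assumptions of this sketch.

```python
# Illustrative diagnostics for a fitted model: residuals, SSE, RMSE,
# weighted SSE, and R² on the sample data.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.00, 2.00, 2.80, 4.10, 5.20])
w = np.ones_like(x)
d = 1

A = np.vander(x, d + 1, increasing=True)
c, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ c                     # predicted ŷ
resid = y - y_hat                 # residuals y − ŷ
sse = np.sum(resid ** 2)
rmse = np.sqrt(np.mean(resid ** 2))
wsse = np.sum(w * resid ** 2)     # weighted SSE
r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)  # meaningful only if y varies
print(f"SSE={sse:.4f}  RMSE={rmse:.4f}  weighted SSE={wsse:.4f}  R2={r2:.4f}")
```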
Once the model looks reasonable, export results for reporting or audits. The CSV includes coefficients, metrics, and row level predictions suitable for spreadsheets. The PDF provides a summary with the polynomial expression and data lines. For reproducible workflows, keep the same degree, weights, and data ordering, and document preprocessing such as scaling, unit conversions, or filtering. If you refit after removing an outlier, record the rationale and updated metrics.
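If you need a similar export from your own scripts, a hedged sketch of a CSV layout with coefficients, metrics, and per-row predictions could look like the following; the column names, file name, and numeric values are illustrative placeholders, not the calculator's exact format.

```python
# Sketch of a CSV export: coefficients, metrics, and per-row predictions.
# All values below are placeholders for illustration only.
import csv

coeffs = [0.92, 1.05]                       # example coefficient values
metrics = {"SSE": 0.063, "RMSE": 0.112, "R2": 0.994}
rows = [(0, 1.00, 0.92, 0.08), (1, 2.00, 1.97, 0.03)]  # x, y, y_hat, residual

with open("fit_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["coefficient_index", "value"])
    writer.writerows(enumerate(coeffs))
    writer.writerow([])
    writer.writerow(["metric", "value"])
    writer.writerows(metrics.items())
    writer.writerow([])
    writer.writerow(["x", "y", "y_hat", "residual"])
    writer.writerows(rows)
```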
The calculator estimates polynomial coefficients that minimize the squared differences between measured y values and model predictions ŷ, optionally emphasizing points using positive weights.
Start with degree 1, then increase only if residuals show clear curvature. Higher degrees can overfit and make coefficients unstable, especially with few points.
Use weights when some measurements are more reliable than others. Larger weights pull the curve toward those points; keep weights positive and on a consistent scale.
A solve that fails or yields unstable coefficients usually means the system is ill conditioned due to repeated x values, too high a degree, or insufficient variation in x. Reduce the degree or add more diverse data.
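One illustrative way to spot such ill conditioning in your own scripts is to inspect the condition number of AᵀA; the snippet below compares a low and a high degree on the sample data. This check is an assumption of the sketch, not a feature of the calculator.

```python
# Compare the conditioning of the normal equations at a low and a high degree.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
for d in (1, 4):
    A = np.vander(x, d + 1, increasing=True)
    print(f"degree {d}: cond(AᵀA) ≈ {np.linalg.cond(A.T @ A):.3e}")
```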
R² measures explained variance relative to the mean of y. If y barely changes, R² may be misleading, so rely more on RMSE and residual inspection.
CSV exports coefficients, metrics, and row level predictions. PDF exports a formatted report containing the model, key metrics, and a compact data listing.