Estimate sparse coefficients from matrix-based training data. Adjust lambda, iterations, scaling, and intercept handling easily. Visualize fit quality, residual behavior, and coefficient sparsity, and export your results.
This sample is also available through the “Load Example Data” button.
| Row | StudyHours | PracticeSets | Distractions | RevisionDays | Target y |
|---|---|---|---|---|---|
| 1 | 1 | 2 | 0 | 3 | 13.1 |
| 2 | 2 | 1 | 1 | 4 | 15.2 |
| 3 | 3 | 3 | 0 | 5 | 18.0 |
| 4 | 4 | 5 | 2 | 6 | 20.7 |
| 5 | 5 | 4 | 1 | 7 | 22.4 |
| 6 | 6 | 6 | 3 | 8 | 25.6 |
| 7 | 7 | 7 | 2 | 9 | 27.9 |
| 8 | 8 | 5 | 4 | 10 | 29.8 |
This calculator solves the lasso objective with coordinate descent. Lasso adds an L1 penalty to ordinary least squares, which shrinks weak coefficients and can force some exactly to zero.
Here, λ controls the penalty strength. Larger λ values create a sparser model, while smaller values behave more like ordinary least squares.
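To make the procedure concrete, here is a minimal coordinate-descent sketch in plain numpy, run on the sample table above. It is an illustration of the technique, not the calculator's actual implementation; the function names (`soft_threshold`, `lasso_cd`) and the choice λ = 0.5 are ours.

```python
import numpy as np

# Sample data from the table above (StudyHours, PracticeSets, Distractions, RevisionDays).
X = np.array([
    [1, 2, 0, 3], [2, 1, 1, 4], [3, 3, 0, 5], [4, 5, 2, 6],
    [5, 4, 1, 7], [6, 6, 3, 8], [7, 7, 2, 9], [8, 5, 4, 10],
], dtype=float)
y = np.array([13.1, 15.2, 18.0, 20.7, 22.4, 25.6, 27.9, 29.8])

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=500, tol=1e-8):
    """Coordinate descent for min_w (1/2n)||y - Xw||^2 + lam * ||w||_1.
    Features are standardized and y is centered, so no explicit intercept."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = y - y.mean()
    n, p = Xs.shape
    w = np.zeros(p)
    col_sq = (Xs ** 2).sum(axis=0) / n  # equals 1 after standardization
    for _ in range(n_iter):
        w_old = w.copy()
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r_j = yc - Xs @ w + Xs[:, j] * w[j]
            rho = Xs[:, j] @ r_j / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
        if np.max(np.abs(w - w_old)) < tol:
            break
    return w

w = lasso_cd(X, y, lam=0.5)
print(w)  # weak coefficients shrink toward, or exactly to, zero
```

Raising `lam` drives more entries of `w` to exactly zero; a very large penalty zeroes out every coefficient.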
Lasso regression fits a linear model while penalizing the absolute size of coefficients. This penalty shrinks weak terms and can set some coefficients exactly to zero, making the model easier to interpret.
Lambda controls how aggressively the model shrinks coefficients. A small value keeps more variables active. A large value increases sparsity and can simplify the model, but too much shrinkage may reduce predictive accuracy.
Usually yes. Standardization is helpful when variables use different units or ranges. Without scaling, features with larger magnitudes can dominate the penalty behavior and distort coefficient comparisons.
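As a quick illustration of why scaling matters, consider two hypothetical features on very different scales (the income and age values below are made up for this example):

```python
import numpy as np

# Hypothetical features on very different scales: dollars vs. years.
income = np.array([30000.0, 45000.0, 52000.0, 61000.0, 75000.0])
age = np.array([23.0, 31.0, 38.0, 44.0, 52.0])

def standardize(x):
    """Rescale a feature to mean 0 and standard deviation 1."""
    return (x - x.mean()) / x.std()

# Before scaling, a one-unit coefficient on income has a vastly larger
# effect than on age, so the shared L1 penalty treats them unevenly.
print(income.std(), age.std())                            # wildly different spreads
print(standardize(income).std(), standardize(age).std())  # both exactly 1.0
```

After standardization, both features contribute on the same scale, so the penalty shrinks them by comparable amounts.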
That is a normal lasso outcome. The L1 penalty performs soft-thresholding, which removes weak predictors by shrinking them to zero. Those variables are treated as inactive in the final model.
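The soft-thresholding rule itself is a one-line formula, S(z, λ) = sign(z) · max(|z| − λ, 0). A small numeric sketch shows how it shrinks strong coefficients and zeroes out weak ones:

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0) -- the lasso shrinkage rule."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

print(soft_threshold(3.0, 1.0))   # 2.0  -- strong coefficient shrunk but kept
print(soft_threshold(-1.5, 1.0))  # -0.5 -- shrunk toward zero, sign preserved
print(soft_threshold(0.4, 1.0))   # 0.0  -- weak coefficient removed entirely
```

Any coefficient whose magnitude falls below λ is set exactly to zero, which is why variables disappear from the fitted model.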
Yes, but it often chooses one variable from a correlated group and shrinks the others strongly. This is useful for simplification, though the selected feature may change when lambda or the dataset changes.
Increase the maximum iterations, slightly relax the tolerance, or standardize the features. Extremely large penalties, highly collinear inputs, or badly scaled data can slow coordinate descent.
Disable the intercept only when you know your data is already centered appropriately or your modelling setup requires the line to pass through the origin. Most practical datasets should keep the intercept enabled.
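A quick check with ordinary least squares (plain numpy, illustrative only) shows why centered data makes the intercept unnecessary: once features and target are centered, the best-fit intercept is zero.

```python
import numpy as np

# Sample data from the table above.
X = np.array([
    [1, 2, 0, 3], [2, 1, 1, 4], [3, 3, 0, 5], [4, 5, 2, 6],
    [5, 4, 1, 7], [6, 6, 3, 8], [7, 7, 2, 9], [8, 5, 4, 10],
], dtype=float)
y = np.array([13.1, 15.2, 18.0, 20.7, 22.4, 25.6, 27.9, 29.8])

# Center features and target, then fit with an explicit intercept column.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
A = np.column_stack([np.ones(len(yc)), Xc])
coef, *_ = np.linalg.lstsq(A, yc, rcond=None)
print(coef[0])  # the fitted intercept is numerically zero
```

On uncentered data the intercept absorbs the baseline level of y, so dropping it forces the fit through the origin and can badly bias the coefficients.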
The CSV and PDF exports include settings, summary metrics, coefficient details, and all prediction rows. This makes it easier to document model runs, share findings, and keep an audit trail.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.