Tune psi functions, scale estimates, and stopping rules. Review coefficients, diagnostics, and weighted residual patterns. Make better decisions when ordinary least-squares fitting is easily distorted by outliers.
This sample contains one visible high-value point (point 9, Y = 27.8), which helps demonstrate how robust fitting resists distortion better than ordinary least squares.
| Point | X | Y |
|---|---|---|
| 1 | 1 | 2.1 |
| 2 | 2 | 4.2 |
| 3 | 3 | 6.0 |
| 4 | 4 | 8.1 |
| 5 | 5 | 10.2 |
| 6 | 6 | 12.1 |
| 7 | 7 | 14.0 |
| 8 | 8 | 16.3 |
| 9 | 9 | 27.8 |
| 10 | 10 | 20.1 |
Model: y_i = β0 + β1·x_i + ε_i
Residual: r_i = y_i − ŷ_i
Robust scale: s = median(|r_i − median(r)|) / 0.6745
Standardized residual: u_i = r_i / s
M-estimator objective: minimize Σ ρ(u_i)
IRLS weight: w_i = ψ(u_i) / u_i
Weighted update: solve the weighted least-squares line using the current wi values, then repeat until coefficients stabilize.
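The iteration described above can be sketched as a short IRLS loop using NumPy. This is a minimal illustration with the sample table's data and a Huber weight function; the tolerance, iteration cap, and tuning constant are assumptions, not the calculator's exact settings.

```python
import numpy as np

# Sample data from the table above; point 9 (Y = 27.8) is the outlier.
x = np.arange(1, 11, dtype=float)
y = np.array([2.1, 4.2, 6.0, 8.1, 10.2, 12.1, 14.0, 16.3, 27.8, 20.1])

def huber_weight(u, c=1.345):
    """Huber weight w(u) = psi(u)/u: full weight inside c, c/|u| outside."""
    au = np.abs(u)
    return np.where(au <= c, 1.0, c / np.maximum(au, 1e-12))

def irls_line(x, y, weight_fn=huber_weight, tol=1e-8, max_iter=100):
    X = np.column_stack([np.ones_like(x), x])      # intercept + slope columns
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # ordinary LS starting fit
    w = np.ones_like(y)
    for _ in range(max_iter):
        r = y - X @ beta
        # Robust scale: MAD / 0.6745, matching the formula above
        s = np.median(np.abs(r - np.median(r))) / 0.6745
        if s == 0:
            break
        w = weight_fn(r / s)
        # Weighted least-squares update via sqrt-weight rescaling
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, w

beta, w = irls_line(x, y)
```

The fitted slope lands near the 2.0 trend of the nine clean points, and the outlier's final weight is close to zero.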
Huber: keeps full weight for small residuals and clips extreme influence.
Tukey biweight: strongly suppresses far outliers and can reduce their weight to zero.
Cauchy: smoothly reduces influence as residual size increases.
Welsch: aggressively downweights large deviations with an exponential rule.
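The four weight functions can be written directly from their standard definitions. The tuning constants below are the commonly quoted 95%-efficiency defaults, which is an assumption; other references use different values.

```python
import numpy as np

def huber_w(u, c=1.345):
    # Full weight for |u| <= c, then weight falls off as c/|u|.
    au = np.abs(u)
    return np.where(au <= c, 1.0, c / np.maximum(au, 1e-12))

def tukey_w(u, c=4.685):
    # Biweight: zero weight beyond c, so far outliers are rejected outright.
    return np.where(np.abs(u) <= c, (1 - (u / c) ** 2) ** 2, 0.0)

def cauchy_w(u, c=2.385):
    # Smooth rational decay; weight never reaches exactly zero.
    return 1.0 / (1.0 + (u / c) ** 2)

def welsch_w(u, c=2.985):
    # Exponential decay; aggressive but still strictly positive.
    return np.exp(-((u / c) ** 2))
```

Evaluating each at a large standardized residual (say u = 6) shows the qualitative ordering described above: Huber clips, Tukey rejects entirely, and Cauchy and Welsch taper smoothly.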
This calculator fits a regression line while reducing the effect of unusual observations. Instead of treating every residual equally, it reweights points according to the chosen loss function.
Ordinary least squares can tilt heavily toward outliers. A robust fit usually stays closer to the main data pattern when a few points are extreme, miscoded, or highly influential.
The tuning constant determines how quickly a method starts downweighting residuals. Smaller values resist outliers more strongly, while larger values behave more like ordinary least squares.
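A small numeric illustration of this trade-off, using the Huber weight at a fixed standardized residual of u = 3 (the constants tried here are arbitrary examples):

```python
def huber_weight(u, c):
    # Huber weight for a scalar residual: 1 inside the corner c, c/|u| outside.
    au = abs(u)
    return 1.0 if au <= c else c / au

u = 3.0
for c in (1.0, 1.345, 2.0, 3.0):
    print(c, huber_weight(u, c))
```

A smaller constant downweights the same residual sooner and harder, while a constant at or beyond the residual leaves it at full weight, i.e. behaves like ordinary least squares for that point.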
Huber is a balanced default. Tukey is stronger against outliers. Cauchy and Welsch provide smoother damping. A good choice depends on whether you expect mild contamination or severe extreme points.
Flagged points are observations with large standardized residuals or low final weights. They are not automatically wrong, but they deserve inspection because they affect fit stability.
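A flagging rule along these lines can be sketched as below. The specific cutoffs (|u| > 2.5, w < 0.5) are illustrative assumptions, not fixed standards.

```python
import numpy as np

def flag_points(u, w, u_cut=2.5, w_cut=0.5):
    """Indices of points with a large standardized residual or low final weight."""
    return np.where((np.abs(u) > u_cut) | (w < w_cut))[0]

# Hypothetical diagnostics for four points: the third has a huge residual
# and a near-zero IRLS weight, so it is the only one flagged.
u = np.array([0.3, -0.8, 65.0, 1.1])
w = np.array([1.0, 1.0, 0.02, 0.9])
flagged = flag_points(u, w)
print(flagged)  # → [2]
```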
A large difference usually means some observations have strong leverage or unusually large residuals. Robust fitting dampens that influence, so the final slope and intercept may shift toward the central trend.
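The size of that shift is easy to quantify on the sample table: fitting ordinary least squares with and without point 9 shows how much one outlier tilts the slope.

```python
import numpy as np

# Sample data from the table above.
x = np.arange(1, 11, dtype=float)
y = np.array([2.1, 4.2, 6.0, 8.1, 10.2, 12.1, 14.0, 16.3, 27.8, 20.1])

# OLS on all ten points vs. OLS with the outlier (index 8) removed.
slope_all, intercept_all = np.polyfit(x, y, 1)
slope_clean, intercept_clean = np.polyfit(np.delete(x, 8), np.delete(y, 8), 1)
print(slope_all, slope_clean)
```

The full-data OLS slope is pulled noticeably above the roughly 2.0 trend of the other nine points; a robust fit recovers something close to the clean slope without having to delete anything by hand.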
This implementation handles one predictor and one response at a time. It is designed for simple linear robust regression with optional intercept control and detailed diagnostic output.
The reported R² summarizes how much variation the fitted line explains using the final robust predictions. It is useful for comparison, but robust methods should also be judged with weights, residuals, and scale.
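One common convention for a robust R² (an assumption here; definitions vary across software) weights both sums of squares by the final IRLS weights, so heavily downweighted points barely affect the summary:

```python
import numpy as np

def weighted_r2(y, y_hat, w):
    """R-squared with both sums of squares weighted by the final IRLS weights."""
    ybar = np.sum(w * y) / np.sum(w)            # weighted mean of the response
    ss_res = np.sum(w * (y - y_hat) ** 2)       # weighted residual sum of squares
    ss_tot = np.sum(w * (y - ybar) ** 2)        # weighted total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical example: a perfect fit on three points plus one outlier
# whose final weight is zero, so it does not drag the summary down.
y = np.array([2.0, 4.0, 6.0, 100.0])
y_hat = np.array([2.0, 4.0, 6.0, 8.0])
w = np.array([1.0, 1.0, 1.0, 0.0])
r2 = weighted_r2(y, y_hat, w)
print(r2)  # → 1.0
```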