Regularization is a technique for preventing overfitting in machine learning models: it adds a penalty for large coefficients to the loss function, nudging the model toward simpler solutions.
Imagine trying to draw a smooth curve through points. Without regularization, you might end up drawing a super wiggly line that fits every point exactly. With regularization, you're encouraged to draw a simpler, smoother line.
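To make the analogy concrete, here is a minimal sketch of the wiggly-vs-smooth fit, assuming scikit-learn and NumPy are available (the polynomial degree and penalty strength are arbitrary illustrative choices, not values from the text):

```python
# A minimal sketch of the "wiggly line vs. smooth line" analogy.
# Assumptions: scikit-learn and NumPy installed; degree=12 and alpha=0.01
# are illustrative choices, not tuned values.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 15)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 15)

# High-degree polynomial, no regularization: chases every noisy point.
wiggly = make_pipeline(PolynomialFeatures(degree=12), LinearRegression()).fit(X, y)

# Same polynomial with an L2 penalty (Ridge): a smoother curve.
smooth = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=0.01)).fit(X, y)

print("unregularized train R^2:", wiggly.score(X, y))  # near 1.0: fits the noise
print("regularized train R^2:  ", smooth.score(X, y))  # lower on train, smoother fit
```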
L1 regularization (Lasso) adds the sum of the absolute values of the coefficients to the loss.
Formula:
$$\text{Loss} = \text{Original Loss} + \lambda \sum_i |w_i|$$
L1 tends to make some coefficients exactly zero → helps with feature selection.
Imagine a dataset with 100 features, but only 10 really matter. L1 can shrink the other 90 coefficients to 0, simplifying the model.
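Here is a minimal sketch of that 100-feature scenario, assuming scikit-learn (the synthetic dataset and `alpha`, which plays the role of λ, are illustrative assumptions):

```python
# A minimal sketch of L1-driven feature selection, assuming scikit-learn.
# The dataset mirrors the example above: 100 features, only 10 informative.
# alpha=1.0 is an arbitrary illustrative penalty strength.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=100, n_informative=10,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)

# Count how many coefficients the L1 penalty pushed exactly to zero.
n_zero = np.sum(lasso.coef_ == 0)
print(f"{n_zero} of 100 coefficients shrunk exactly to 0")
```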
L2 regularization (Ridge) adds the sum of the squares of the coefficients to the loss.
Formula:
$$\text{Loss} = \text{Original Loss} + \lambda \sum_i w_i^2$$
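Unlike L1, the squared penalty shrinks weights smoothly toward zero but rarely makes any of them exactly zero. A minimal sketch contrasting the two on the same synthetic data, again assuming scikit-learn (the alpha values are illustrative):

```python
# A minimal sketch contrasting L2 (Ridge) with L1 (Lasso).
# Assumptions: scikit-learn installed; alpha=1.0 is an arbitrary choice.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=100, n_informative=10,
                       noise=5.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

# Ridge shrinks all coefficients but typically leaves none at exactly zero;
# Lasso typically zeroes out many of the uninformative ones.
print("Ridge coefficients exactly 0:", np.sum(ridge.coef_ == 0))
print("Lasso coefficients exactly 0:", np.sum(lasso.coef_ == 0))
```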