Or so we tell you.
Regularization: the art of punishing your model for being too smug.
When your model is overfitting, you need to add a penalty to the mix.
But not just any kind of penalty, no, no.
We're talking about L1, L2, and L1-L2-Hybrid (better known as Elastic Net) regularization.
L1: "I'm going to take away some of your coefficients, you overfitting brute!"
L2: "I'm going to make you pay for every single parameter, squared, you overfitting fiend!"
L1-L2-Hybrid: "I'm going to charge you twice for your coefficients: once for their size, and once for their square!"
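For the skeptics who think we're making all this up: here's a minimal sketch of the three penalties, using NumPy. The function names and the `lam`/`alpha` parameters are our own illustrative choices, not anyone's official API.

```python
import numpy as np

def l1_penalty(w, lam):
    # L1: punish the sum of absolute coefficient values
    # (tends to drive some coefficients exactly to zero).
    return lam * np.sum(np.abs(w))

def l2_penalty(w, lam):
    # L2: punish the sum of squared coefficients
    # (shrinks all coefficients toward zero, rarely to zero).
    return lam * np.sum(w ** 2)

def elastic_net_penalty(w, lam, alpha):
    # Elastic Net: a guilt-blend of the two.
    # alpha = 1.0 is pure L1, alpha = 0.0 is pure L2.
    return lam * (alpha * np.sum(np.abs(w))
                  + (1.0 - alpha) * np.sum(w ** 2))

w = np.array([1.0, -2.0])
print(l1_penalty(w, 0.1))           # 0.1 * (1 + 2)
print(l2_penalty(w, 0.1))           # 0.1 * (1 + 4)
print(elastic_net_penalty(w, 0.1, 0.5))
```

Crank `lam` up and the model gets humbler; crank it down and the smugness returns.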
Try our penalization subpage for more info.
Or, if you're feeling adventurous, visit our regularization_station subpage.
Where the regularization is strong, the models are weak.
Back to Overfitting Techniques.