Sharpness-Aware Minimization (SAM) trains models to perform well even when their weights are slightly perturbed, seeking flatter minima that generalize better.
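A minimal sketch of the SAM two-step update on a toy one-dimensional loss. The loss `(w - 3)**2`, the learning rate `lr`, and the perturbation radius `rho` are all made-up illustrative choices, not values from the original SAM paper:

```python
# Toy loss L(w) = (w - 3)**2, whose gradient is 2 * (w - 3).
def grad(w):
    return 2.0 * (w - 3.0)

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)
    # Step 1 (ascent): move to the worst-case neighbor within radius rho.
    eps = rho * g / (abs(g) + 1e-12)
    # Step 2 (descent): apply the gradient taken at the perturbed point
    # to the ORIGINAL weights.
    return w - lr * grad(w + eps)

w = 0.0
for _ in range(100):
    w = sam_step(w)
print(w)  # hovers near the minimum at w = 3
```

Note that the update uses the gradient from the perturbed point `w + eps` but applies it at `w`; this is what biases the search toward regions where the loss stays low under small weight perturbations.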
Early stopping halts training when the validation loss stops improving, preventing overfitting and saving compute.
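A minimal patience-based early-stopping sketch. The helper name `train_with_early_stopping`, the `patience` value, and the validation-loss sequence are hypothetical illustrations:

```python
# Stop once the validation loss fails to improve for `patience`
# consecutive epochs; the loss values below are made up.
def train_with_early_stopping(val_losses, patience=3):
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0   # improvement: reset the counter
        else:
            wait += 1              # no improvement this epoch
            if wait >= patience:
                return epoch, best # patience exhausted: halt training
    return len(val_losses) - 1, best

# Loss improves through epoch 3, then plateaus; training halts at epoch 6.
losses = [1.0, 0.8, 0.7, 0.65, 0.66, 0.67, 0.68, 0.69, 0.70]
stop_epoch, best = train_with_early_stopping(losses)
print(stop_epoch, best)  # 6 0.65
```

In practice the best model weights are also checkpointed whenever `best` improves, so the final model is the one from the best epoch rather than the last one.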