Sharpness-Aware Minimization (SAM) trains models to perform well even when their weights are slightly perturbed, seeking flatter minima that generalize better.
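A minimal sketch of the SAM two-step update on a toy quadratic loss (the loss function, step size `lr`, and radius `rho` here are illustrative assumptions, not values from any particular paper or library): first ascend to the approximate worst-case point within a small ball around the current weights, then update the original weights using the gradient taken at that perturbed point.

```python
import math

def loss(w):
    # Toy 2-D quadratic with one sharp and one flat direction (illustrative only).
    return 0.5 * (4.0 * w[0] ** 2 + 0.25 * w[1] ** 2)

def grad(w):
    # Analytic gradient of the toy loss above.
    return [4.0 * w[0], 0.25 * w[1]]

def sam_step(w, lr=0.1, rho=0.05):
    # Step 1: move to the approximate worst-case point in a rho-ball,
    # i.e. a short step along the normalized gradient direction.
    g = grad(w)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    # Step 2: update the *original* weights with the gradient at w_adv,
    # which penalizes sharp minima where the perturbed gradient is large.
    g_adv = grad(w_adv)
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]

w = [1.0, 1.0]
for _ in range(100):
    w = sam_step(w)
```

In practice the perturbation and update use minibatch gradients, and the flat direction of the loss is what SAM's perturbed gradient leaves largely untouched.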
Cross-entropy loss measures how well predicted probabilities match the true labels by penalizing confident wrong predictions heavily.