Early stopping halts training when the validation loss stops improving, preventing overfitting and saving compute.
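A minimal sketch of the early-stopping rule described above: track the best validation loss seen so far and halt once it fails to improve for a fixed number of epochs (the "patience"). The function name, the patience default, and the loss values in the usage line are illustrative assumptions, not from any particular library.

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training halts, or None if it never does.

    val_losses: per-epoch validation losses (hypothetical numbers in the example).
    patience: how many consecutive non-improving epochs to tolerate.
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss          # new best: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1      # no improvement this epoch
        if bad_epochs >= patience:
            return epoch         # patience exhausted: stop here
    return None

# Loss improves through epoch 2, then stalls; with patience=3 we stop at epoch 5.
print(early_stop_epoch([1.0, 0.8, 0.7, 0.75, 0.76, 0.77]))  # -> 5
```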
Grokking is when a model suddenly starts to generalize well long after it has already memorized the training set.
Bellman-Ford finds single-source shortest paths even when some edge weights are negative.
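A compact sketch of Bellman-Ford: relax every edge up to n-1 times, then do one more pass to detect a negative cycle reachable from the source. The edge-list format and the sample graph are illustrative assumptions.

```python
def bellman_ford(n, edges, src):
    """Shortest distances from src in a graph with n nodes.

    edges: list of (u, v, w) directed edges; w may be negative.
    Raises ValueError if a negative cycle is reachable from src.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):          # n-1 rounds of relaxation suffice
        updated = False
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:             # early exit: nothing changed this round
            break
    # One extra pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

# Hypothetical graph: the negative edge 1->2 makes the path 0->1->2 cheaper
# than the direct edge 0->2.
print(bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -3)], 0))  # -> [0, 4, 1]
```

Note that Dijkstra's algorithm would get this graph wrong, since it assumes settled distances never improve; Bellman-Ford's repeated full relaxation is what buys tolerance of negative weights.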