Early stopping halts training when the validation loss stops improving, preventing overfitting and saving compute.
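The patience-based variant of early stopping can be sketched as follows; `train_with_early_stopping`, the `losses` sequence, and the `patience` default are illustrative names for this sketch, not from the source.

```python
def train_with_early_stopping(losses, patience=3):
    """Stop when the validation loss hasn't improved for `patience` epochs.

    `losses` stands in for the per-epoch validation losses that a real
    training loop would produce. Returns (stopping epoch, best loss).
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, val_loss in enumerate(losses):
        if val_loss < best:
            best = val_loss          # new best: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1          # no improvement this epoch
            if bad_epochs >= patience:
                return epoch, best   # halt training early
    return len(losses) - 1, best
```

In practice one also checkpoints the model at each new best so the weights from the best epoch, not the last one, are kept.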
In the kernel (lazy) regime, the network's parameters stay close to their initialization throughout training, so the network is well approximated by its linearization around the initial weights and training reduces to kernel regression with a fixed kernel, the Neural Tangent Kernel (NTK). This regime arises, for example, in the infinite-width limit under standard parameterizations.
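The fixed kernel is the Gram matrix of parameter gradients at initialization, K(x, x') = ∇θf(x)·∇θf(x'). A minimal sketch of computing this empirical NTK for a one-hidden-layer ReLU network, assuming NumPy is available (all sizes and variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 3, 64, 5                        # input dim, hidden width, num points
W = rng.normal(size=(h, d)) / np.sqrt(d)  # first-layer weights at init
v = rng.normal(size=h) / np.sqrt(h)       # output weights at init

def jacobian(x):
    """Gradient of f(x) = v . relu(W x) with respect to all parameters (W, v)."""
    pre = W @ x
    act = np.maximum(pre, 0.0)
    mask = (pre > 0).astype(float)
    dW = np.outer(v * mask, x)            # df/dW
    dv = act                              # df/dv
    return np.concatenate([dW.ravel(), dv])

X = rng.normal(size=(n, d))
J = np.stack([jacobian(x) for x in X])    # n x p Jacobian at initialization
K = J @ J.T                               # empirical NTK Gram matrix (n x n)
```

In the lazy regime this Gram matrix stays approximately constant during training, so gradient descent on the squared loss follows the same trajectory as kernel regression with `K`; `K` is symmetric positive semidefinite, as any Gram matrix must be.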