In the kernel (lazy) regime, a network's parameters stay close to their initialization throughout training, so gradient descent is well approximated by kernel regression with a fixed kernel such as the Neural Tangent Kernel (NTK); the approximation becomes exact in the infinite-width limit.
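As a concrete illustration, here is a minimal JAX sketch (the architecture, layer sizes, and names are all illustrative assumptions, not a reference implementation) that computes the empirical NTK of a small MLP at initialization: the kernel value for a pair of inputs is the inner product of their parameter gradients.

```python
import jax
import jax.numpy as jnp

# Hypothetical two-layer MLP with scalar output; sizes are illustrative.
def init_params(key, d_in=4, d_hidden=64):
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (d_hidden, d_in)) / jnp.sqrt(d_in),
        "W2": jax.random.normal(k2, (d_hidden,)) / jnp.sqrt(d_hidden),
    }

def f(params, x):
    # f(x) = W2 . relu(W1 x)
    return params["W2"] @ jax.nn.relu(params["W1"] @ x)

def empirical_ntk(params, x1, x2):
    # K(x1, x2) = <df/dtheta(x1), df/dtheta(x2)>, evaluated at the current params
    g1 = jax.grad(f)(params, x1)
    g2 = jax.grad(f)(params, x2)
    return sum(
        jnp.vdot(a, b)
        for a, b in zip(jax.tree_util.tree_leaves(g1), jax.tree_util.tree_leaves(g2))
    )

params = init_params(jax.random.PRNGKey(0))
print(empirical_ntk(params, jnp.ones(4), jnp.arange(4.0)))
```

In the kernel regime this kernel stays (approximately) fixed during training, so the trained network's predictions match those of kernel regression with `empirical_ntk` evaluated at initialization.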
Deep learning generalization theory tries to explain why overparameterized networks can fit (interpolate) training data yet still perform well on new data.
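The interpolation-plus-generalization phenomenon can be seen even in a linear model with many more features than samples. Below is a toy sketch (the random-features model, target function, and all sizes are illustrative assumptions): the minimum-norm solution fits the training data exactly yet still attains low test error.

```python
import jax
import jax.numpy as jnp

k_w, k_x, k_test = jax.random.split(jax.random.PRNGKey(1), 3)
n_train, n_test, d, n_features = 20, 200, 2, 500  # far more features than samples

W = jax.random.normal(k_w, (n_features, d))        # fixed random first layer
def features(X):
    return jax.nn.relu(X @ W.T) / jnp.sqrt(n_features)

def target(X):                                     # ground-truth function to learn
    return jnp.sin(X[:, 0]) + 0.5 * X[:, 1]

X_train = jax.random.normal(k_x, (n_train, d))
X_test = jax.random.normal(k_test, (n_test, d))
y_train, y_test = target(X_train), target(X_test)

# Minimum-norm interpolating solution: theta = pinv(Phi) @ y
Phi = features(X_train)
theta = jnp.linalg.pinv(Phi) @ y_train

train_mse = jnp.mean((Phi @ theta - y_train) ** 2)          # ~0: interpolation
test_mse = jnp.mean((features(X_test) @ theta - y_test) ** 2)
print(f"train MSE ~ {train_mse:.2e}, test MSE ~ {test_mse:.3f}")
```

Explaining why implicit biases like this minimum-norm preference lead to good test performance in deep networks is the central question this theory addresses.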