Data augmentation expands the training distribution by applying label-preserving transformations to inputs, which reduces overfitting and improves generalization.
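A minimal sketch of the idea, using two hypothetical label-preserving transforms (a random horizontal flip and mild Gaussian pixel noise) on a toy grayscale array; the transform choices are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

def augment(image, rng):
    """Apply simple label-preserving transforms (illustrative choices):
    a random horizontal flip and small additive Gaussian noise."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip preserves the label
    out = out + rng.normal(0.0, 0.01, size=out.shape)  # mild pixel noise
    return np.clip(out, 0.0, 1.0)  # keep values in the valid range

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # toy grayscale "image" with values in [0, 1)
# Each call yields a slightly different but equally valid training input,
# effectively enlarging the training distribution around this example.
batch = [augment(image, rng) for _ in range(4)]
```

Because every transformed copy keeps the original label, the model sees more input variety per labeled example, which is the mechanism behind the reduced overfitting.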
The bias-variance tradeoff explains how, under squared loss, expected prediction error splits into squared bias, variance, and irreducible noise.
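The decomposition can be written explicitly; here \(f\) is the true function, \(\hat{f}\) the learned predictor (random over training sets), and \(\sigma^2\) the irreducible noise variance:

```latex
\mathbb{E}\!\left[\bigl(y - \hat{f}(x)\bigr)^2\right]
  = \underbrace{\bigl(\mathbb{E}[\hat{f}(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\!\left[\bigl(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\bigr)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```

The expectation is over training sets (and the noise in \(y\)); the noise term is a floor that no model can remove.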