The Weak Law of Large Numbers (WLLN) says that the sample average of independent, identically distributed (i.i.d.) random variables with finite mean converges in probability to the true mean: for any tolerance ε > 0, the probability that the sample average deviates from the mean by more than ε tends to zero as the sample size grows.
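A small simulation makes this concrete. The sketch below (standard library only; the Uniform(0, 1) distribution, the tolerance eps, and the trial counts are illustrative choices, not from the source) estimates how often the sample average of n draws lands within eps of the true mean 0.5, and shows that frequency rising toward 1 as n grows:

```python
import random

def fraction_within(n, eps=0.05, trials=500, seed=42):
    """Empirical estimate of P(|sample mean of n draws - 0.5| <= eps)
    for i.i.d. Uniform(0, 1) variables, whose true mean is 0.5."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if abs(mean - 0.5) <= eps:
            hits += 1
    return hits / trials

# As n grows, the sample mean concentrates around the true mean,
# so the hit rate climbs toward 1, as the WLLN predicts.
for n in (10, 100, 1000):
    print(n, fraction_within(n))
```

The WLLN guarantees only convergence in probability, so for any fixed n some samples still miss the tolerance band; the simulation shows the miss rate shrinking, not vanishing.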
Generalization bounds aim to explain why deep neural networks can perform well on unseen data despite having many parameters: they bound the gap between a model's error on its training sample and its expected error on the underlying data distribution.
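As a minimal illustration of what such a bound looks like, the sketch below computes the classical Hoeffding-style bound for a finite hypothesis class (a textbook example, not the source's method; the specific numbers are hypothetical): with probability at least 1 − δ, every hypothesis h satisfies test_error(h) ≤ train_error(h) + sqrt(ln(|H|/δ) / (2n)).

```python
import math

def hoeffding_bound(train_error, n, num_hypotheses, delta=0.05):
    """Upper bound on test error for a finite hypothesis class H:
    with probability >= 1 - delta over an i.i.d. sample of size n,
        test_error <= train_error + sqrt(ln(|H| / delta) / (2 * n)).
    """
    gap = math.sqrt(math.log(num_hypotheses / delta) / (2 * n))
    return train_error + gap

# The gap term shrinks as the sample size n grows and widens as the
# hypothesis class gets larger, which is why bounds of this classical
# form become loose for heavily overparameterized models.
print(hoeffding_bound(0.02, n=10_000, num_hypotheses=1_000_000))
```

The tension the original sentence points at is visible in the formula: naively counting parameters makes |H| astronomically large, driving the bound above 1 and rendering it vacuous, so tighter analyses must exploit structure beyond raw parameter counts.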