The Information Bottleneck (IB) principle formalizes learning a compact representation T of an input X that retains only the information in X relevant for predicting a target Y, typically by minimizing the Lagrangian I(X;T) - β I(T;Y).
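As a minimal sketch of the IB objective for discrete variables (the toy joint distribution and the stochastic encoder p(t|x) below are illustrative assumptions, not from any particular dataset):

```python
import math

def mutual_information(p_joint):
    """I(A;B) in bits for a joint distribution given as a 2D list p_joint[a][b]."""
    pa = [sum(row) for row in p_joint]
    pb = [sum(col) for col in zip(*p_joint)]
    mi = 0.0
    for i, row in enumerate(p_joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pa[i] * pb[j]))
    return mi

# Toy joint p(x, y): X and Y are correlated binary variables.
p_xy = [[0.4, 0.1],
        [0.1, 0.4]]
px = [sum(row) for row in p_xy]

# A hypothetical stochastic encoder p(t|x) mapping X to a binary T.
p_t_given_x = [[0.9, 0.1],
               [0.1, 0.9]]

# Joint p(x, t) = p(x) * p(t|x).
p_xt = [[px[i] * p_t_given_x[i][t] for t in range(2)] for i in range(2)]

# Joint p(t, y) = sum_x p(t|x) * p(x, y)  (T depends on Y only through X).
p_ty = [[sum(p_t_given_x[i][t] * p_xy[i][j] for i in range(2))
         for j in range(2)] for t in range(2)]

# IB Lagrangian: compress (small I(X;T)) while staying predictive (large I(T;Y)).
beta = 2.0
ib_objective = mutual_information(p_xt) - beta * mutual_information(p_ty)
```

Because T is computed from X alone, the data-processing inequality guarantees I(T;Y) ≤ I(X;Y); the trade-off parameter β controls how much predictive information is kept per bit of compression.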
Generalization bounds aim to explain why deep neural networks can perform well on unseen data despite having many parameters, by bounding the gap between empirical and true risk.
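To make the shape of such a bound concrete, here is the classical Hoeffding-plus-union bound for a finite hypothesis class (a textbook result, not a bound tailored to deep networks; the sample size and class size below are illustrative assumptions):

```python
import math

def uniform_convergence_bound(num_hypotheses, n, delta):
    """With probability >= 1 - delta, for every hypothesis h in a finite class H,
    |true_risk(h) - empirical_risk(h)| <= this bound (Hoeffding + union bound)."""
    return math.sqrt((math.log(num_hypotheses) + math.log(2.0 / delta)) / (2.0 * n))

# Illustrative numbers: a class of one million hypotheses, 50,000 samples.
gap = uniform_convergence_bound(num_hypotheses=10**6, n=50_000, delta=0.05)
```

The bound shrinks as the sample size n grows and grows only logarithmically with the class size, which hints at why highly expressive models can still generalize; for modern overparameterized networks, bounds of this classical form are often vacuous, motivating the tighter, network-specific analyses this line of work studies.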