The Lottery Ticket Hypothesis (LTH) posits that a large, dense neural network contains small, sparse subnetworks ("winning tickets") that, when trained in isolation from their original initialization, can match the accuracy of the full model.
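The standard procedure for finding such a subnetwork is train-prune-rewind: train the dense model, prune the smallest-magnitude weights, reset the survivors to their saved initial values, and retrain only the sparse subnetwork. A minimal NumPy sketch on a hypothetical toy regression problem (the data, model, and sparsity level are illustrative assumptions, not from the original LTH experiments):

```python
import numpy as np

# Hypothetical toy problem: linear regression with a sparse ground truth,
# used to illustrate one round of the prune-and-rewind loop.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:4] = [2.0, -1.5, 1.0, 0.5]          # only 4 of 20 features matter
y = X @ true_w + 0.01 * rng.normal(size=200)

def train(w0, mask, steps=300, lr=0.05):
    """Gradient descent on MSE; pruned (masked-out) weights stay at zero."""
    w = w0 * mask
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = (w - lr * grad) * mask           # masked weights get no updates
    return w

w_init = rng.normal(scale=0.1, size=20)      # saved initialization theta_0
mask = np.ones(20)

# 1) Train the dense model.
w_dense = train(w_init, mask)

# 2) Magnitude pruning: keep only the k largest-magnitude trained weights
#    (80% sparsity here is an arbitrary illustrative choice).
k = 4
keep = np.argsort(np.abs(w_dense))[-k:]
mask = np.zeros(20)
mask[keep] = 1.0

# 3) Rewind survivors to theta_0 and retrain the sparse subnetwork alone.
w_ticket = train(w_init, mask)

mse_dense = np.mean((X @ w_dense - y) ** 2)
mse_ticket = np.mean((X @ w_ticket - y) ** 2)
```

On this toy problem the retrained 20%-sparse subnetwork reaches an error comparable to the dense model, which is the behavior the hypothesis predicts; real LTH experiments apply the same loop iteratively to deep networks.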
Neural network expressivity studies what kinds of functions different network architectures can represent and how efficiently (e.g., with how many parameters or layers) they can do so.