Neural network expressivity studies what kinds of functions different network architectures can represent and how efficiently they can do so.
The Universal Approximation Theorem (UAT) states that a feedforward network with a single hidden layer and a non-polynomial activation function (such as sigmoid or ReLU) can approximate any continuous function on a compact set to arbitrary accuracy, provided the hidden layer is allowed to be wide enough.
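The ReLU case can be made concrete with an explicit construction: a one-hidden-layer ReLU network computes a piecewise-linear function, so placing one unit at each knot of a piecewise-linear interpolant approximates any continuous target on an interval, with error shrinking as the hidden layer widens. The sketch below (function names and the choice of `sin` as target are illustrative, not from the text) builds such a network for $f(x) = \sin x$ on $[0, 2\pi]$, with each hidden unit's output weight equal to the change in slope at its knot:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def build_relu_net(f, a, b, n_hidden):
    """One-hidden-layer ReLU net matching the piecewise-linear
    interpolant of f at n_hidden + 1 equally spaced knots on [a, b]."""
    xs = np.linspace(a, b, n_hidden + 1)   # knot locations
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)     # slope on each segment
    # Output weight of unit i = slope change at knot i
    # (first unit carries the initial slope).
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    biases = xs[:-1]                       # each unit turns on at its knot

    def net(x):
        x = np.asarray(x, dtype=float)
        h = relu(x[..., None] - biases)    # hidden layer activations
        return ys[0] + h @ coeffs          # linear output layer
    return net

net = build_relu_net(np.sin, 0.0, 2 * np.pi, n_hidden=50)
grid = np.linspace(0.0, 2 * np.pi, 1000)
err = np.max(np.abs(net(grid) - np.sin(grid)))
```

With 50 hidden units the sup-norm error on this grid is already below 0.01; doubling the width roughly quarters the error, since piecewise-linear interpolation error scales with the square of the knot spacing.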