Symmetry, equivariance, and group theory foundations for invariant neural networks on non-Euclidean domains.
8 concepts
Group theory gives a precise language for symmetries, and neural networks can exploit these symmetries to learn faster and generalize better.
Equivariance means that applying a transformation before a function gives the same result as applying a corresponding transformation after it: f(g·x) = g′·f(x).
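A minimal NumPy sketch of this idea, using shift equivariance: a circular moving average commutes with a cyclic shift (the names `shift` and `smooth` are illustrative, not from any library).

```python
import numpy as np

def shift(x, k=1):
    """The transformation: cyclically shift a 1-D signal by k positions."""
    return np.roll(x, k)

def smooth(x):
    """The function: circular moving average of each element and its neighbors."""
    return (x + np.roll(x, 1) + np.roll(x, -1)) / 3.0

x = np.array([1.0, 2.0, 3.0, 4.0])

# Equivariance: shifting then smoothing equals smoothing then shifting.
assert np.allclose(smooth(shift(x)), shift(smooth(x)))
```

The same check fails for a function that depends on absolute position (e.g. multiplying by the index), which is why equivariance is a real constraint, not a triviality.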
Group convolution combines two functions defined on a group by summing over products aligned by the group operation, generalizing the usual circular convolution on integers modulo n.
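A sketch of group convolution for the cyclic group Z_n, where it reduces exactly to circular convolution; the helper name `cyclic_group_conv` is ours, and the FFT identity is used only as an independent check.

```python
import numpy as np

def cyclic_group_conv(f, g):
    """Group convolution on Z_n: (f * g)[x] = sum_y f[y] * g[(x - y) mod n].

    The group operation is addition mod n, so "aligning by the group
    operation" means indexing g at the group element (x - y) mod n.
    """
    n = len(f)
    return np.array([sum(f[y] * g[(x - y) % n] for y in range(n))
                     for x in range(n)])

f = np.array([1.0, 2.0, 0.0, 1.0])
g = np.array([0.5, 0.5, 0.0, 0.0])

# Circular convolution is multiplication in Fourier space, so the two agree.
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
assert np.allclose(cyclic_group_conv(f, g), via_fft)
```

Replacing Z_n with a non-commutative group (e.g. rotations of a square plus translations) changes only the indexing rule, which is the sense in which this generalizes ordinary convolution.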
Message passing treats meshes and point clouds as graphs where nodes exchange information with neighbors to learn useful features.
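One round of message passing can be sketched with an adjacency matrix multiply; the graph, features, and the sum-combine update below are illustrative choices, not a specific published scheme.

```python
import numpy as np

# Toy graph: three nodes in a path 0-1-2, each with a 2-D feature vector.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])

def message_passing_step(adj, feats):
    """Each node sums its neighbors' features (adj @ feats),
    then combines the aggregate with its own features."""
    messages = adj @ feats      # aggregate: sum over neighbors
    return feats + messages     # combine: simple additive update

updated = message_passing_step(adj, feats)
# Node 1 now holds its own feature plus those of its neighbors 0 and 2.
assert np.allclose(updated[1], [2.0, 2.0])
```

Real graph networks insert learned weight matrices and nonlinearities around the aggregate and combine steps, but the neighborhood-sum structure is the same.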
Gauge equivariant networks are neural networks that respect local symmetries (gauges) on manifolds, such as how vectors rotate when you change the local reference frame on a surface.
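A loose 2-D illustration of the gauge idea, assuming the local frame change is a plane rotation: the same tangent vector gets different components in different frames, and a gauge equivariant layer must commute with that frame change (scalar scaling is the simplest map that does).

```python
import numpy as np

def frame_rotation(theta):
    """Change of local reference frame on a surface: a 2-D rotation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A tangent feature vector expressed in local frame A.
v_A = np.array([1.0, 2.0])
g = frame_rotation(np.pi / 3)   # gauge transformation: frame A -> frame B
v_B = g @ v_A                   # same geometric vector, new components

# Illustrative gauge equivariant layer: scaling commutes with every rotation.
def layer(v):
    return 0.5 * v

# Transforming the frame then applying the layer equals the reverse order,
# so the layer's output does not depend on the arbitrary frame choice.
assert np.allclose(layer(g @ v_A), g @ layer(v_A))
```

A layer that used a fixed non-rotation matrix would fail this check: its output would depend on which arbitrary frame was chosen, which is exactly what gauge equivariance rules out.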
E(n)-equivariant neural networks are models whose outputs transform predictably when inputs are rotated, translated, or reflected in n-dimensional Euclidean space.
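A NumPy sketch of the two behaviors such models combine, using E(3) acting on a point cloud: pairwise distances are invariant under rotation, translation, and reflection, while the centroid is equivariant (it moves the same way the points do). The random-orthogonal-via-QR construction is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(5, 3))      # a point cloud in R^3

def pairwise_distances(x):
    """All pairwise Euclidean distances between points."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# A random E(3) element: an orthogonal matrix Q (rotation or reflection,
# via QR decomposition) followed by a translation t.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)
transformed = points @ Q.T + t

# Invariant feature: distances are unchanged by the transformation.
assert np.allclose(pairwise_distances(points), pairwise_distances(transformed))

# Equivariant feature: the centroid transforms exactly as the inputs did.
assert np.allclose(transformed.mean(axis=0), points.mean(axis=0) @ Q.T + t)
```

E(n)-equivariant architectures build their updates from invariant quantities like these distances, so equivariance of the coordinate outputs holds by construction rather than being learned from data.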