E(n)-equivariant neural networks are models whose outputs transform predictably when inputs are rotated, translated, or reflected in n-dimensional Euclidean space.
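A minimal numerical sketch of this property (not any specific published architecture): a toy coordinate update built only from relative positions and distances, which therefore commutes with rotations, reflections, and translations. The update rule here is a hypothetical illustration, chosen for simplicity.

```python
import numpy as np

def equivariant_update(x):
    """Toy E(n)-equivariant coordinate update for points x of shape (n, d).

    Each point moves by a distance-weighted mean of its relative vectors
    to the other points. Because only relative positions and distances
    are used, the update commutes with any rigid motion of the input.
    """
    diff = x[:, None, :] - x[None, :, :]                   # (n, n, d) relative vectors
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(x))  # pad diagonal to avoid /0
    w = 1.0 / dist
    np.fill_diagonal(w, 0.0)                               # no self-interaction
    return x + (w[..., None] * diff).sum(axis=1) / (len(x) - 1)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))

# Random orthogonal matrix (rotation or roto-reflection) via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Equivariance check: rotating then updating equals updating then rotating.
lhs = equivariant_update(x @ q.T)
rhs = equivariant_update(x) @ q.T
print(np.allclose(lhs, rhs))  # True

# Translation check: shifting the input shifts the output by the same amount.
t = rng.normal(size=3)
print(np.allclose(equivariant_update(x + t), equivariant_update(x) + t))  # True
```

Real E(n)-equivariant layers (e.g. in molecular or point-cloud models) learn the weights instead of hard-coding inverse distances, but the structural idea is the same: condition only on invariant quantities and relative geometry.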
Self-attention is permutation-equivariant by default — permuting the input tokens simply permutes the outputs — so transformers need positional encodings to understand word order in sequences.
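This can be verified directly with a stripped-down self-attention layer. The sketch below uses identity projections (no learned Q/K/V weights), which is a simplifying assumption sufficient to expose the permutation behaviour, plus the standard sinusoidal encoding to show how positions break it.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x):
    """Toy single-head self-attention with identity Q/K/V projections."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def sinusoidal_pe(n, d):
    """Standard sinusoidal positional encoding for n positions, d dims."""
    pos = np.arange(n)[:, None]
    i = np.arange(d // 2)[None, :]
    angles = pos / 10000 ** (2 * i / d)
    pe = np.zeros((n, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))
perm = np.array([2, 0, 5, 1, 4, 3])  # an arbitrary non-identity permutation

# Without positions: permuting the input just permutes the output.
print(np.allclose(self_attention(x[perm]), self_attention(x)[perm]))  # True

# With positions added: the same tokens in a different order give a
# genuinely different result, so the model can distinguish word order.
with_pe = lambda z: self_attention(z + sinusoidal_pe(*z.shape))
print(np.allclose(with_pe(x[perm]), with_pe(x)[perm]))  # False
```

The first check holds for any token matrix: permuting rows of `x` permutes both the attention scores and the values consistently, so the output rows permute in lockstep.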