Sequence-to-sequence with attention lets a decoder focus on the most relevant parts of the input at each output step, rather than compressing everything into a single vector.
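A minimal sketch of what "focus on the most relevant parts of the input at each output step" means computationally: one decoder step of dot-product attention. The function and variable names (attention_step, encoder_states, decoder_state) are illustrative, not taken from any particular library.

```python
import numpy as np

def softmax(x):
    x = x - x.max()          # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

def attention_step(decoder_state, encoder_states):
    """One decoder step of dot-product attention.

    decoder_state:  (d,)    current decoder hidden state
    encoder_states: (T, d)  one encoder vector per input position
    """
    # Score each input position against the current decoder state.
    scores = encoder_states @ decoder_state          # (T,)
    # Normalize scores into attention weights over the input.
    weights = softmax(scores)                        # (T,)
    # Context vector: a weighted sum of encoder states, recomputed at every step,
    # so the decoder is not limited to a single fixed summary of the input.
    context = weights @ encoder_states               # (d,)
    return context, weights

# Toy usage: 5 input positions, hidden size 8.
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))
dec = rng.standard_normal(8)
context, weights = attention_step(dec, enc)
print(weights.round(3), context.shape)
```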
Neural Collapse describes what happens late in training, after the network reaches zero training error: the penultimate-layer features of each class concentrate tightly around their class mean.
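One standard way to make "concentrate tightly" precise is the within-class variability collapse property (NC1) of Papyan, Han, and Donoho; the formulation below follows their definitions, though the exact normalization is a modelling choice.

```latex
% Penultimate-layer features h_i with labels y_i \in \{1,\dots,K\}
\mu_c = \frac{1}{N_c} \sum_{i:\, y_i = c} h_i, \qquad
\mu_G = \frac{1}{N} \sum_{i=1}^{N} h_i

% Within- and between-class covariances of the features
\Sigma_W = \frac{1}{N} \sum_{i=1}^{N} (h_i - \mu_{y_i})(h_i - \mu_{y_i})^{\top}, \qquad
\Sigma_B = \frac{1}{K} \sum_{c=1}^{K} (\mu_c - \mu_G)(\mu_c - \mu_G)^{\top}

% NC1 (variability collapse): within-class scatter vanishes
% relative to between-class scatter as training continues past zero error
\frac{1}{K}\,\mathrm{Tr}\!\left(\Sigma_W \Sigma_B^{\dagger}\right) \;\longrightarrow\; 0
```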