Sequence-to-sequence with attention lets a decoder focus on the most relevant parts of the input at each output step, rather than compressing everything into a single vector.
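The idea above can be sketched with dot-product attention in pure Python: score each encoder hidden state against the current decoder state, softmax the scores, and take the weighted sum as the context vector. The function name `attend` and the dot-product scoring rule are illustrative choices, not a specific model's API.

```python
import math

def attend(decoder_state, encoder_states):
    """Dot-product attention over a list of encoder hidden states.

    Returns (context, weights): the context vector is a weighted sum
    of encoder states; the weights are a softmax over dot-product
    scores with the decoder state.
    """
    # Score each encoder state by its dot product with the decoder state.
    scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Context vector: attention-weighted sum of encoder states.
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights
```

At each decoding step the decoder recomputes these weights, so different output positions can attend to different parts of the input instead of relying on one fixed summary vector.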
CTC (Connectionist Temporal Classification) loss trains sequence models when the alignment between inputs (frames) and outputs (labels) is unknown: it adds a blank symbol and sums the probability of every frame-level alignment that collapses to the target label sequence.
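The sum over alignments is computed with a forward (dynamic-programming) recursion over the label sequence extended with blanks. Below is a minimal pure-Python sketch of that forward algorithm; it takes per-frame log-probabilities as a T x V list and assumes label index 0 is the blank.

```python
import math

def ctc_loss(log_probs, labels, blank=0):
    """Negative log-probability of `labels` under the CTC model.

    log_probs: T x V list of per-frame log-probabilities.
    labels: target label indices, without blanks.
    """
    # Extended sequence: blank between and around every label.
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), len(log_probs)
    NEG_INF = float("-inf")

    def logsumexp(*xs):
        m = max(xs)
        if m == NEG_INF:
            return NEG_INF
        return m + math.log(sum(math.exp(x - m) for x in xs))

    # alpha[s]: log-prob of all alignment prefixes ending at ext[s].
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG_INF] * S
        for s in range(S):
            a = alpha[s]                       # stay on the same symbol
            if s > 0:
                a = logsumexp(a, alpha[s - 1])  # advance by one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logsumexp(a, alpha[s - 2])  # skip a blank
            new[s] = a + log_probs[t][ext[s]]
        alpha = new
    # Valid alignments end on the last label or the trailing blank.
    tail = alpha[S - 2] if S > 1 else NEG_INF
    return -logsumexp(alpha[S - 1], tail)
```

For example, with T=2 frames, a vocabulary of {blank, a}, uniform per-frame probabilities of 0.5, and target "a", the three alignments "aa", "a-", and "-a" each have probability 0.25, so the loss is -log(0.75).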