Groups
Scaled dot-product attention scores how much each value in V should contribute to a given query by taking the query's dot products with the keys K, dividing by \(\sqrt{d_k}\), applying a softmax, and forming the resulting weighted sum of the values. Because every token compares with every other token, standard softmax attention costs O(n²) time and memory in the sequence length n.
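In matrix form, with the queries, keys, and values stacked row-wise into Q, K, and V, this is the standard formulation:

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V
\]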
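A minimal NumPy sketch of a single attention head may make this concrete; the function name, shapes, and toy sizes below are illustrative assumptions, not a reference implementation. The (n, n) score matrix it materializes is exactly where the O(n²) cost comes from.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Illustrative single-head attention (name and shapes assumed).

    Q: (n_q, d_k) queries, K: (n_k, d_k) keys, V: (n_k, d_v) values.
    """
    d_k = Q.shape[-1]
    # Dot product of every query with every key: an (n_q, n_k) score matrix,
    # scaled down by sqrt(d_k) to keep the softmax in a reasonable range.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key axis; subtract the row max for numerical stability.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the values: (n_q, d_v).
    return weights @ V

# Toy usage: n = 4 tokens, d_k = d_v = 8 (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```

With n queries attending to n keys, both `scores` and `weights` are n × n arrays, which is the quadratic blow-up described above.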