The first paper introduces CoPE (Contextual Position Encoding), a simple change to how models track positions: instead of counting every token, the model decides from context which tokens to count, so a position can point at higher-level units like words or sentences, which helps it handle long documents.
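To make the idea concrete, here is a minimal NumPy sketch of contextual positions as I understand them: each query gates each earlier key, the gates are summed to give a fractional position, and learned position embeddings are interpolated at that position before being added to the attention logits. The function name, shapes, and the exact scaling are my own illustrative choices, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cope_attention(q, k, v, pos_emb):
    """Single-head causal attention with contextual (CoPE-style) positions.

    q, k, v: (T, d) query/key/value vectors for one head.
    pos_emb: (P, d) learned embeddings for integer positions 0..P-1.
    """
    T, d = q.shape
    causal = np.tril(np.ones((T, T)))

    # Gate in [0, 1]: how much key j "counts" toward query i's position.
    gates = 1.0 / (1.0 + np.exp(-(q @ k.T))) * causal

    # Contextual position of key j for query i: how many counted tokens lie
    # between j and i, i.e. the suffix sum of gates over t = j..i.
    pos = np.flip(np.cumsum(np.flip(gates, axis=1), axis=1), axis=1)

    # Positions are fractional, so interpolate between the two nearest
    # integer position embeddings when scoring them against the query.
    pmax = pos_emb.shape[0] - 1
    pos = np.clip(pos, 0.0, pmax)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, pmax)
    frac = pos - lo
    pos_scores = q @ pos_emb.T                     # (T, P): q_i . e[p] for every p
    rows = np.arange(T)[:, None]
    pos_logits = (1 - frac) * pos_scores[rows, lo] + frac * pos_scores[rows, hi]

    # Content logits plus position logits, then a standard causal softmax.
    logits = (q @ k.T + pos_logits) / np.sqrt(d)
    logits = np.where(causal > 0, logits, -1e30)
    return softmax(logits, axis=-1) @ v

# Tiny usage example with random vectors.
rng = np.random.default_rng(0)
T, d, P = 8, 16, 8
out = cope_attention(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
                     rng.normal(size=(T, d)), rng.normal(size=(P, d)))
print(out.shape)  # (8, 16)
```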
The second paper shows how to convert a large pretrained Transformer into a faster hybrid model that mixes attention and RNN layers, using far less training data (about 2.3B tokens) than training such a model from scratch would need.
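The blurb does not spell out the training objective, but conversions like this usually keep the original Transformer as a teacher and distill its output distribution into the hybrid student, which is one reason only a few billion tokens suffice. The sketch below shows that generic kind of KL-based distillation loss; it illustrates the general recipe, not this paper's exact method, and all names and the temperature parameter are my own.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def distill_loss(teacher_logits, student_logits, temperature=1.0):
    """KL(teacher || student) over the vocabulary, averaged over positions.

    teacher_logits, student_logits: (seq_len, vocab) outputs of the original
    Transformer (teacher) and the hybrid attention/RNN student on the same
    token sequence.
    """
    t = log_softmax(teacher_logits / temperature)
    s = log_softmax(student_logits / temperature)
    kl = (np.exp(t) * (t - s)).sum(axis=-1).mean()
    return kl * temperature ** 2

# Toy call with random logits just to show the shapes involved.
rng = np.random.default_rng(0)
print(distill_loss(rng.normal(size=(4, 100)), rng.normal(size=(4, 100))))
```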