This paper introduces a new way to make a language model pay extra attention to the exact words you highlight in a prompt.
The paper introduces CoPE (Contextual Position Encoding), a simple change to how models track word positions that makes long documents much easier for them to understand.
OmniSIFT is a new way to compress audio and video tokens so omni-modal language models can think faster without forgetting important details.
Large Vision-Language Models (LVLMs) are great with one picture but get confused when you give them several, often mixing details from different images.