OmniSIFT is a new method for compressing audio and video tokens so that omni-modal language models can reason faster without losing important details.
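The general idea of token compression can be sketched as scoring each token's importance and keeping only the top fraction. The scoring function and keep ratio below are illustrative assumptions, not OmniSIFT's actual mechanism:

```python
import numpy as np

def compress_tokens(tokens: np.ndarray, scores: np.ndarray,
                    keep_ratio: float = 0.25) -> np.ndarray:
    """Keep the top-scoring fraction of tokens, preserving original order.

    tokens: (n, d) array of token embeddings.
    scores: (n,) importance score per token (toy heuristic here;
            OmniSIFT's real scoring is not specified in this summary).
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    # indices of the k highest-scoring tokens, re-sorted to original order
    keep = np.sort(np.argsort(scores)[-k:])
    return tokens[keep]

# 16 audio/video tokens of dimension 4, compressed down to 4 tokens
rng = np.random.default_rng(0)
toks = rng.normal(size=(16, 4))
scores = np.abs(toks).mean(axis=1)  # toy importance: mean absolute activation
small = compress_tokens(toks, scores, keep_ratio=0.25)
print(small.shape)  # (4, 4)
```

With a 4x reduction like this, the language model attends over far fewer multimodal tokens, which is where the speedup comes from.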
Large Vision-Language Models (LVLMs) handle a single image well but struggle when given several at once, often mixing up details from different images.