Parallel-Probe is a simple add-on that lets many AI “thought paths” think at once but stop early when they already agree.
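The core trick — run several answer paths in parallel and halt as soon as enough of them agree — can be sketched as follows. This is an illustrative mock-up, not the paper's implementation: the function name, the agreement threshold, and the toy "paths" are all assumptions.

```python
from collections import Counter

def early_consensus(paths, agree_frac=0.75):
    """Stop as soon as a fraction of paths report the same answer.

    `paths` is a list of iterators; each yields that path's current best
    answer after every reasoning step. Names and thresholds are
    illustrative, not from the paper.
    """
    latest = [None] * len(paths)
    for step in range(1, 100):
        for i, p in enumerate(paths):
            latest[i] = next(p, latest[i])  # keep last answer if path is done
        answer, count = Counter(a for a in latest if a is not None).most_common(1)[0]
        if count / len(paths) >= agree_frac:
            return answer, step  # early stop: enough paths already agree
    return answer, step

# Toy demo: four "paths" that converge on 42 at different speeds.
paths = [iter([41, 42, 42]), iter([42, 42, 42]),
         iter([40, 42, 42]), iter([39, 41, 42])]
print(early_consensus(paths))  # → (42, 2): stops after 2 steps, not 3
```

The saving comes from the early return: once 3 of 4 paths say 42 at step 2, the remaining steps of every path are skipped.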
LatentMem is a new memory system that helps teams of AI agents remember the right things for their specific jobs without overloading them with text.
The paper tests a simple but bold idea: show code to AI as pictures instead of plain text, then shrink those pictures to save tokens and time.
The paper shows how to make AI think faster and smarter by planning in a hidden space instead of writing long step-by-step sentences.
MemOCR is a new way for AI to remember long histories by turning important notes into a picture with big, bold parts for key facts and tiny parts for details.
When training language models with RL that uses right-or-wrong rewards, learning can stall on “saturated” problems that the model almost always solves.
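Why saturation stalls learning: with a binary reward, a prompt the model solves nearly every time yields near-identical rewards across samples, so the policy gradient it contributes is close to zero. A minimal sketch of filtering such prompts out of a training batch — the function name, thresholds, and statistics are assumptions for illustration, not the paper's method:

```python
def filter_saturated(prompt_stats, max_pass_rate=0.9, min_attempts=8):
    """Keep only prompts that still carry learning signal.

    `prompt_stats` maps prompt -> (successes, attempts). Prompts solved
    more than `max_pass_rate` of the time are dropped, since all-correct
    sample groups give (near) zero advantage under a binary reward.
    Names and thresholds are illustrative.
    """
    keep = []
    for prompt, (succ, att) in prompt_stats.items():
        if att < min_attempts or succ / att <= max_pass_rate:
            keep.append(prompt)
    return keep

stats = {"easy_sum": (16, 16),   # saturated: always solved, no gradient
         "hard_proof": (3, 16),  # still informative
         "new_task": (2, 4)}     # too few attempts to judge yet
print(filter_saturated(stats))  # → ['hard_proof', 'new_task']
```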
SPARK is a new way to train AI agents that saves compute by exploring more only at the most important moments.
Innovator-VL is a new multimodal AI model that understands both pictures and text to help solve science problems without needing mountains of special data.
This paper teaches code AIs to work more like real software engineers by adding a mid-training stage built from real development workflows.
AgencyBench is a giant test that checks how well AI agents can handle real, long, multi-step jobs, not just short puzzles.
The paper introduces Multiplex Thinking, a new way for AI to think by sampling several likely next words at once and blending them into a single super-token.
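One way to picture the "super-token" idea: instead of committing to a single sampled token, take the k most likely next tokens and blend their embedding vectors, weighted by renormalized probabilities, into one mixed input vector. The sketch below illustrates that general mechanism with toy data; the paper's exact mixing rule, names, and dimensions are not from the source.

```python
import numpy as np

def multiplex_step(logits, embed, k=4, temp=1.0):
    """Blend the top-k candidate next tokens into one mixed embedding.

    `logits` scores every vocabulary token; `embed` is the token
    embedding table. The result is a convex combination of the k most
    likely tokens' embeddings — a sketch, not the paper's exact rule.
    """
    p = np.exp((logits - logits.max()) / temp)  # stable softmax
    p /= p.sum()
    top = np.argsort(p)[-k:]           # indices of the k most likely tokens
    w = p[top] / p[top].sum()          # renormalize mass over the top-k
    return w @ embed[top]              # weighted blend, one vector

rng = np.random.default_rng(0)
logits = rng.normal(size=100)          # toy vocabulary of 100 tokens
embed = rng.normal(size=(100, 16))     # toy 16-dim embedding table
mixed = multiplex_step(logits, embed)
print(mixed.shape)  # → (16,): same size as a single token embedding
```

Because the blend has the same shape as an ordinary token embedding, it can be fed back into the model like any other input token.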
AgentOCR turns an agent’s long text history into pictures so it can remember more using fewer tokens.