The paper introduces CHAIN, a hands-on 3D playground that tests whether AI can not only see objects but also plan and act under real physics.
Panini is a way for AI to keep learning new facts without changing its brain: the facts are stored as tiny, linked question-answer pairs in an external memory.
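The external-memory idea can be sketched in a few lines. This is a hypothetical toy (the class, names, and word-overlap lookup are illustrative assumptions, not Panini's actual design): facts live outside the model as question-answer pairs, and answers are fetched by matching the incoming question.

```python
# Toy external Q&A memory (illustrative, not Panini's real implementation):
# the model's weights never change; new facts just get appended here.
class QAMemory:
    def __init__(self):
        self.facts = []  # list of (question, answer) pairs

    def add(self, question, answer):
        self.facts.append((question, answer))

    def query(self, question):
        # return the answer whose stored question shares the most words
        words = set(question.lower().split())
        best = max(
            self.facts,
            key=lambda qa: len(words & set(qa[0].lower().split())),
            default=None,
        )
        return best[1] if best else None

mem = QAMemory()
mem.add("What is the capital of France?", "Paris")
print(mem.query("capital of France"))  # Paris
```

The point of the sketch is the separation: learning a new fact is a cheap `add` call, not a gradient update.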
Big AI reasoning models often keep thinking long after they already found the right answer, wasting time and tokens.
Parallel-Probe is a simple add-on that lets many AI “thought paths” think at once but stop early when they already agree.
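The "stop early when the paths agree" idea can be illustrated with a toy loop. Everything here is an assumption for illustration (the function name, the agreement threshold, the round-based interface), not Parallel-Probe's actual API: several paths each report a current answer per round, and the loop halts as soon as enough of them match.

```python
# Illustrative sketch of early-agreement stopping (not Parallel-Probe's code):
# run parallel "thought paths" round by round; stop once a majority agree.
from collections import Counter

def solve_with_early_stop(paths, max_rounds, agree_frac=0.6):
    """paths: callables mapping a round index to that path's current answer."""
    for round_idx in range(max_rounds):
        answers = [p(round_idx) for p in paths]
        top, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= agree_frac:
            return top, round_idx + 1  # consensus reached: stop early
    return top, max_rounds

# toy paths that all converge to the answer 42 by round 2
paths = [lambda r, s=start: 42 if r >= 2 else s for start in (7, 13, 42, 42)]
answer, rounds_used = solve_with_early_stop(paths, max_rounds=10)
print(answer, rounds_used)  # 42 3
```

The saving comes from `rounds_used` being far below `max_rounds` whenever the paths converge quickly.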
LatentMem is a new memory system that helps teams of AI agents remember the right things for their specific jobs without overloading them with text.
The paper tests a simple but bold idea: show code to AI as pictures instead of plain text, then shrink those pictures to save tokens and time.
The paper shows how to make AI think faster and smarter by planning in a hidden space instead of writing long step-by-step sentences.
MemOCR is a new way for AI to remember long histories by turning important notes into a picture with big, bold parts for key facts and tiny parts for details.
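The "big, bold parts for key facts, tiny parts for details" idea amounts to spending the rendering budget by importance. The sketch below is a hypothetical simplification (the function and the linear size rule are assumptions, not MemOCR's method): each note gets a font size proportional to how important it is.

```python
# Hypothetical sketch of importance-weighted rendering (not MemOCR's code):
# key facts get large font sizes (more pixels), details get small ones.
def allocate_font_sizes(notes, max_pt=24, min_pt=6):
    """notes: list of (text, importance in [0, 1]) -> list of (text, point size)."""
    return [
        (text, round(min_pt + (max_pt - min_pt) * importance))
        for text, importance in notes
    ]

sized = allocate_font_sizes([("User's name is Ada", 1.0), ("Likes tea", 0.2)])
print(sized)  # [("User's name is Ada", 24), ('Likes tea', 10)]
```

When the rendered image is later read back, the large text survives shrinking while the small text costs almost nothing.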
When training language models with RL methods that use right-or-wrong rewards, learning can stall on "saturated" problems that the model almost always solves.
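Why saturation stalls learning: if the reward is binary and the model solves a problem nearly every time, almost every attempt earns the same reward, so there is nothing left to learn from it. A common-sense remedy (sketched here as an assumption, not necessarily the paper's method) is to track each problem's recent solve rate and stop sampling the saturated ones.

```python
# Illustrative sketch (assumed, not the paper's method): skip problems the
# model almost always solves, since right-or-wrong rewards on them carry
# essentially no learning signal.
import random

def sample_training_problem(solve_rates, saturation=0.95):
    """solve_rates: dict problem_id -> recent success rate in [0, 1]."""
    candidates = [p for p, rate in solve_rates.items() if rate < saturation]
    return random.choice(candidates) if candidates else None

rates = {"easy_sum": 0.99, "hard_proof": 0.2, "medium_algebra": 0.7}
picked = sample_training_problem(rates)
print(picked)  # one of the unsaturated problems
```

Here `easy_sum` is above the saturation threshold, so training compute is spent only where outcomes are still uncertain.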
SPARK is a new way to train AI agents that saves compute by exploring more only at the most important moments.
Innovator-VL is a new multimodal AI model that understands both pictures and text to help solve science problems without needing mountains of special data.
This paper teaches code AIs to work more like real software engineers by inserting real development workflows into the middle of their training (a "mid-training" stage).