InfiniDepth is a new way to predict depth that treats depth as a smooth, continuous function you can query at any image location, not just at the fixed pixels of a grid.
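To make "query depth anywhere, not just at pixels" concrete, here is a minimal sketch that is not InfiniDepth's actual method: it just bilinearly interpolates a coarse depth grid so you can ask for depth at fractional coordinates. The function name `continuous_depth` and the toy grid are illustrative assumptions.

```python
import numpy as np

def continuous_depth(grid, x, y):
    # Query a depth value at any real-valued (x, y) inside the grid
    # by bilinearly interpolating the four surrounding pixels.
    h, w = grid.shape
    x, y = np.clip(x, 0, w - 1), np.clip(y, 0, h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
    bot = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
    return (1 - fy) * top + fy * bot

grid = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
print(continuous_depth(grid, 0.5, 0.5))  # 2.5, the centre of the 2x2 patch
```

A learned model would replace the interpolation with a neural network, but the interface is the same: any continuous (x, y) in, one depth out.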
LTX-2 is an open-source model that makes video and sound together from a text prompt, so the picture and audio match in time and meaning.
This paper fixes a common problem in multimodal AI: models can understand pictures and words well but stumble when asked to create matching images.
Unified Thinker separates “thinking” (planning) from “drawing” (image generation) so complex instructions get turned into clear, doable steps before any pixels are painted.
This paper shows that training a language model with reinforcement learning on just one super well-designed example can boost reasoning across many school subjects, not just math.
This paper introduces SOP, a system that lets many real robots learn new skills online at the same time while keeping one shared brain (policy).
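The "many robots, one shared brain" idea can be sketched in a few lines. This is not SOP's algorithm; it is a toy bandit-style example (the `SharedPolicy` class, states, and rewards are all made up) showing how several agents acting in parallel can update a single shared policy online.

```python
import random

class SharedPolicy:
    # One table-based policy updated online from every robot's experience.
    def __init__(self):
        self.q = {}  # (state, action) -> running value estimate

    def act(self, state, actions, eps=0.2):
        # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
        if random.random() < eps:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, lr=0.5):
        key = (state, action)
        self.q[key] = self.q.get(key, 0.0) + lr * (reward - self.q.get(key, 0.0))

random.seed(0)
policy = SharedPolicy()
# Three simulated "robots" all feed experience into the same policy.
for step in range(200):
    for robot in range(3):
        a = policy.act("grasp", [0, 1])
        reward = 1.0 if a == 1 else 0.0  # in this toy, action 1 is always better
        policy.update("grasp", a, reward)

print(policy.q[("grasp", 1)] > policy.q[("grasp", 0)])  # True
```

Because every robot writes into the same value table, experience gathered by one robot immediately improves the behavior of all the others.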
MMFormalizer is a new system that turns problems with pictures and words (like physics scenes or geometry diagrams) into strict, checkable math statements and proofs.
The authors built a simple six-agent system to see if today’s AI models could plan, run, and write a research paper mostly on their own.
Large reasoning models can often find the right math answer in their “head” before finishing their written steps, but this works best in languages with lots of training data like English and Chinese.
Large language models (LLMs) are good at many math problems but often mess up simple counting when the list gets long.
DreamStyle is a single video-stylizing model that can follow text, copy a style image, or continue from a stylized first frame—without switching tools.
MiMo-V2-Flash is a giant but efficient language model that uses a team-of-experts design to think well while staying fast.
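The "team-of-experts" design it refers to is a mixture-of-experts layer: a small gating network scores all experts, only the top few run, and their outputs are mixed. The sketch below is a generic illustration of that routing idea, not MiMo-V2-Flash's implementation; the experts here are just random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, gate_w, experts, k=2):
    # Route input x to the top-k experts chosen by the gating network,
    # then mix their outputs weighted by softmaxed gate scores.
    scores = x @ gate_w                       # one score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the selected k
    return sum(w * experts[i](x) for w, i in zip(weights, top))

d, n_experts = 4, 8
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" is a small linear map in this toy example.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (4,)
```

The efficiency win is that only k of the n experts do any work per token, so the model can hold many parameters while paying the compute cost of a much smaller one.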