Mixture-of-Experts (MoE) language models don’t split cleanly into domain specialists; instead, a small, stable group of experts gets chosen again and again across many subjects.
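The "chosen again and again" behavior lives in the router that picks experts per token. As a toy illustration only (simulated logits, standard top-k gating, not the paper's code), routing looks roughly like this:

```python
import numpy as np

# Minimal sketch of standard top-k MoE routing (a generic illustration,
# not any specific model's implementation).
def topk_route(router_logits, k=2):
    """Return the k chosen expert indices and their softmax weights per token."""
    top = np.argsort(router_logits, axis=-1)[:, -k:]          # indices of k best-scoring experts
    picked = np.take_along_axis(router_logits, top, axis=-1)  # their logits
    w = np.exp(picked - picked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                        # renormalize over the k picks
    return top, w

# 4 tokens, 8 experts. If routing concentrates, the same experts
# show up for almost every token regardless of input.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8))
logits[:, [1, 5]] += 3.0   # simulate a "popular" stable pair of experts
experts, weights = topk_route(logits, k=2)
print(experts)   # the boosted experts tend to dominate across rows
```

The finding in the summary is that, in real MoE language models, this concentration appears across many different subject domains rather than splitting into per-domain specialists.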
InfiniDepth is a new way to predict depth that treats depth as a smooth, continuous function of image position, so you can ask for depth at any location, not just the fixed pixels of a grid.
LTX-2 is an open-source model that makes video and sound together from a text prompt, so the picture and audio match in time and meaning.
Unified Thinker separates “thinking” (planning) from “drawing” (image generation) so complex instructions get turned into clear, doable steps before any pixels are painted.
This paper introduces SOP, a system that lets many real robots learn new skills online at the same time while keeping one shared brain (policy).
MMFormalizer is a new system that turns problems with pictures and words (like physics scenes or geometry diagrams) into strict, checkable math statements and proofs.
The authors built a simple six-agent system to see if today’s AI models could plan, run, and write a research paper mostly on their own.
Large language models (LLMs) are good at many math problems but often mess up simple counting when the list gets long.
DreamStyle is a single video-stylizing model that can follow text, copy a style image, or continue from a stylized first frame, all without switching tools.
MiMo-V2-Flash is a giant but efficient language model that uses a mixture-of-experts (team-of-specialists) design to think well while staying fast.
AnyDepth is a new, simple way for a computer to tell how far things are in a picture using just one image (monocular depth).
SimpleMem is a new memory system that helps AI remember long conversations without wasting space or tokens.