KARL is a smart search helper that learns to look up information step by step and explain answers using the facts it finds.
Multi-agent systems are like teams of smart helpers, but one bad message can mislead the whole team.
SAM 3D Body (3DB) is a model that turns a single photo of a person into a full 3D mesh of the body, hands, and feet with state-of-the-art accuracy.
The paper asks a simple question: which kind of step-by-step reasoning helps small language models learn best, and why?
AgentArk teaches one language model to think like a whole team of models that debate, so it can solve tough problems quickly without running a long, expensive debate at answer time.
LatentMem is a new memory system that helps teams of AI agents remember the right things for their specific jobs without overloading them with text.
RLAnything is a new reinforcement learning (RL) framework that trains three things jointly: the policy (the agent), the reward model (the judge), and the environment (the tasks).
LingBot-VLA is a robot brain that listens to language, looks at the world, and produces smooth actions to get tasks done.
The paper shows a new way to teach AI assistants how to use tools in many-step conversations by mining ordinary text on the internet for step-by-step “how-to” knowledge.
JudgeRLVR teaches a model to be a strict judge of answers before it learns to generate them, which trims bad ideas early.
GR-Dexter is a full package—new robot hands, a smart AI brain, and lots of carefully mixed data—that lets a two-handed robot follow language instructions to do long, tricky tasks.
UniUGP is a single system that learns to understand road scenes, explain its thinking, plan safe paths, and even imagine future video frames.