LMEB is a new test that checks whether text-embedding models can remember and find information across long stretches of text, not just in short, neat passages.
RoboPocket turns an ordinary smartphone into a pocket robot coach that helps you fix robot mistakes instantly—without touching a robot.
RealWonder is a system that turns a single picture and 3D physical actions (like pushes, wind, and robot gripper moves) into a realistic video in real time.
Reasoning models often talk too much, and those extra words can actually make them more wrong.
Robots need many different ways to grab things, just like people use pinch, tripod, whole-hand, or two-handed grasps.
KARL is a smart search helper that learns to look up information step by step and explain answers using the facts it finds.
Vision Transformers (ViTs) are great at recognizing what is in a whole image but often blur the tiny details needed to label each pixel (segmentation).
Multimodal AI models handle text, images, and audio, but their signals are very different in size, which breaks standard low‑bit compression methods.
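The scale mismatch described above can be illustrated with a toy example. Everything below is hypothetical (made-up numbers and names, generic per-tensor vs. per-modality quantization), a sketch of the general failure mode rather than the paper's actual method:

```python
# Hypothetical illustration: quantizing two modality streams with one
# shared int8 scale vs. a separate scale per modality.

def quantize_int8(values, scale):
    """Round values to int8 steps using the given scale."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(q, scale):
    return [x * scale for x in q]

text_acts  = [0.01, -0.02, 0.015]   # small-magnitude signals
image_acts = [12.0, -9.5, 7.25]     # much larger signals

# A shared per-tensor scale is dominated by the image stream...
shared_scale = max(abs(v) for v in text_acts + image_acts) / 127
lossy_text = dequantize(quantize_int8(text_acts, shared_scale), shared_scale)
# ...so every text activation rounds to zero.

# A per-modality scale preserves both streams.
text_scale = max(abs(v) for v in text_acts) / 127
ok_text = dequantize(quantize_int8(text_acts, text_scale), text_scale)
```

With one shared scale the small-magnitude stream collapses to all zeros, while a per-modality scale keeps it within rounding error, which is the kind of mismatch that breaks standard low-bit compression.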
Timer-S1 is a huge time-series model (8.3B parameters, only 0.75B used per step) that predicts the future by thinking step-by-step inside one forward pass.
DARE is a new way for AI assistants to find the right R functions by also looking at what the data looks like, not just the words in the question.
This paper says we should test AI the way real life works: by letting it ask questions, gather clues, and make smart moves step by step under a limited budget.
Helios is a 14-billion-parameter video model that can make minute-long videos in real time at about 19.5 frames per second on a single NVIDIA H100 GPU.