RubricBench is a new benchmark that checks whether AI judges can use clear, checklist-style rules (rubrics) the way humans do.
Deep research agents write long reports, but older benchmarks often judge only how smooth the writing sounds and whether it includes links, not whether the facts are true today or the logic really holds.
AIRS-Bench is a new test suite that checks whether AI research agents can do real machine learning research from start to finish, not just answer questions.
This paper builds an AI team that can make real full-stack websites (frontend, backend, and database) from plain English instructions.
This paper builds a smart team of AI helpers, called MEnvAgent, that automatically sets up the right computer environments for code projects in many languages.
CAR-bench is a new 'driving test' for AI assistants that checks if they can stay careful, honest, and consistent during real back-and-forth conversations in a car.
LingBot-VLA is a robot brain that listens to language, looks at the world, and plans smooth actions to get tasks done.
This paper builds a big, fair playground (a benchmark) to test many EEG foundation models side by side under the same rules.
This paper fixes a hidden flaw in a popular image tokenizer (FSQ) with a simple one-line change to its activation function.
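For intuition, here is a minimal PyTorch sketch of the standard FSQ quantization step (the original FSQ design, not this paper's version): each channel is squashed by a bounded activation, scaled to a small integer grid, and rounded, with gradients passed straight through the rounding. The tanh bounding below is the original FSQ choice and the spot a one-line activation change would target; the paper's actual replacement isn't given in this summary.

```python
import torch

def fsq_quantize(z: torch.Tensor, levels: int = 5) -> torch.Tensor:
    """Standard FSQ step (per channel, odd number of levels):
    bound, scale to a grid, round, straight-through gradient."""
    half = (levels - 1) / 2
    # Bounded activation: original FSQ uses tanh here; this is the
    # kind of line a one-line activation fix would change.
    bounded = torch.tanh(z) * half
    # Round each channel to the nearest integer grid point; the grid
    # over all channels forms the implicit codebook.
    quantized = torch.round(bounded)
    # Straight-through estimator: the forward pass uses the rounded
    # values, the backward pass flows through the pre-rounding values.
    return bounded + (quantized - bounded).detach()
```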
FutureOmni is the first benchmark that tests if multimodal AI models can predict what happens next from both sound and video, not just explain what already happened.
ToolPRMBench is a new benchmark that checks, step by step, whether an AI agent using tools picks the right next action.
The paper shows that language models with a search tool often look up too much information, which wastes compute and can make answers worse on unanswerable questions.