SPARK is a new way to train AI agents that saves compute by exploring more only at the most important moments.
VERGE is a teamwork system where an AI writer (an LLM) works with a strict math checker (an SMT solver) to make answers both smart and logically sound.
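The writer-plus-checker teamwork can be sketched as a propose-and-verify loop. This is a toy illustration with stand-in functions (`propose_candidates`, `check`), not the actual VERGE pipeline or a real LLM/SMT integration:

```python
# Sketch of a propose-and-verify loop: an untrusted generator suggests
# answers, and a strict checker accepts only the ones that satisfy a
# hard logical constraint. Both components here are toy stand-ins.

def propose_candidates(question):
    # Stand-in "LLM writer": emits plausible but unverified answers.
    return [3, 4, 5, 6]

def check(question, answer):
    # Stand-in "math checker": accepts only answers satisfying the
    # constraint (here, x * x == 25 for the toy question below).
    return answer * answer == 25

def solve(question):
    for candidate in propose_candidates(question):
        if check(question, candidate):
            return candidate  # verified answer
    return None  # nothing passed the checker

print(solve("find x with x^2 = 25"))  # -> 5
```

The key property the loop preserves: the creative component can be wrong as often as it likes, because nothing reaches the output without passing the checker.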
This paper shows a simple way for AI models to keep learning new things without forgetting what they already know.
Big AI models used to get better by scaling up their size or reading longer texts, but both of those tricks are running out of steam.
The paper argues that making and using pictures inside an AI’s thinking can help it reason more like humans, especially for real-world, physical and spatial problems.
Youtu-VL is a new kind of vision-language model that learns to predict both words and tiny image pieces, not just words.
AACR-Bench is a new test set that checks how well AI can do code reviews using the whole project, not just one file.
Selective Steering is a new way to gently nudge a language model’s inner thoughts without breaking its flow or skills.
Large language model (LLM) post-training has uneven work per GPU because some text sequences are much longer than others.
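One common remedy for this kind of imbalance is greedy longest-first packing: sort sequences by length and always hand the next one to the least-loaded GPU. The sketch below shows that heuristic in miniature; it is a generic illustration, not the scheduler any particular paper proposes:

```python
import heapq

def balance(seq_lens, n_gpus):
    """Greedy longest-first assignment: give each sequence to the
    currently least-loaded GPU, so no single GPU ends up with all
    the long sequences. A standard heuristic, shown for illustration."""
    heap = [(0, g) for g in range(n_gpus)]  # (total tokens, gpu id)
    heapq.heapify(heap)
    assignment = {g: [] for g in range(n_gpus)}
    for length in sorted(seq_lens, reverse=True):
        load, g = heapq.heappop(heap)       # least-loaded GPU so far
        assignment[g].append(length)
        heapq.heappush(heap, (load + length, g))
    return assignment

buckets = balance([8192, 512, 4096, 256, 2048, 1024], 2)
print(sorted(sum(v) for v in buckets.values()))  # -> [7936, 8192]
```

With random assignment, one 8192-token sequence can easily double one GPU's work; longest-first packing keeps the two loads within a few percent of each other.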
Innovator-VL is a new multimodal AI model that understands both pictures and text to help solve science problems without needing mountains of special data.
LLMs are usually trained by treating every question the same and giving each one the same number of tries, which wastes compute on easy problems and neglects hard ones.
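The alternative to a uniform budget is to weight each question by how often the model currently fails it. The rule below (weight = 1 minus pass rate) is a hypothetical allocation scheme chosen for illustration, not the specific method of any paper:

```python
def allocate_rollouts(pass_rates, total_budget):
    """Give more sampling attempts to questions the model fails more
    often. Hypothetical rule: weight each question by (1 - pass rate),
    then split the budget proportionally."""
    weights = [1.0 - p for p in pass_rates]
    total = sum(weights) or 1.0              # avoid divide-by-zero
    raw = [total_budget * w / total for w in weights]
    return [max(1, round(r)) for r in raw]   # every question keeps >= 1 try

# pass rates: easy (0.9), medium (0.5), hard (0.1)
print(allocate_rollouts([0.9, 0.5, 0.1], 30))  # -> [2, 10, 18]
```

Under a uniform budget each question would get 10 tries; here the easy one is cut to 2 and the hard one boosted to 18, which is exactly the reallocation the sentence above argues for.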
SimpleSeg teaches a multimodal language model to outline objects by writing down a list of points, like connecting the dots, instead of using a special segmentation decoder.
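The points-as-text idea can be made concrete with a round trip: serialize a contour as plain tokens the model could emit, parse it back, and measure the traced region. The serialization format here (`"x,y"` pairs separated by spaces) is an assumption for illustration, not SimpleSeg's actual output format:

```python
def polygon_to_text(points):
    # Serialize a contour as plain text tokens, the way a language
    # model would write it out ("connect the dots" as a string).
    return " ".join(f"{x},{y}" for x, y in points)

def text_to_polygon(s):
    # Parse the emitted tokens back into (x, y) vertex pairs.
    return [tuple(map(int, tok.split(","))) for tok in s.split()]

def shoelace_area(points):
    # Area enclosed by the predicted contour (shoelace formula).
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
text = polygon_to_text(square)
print(text)                                   # 0,0 10,0 10,10 0,10
print(shoelace_area(text_to_polygon(text)))   # -> 100.0
```

The point of the design is visible in the round trip: because the mask is just a string of vertices, the same next-token decoder that writes words can also "write" a segmentation, with no extra mask head.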