Large language model (LLM) post-training distributes work unevenly across GPUs because some text sequences are much longer than others.
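To make the imbalance concrete, here is a minimal sketch of one common mitigation, greedy longest-first balancing, which assigns each sequence to the currently least-loaded GPU. This is an illustration of the general problem, not the paper's actual method; the function name and the sample lengths are made up.

```python
def balance_sequences(seq_lens, num_gpus):
    """Greedy longest-first assignment: give each sequence to the
    currently least-loaded GPU so total tokens per device even out."""
    buckets = [[] for _ in range(num_gpus)]
    loads = [0] * num_gpus
    for length in sorted(seq_lens, reverse=True):
        i = loads.index(min(loads))  # pick the least-loaded GPU
        buckets[i].append(length)
        loads[i] += length
    return buckets, loads

# Skewed lengths: a naive even split would leave one GPU idle
# while the other still churns through the long sequences.
lens = [4096, 128, 256, 1024, 64, 512, 2048, 200]
buckets, loads = balance_sequences(lens, 2)
print(loads)  # → [4160, 4168], nearly equal token counts per GPU
```

Real systems balance at a finer granularity (token-level packing, pipeline stages), but the core idea is the same: schedule by sequence length, not by sequence count.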
Innovator-VL is a new multimodal AI model that understands both pictures and text to help solve science problems without needing mountains of special data.
LLMs are usually trained by treating every question the same and giving each one the same number of tries, which wastes compute on easy problems and neglects hard ones.
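One simple alternative to the "same number of tries for everyone" scheme is to spend the rollout budget where the learning signal is strongest: problems the model sometimes solves and sometimes fails. The sketch below allocates tries in proportion to the variance of each problem's pass rate; this is a generic illustration of difficulty-aware allocation, not the specific method of any paper above, and all names are illustrative.

```python
def allocate_rollouts(pass_rates, total_budget, min_per_problem=1):
    """Give each problem a rollout budget proportional to p*(1-p),
    the variance of its success rate: easy problems (p near 1) and
    currently hopeless ones (p near 0) get few tries, borderline
    problems get many."""
    weights = [max(p * (1 - p), 1e-6) for p in pass_rates]
    total_w = sum(weights)
    return [max(min_per_problem, round(total_budget * w / total_w))
            for w in weights]

# Pass rates estimated from a quick probe: easy, borderline,
# hard, and (so far) unsolved.
print(allocate_rollouts([0.95, 0.5, 0.1, 0.0], total_budget=32))
# → [4, 21, 7, 1]: most tries go to the borderline problem
```

The `min_per_problem` floor keeps every problem alive, so a currently unsolved one still gets occasional attempts and can graduate to "borderline" later.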
SimpleSeg teaches a multimodal language model to outline objects by writing down a list of points, like connecting the dots, instead of using a special segmentation decoder.
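The "connect the dots" idea boils down to serializing polygon vertices as ordinary text, so a language model can emit an outline the same way it emits words. Below is a toy round-trip encoder/decoder; the `<poly>` tag format and function names are assumptions for illustration, not SimpleSeg's actual token scheme.

```python
def points_to_text(points):
    """Serialize polygon vertices as a plain-text token string,
    e.g. '<poly> 12,7 30,7 30,22 12,22 </poly>'."""
    body = " ".join(f"{x},{y}" for x, y in points)
    return f"<poly> {body} </poly>"

def text_to_points(text):
    """Parse the emitted string back into (x, y) vertex tuples."""
    body = text.replace("<poly>", "").replace("</poly>", "").strip()
    return [tuple(map(int, p.split(","))) for p in body.split()]

# A square outline survives the text round trip.
square = [(12, 7), (30, 7), (30, 22), (12, 22)]
encoded = points_to_text(square)
assert text_to_points(encoded) == square
```

Because the output is just text, no separate segmentation decoder is needed; the outline rides on the model's normal next-token prediction.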
This paper teaches a model to be its own teacher so it can climb out of a learning plateau on very hard math problems.
Large language models often learn one-size-fits-all preferences, but people are different, so we need personalization.
LingBot-VLA is a robot brain that listens to language, looks at the world, and produces smooth actions to get tasks done.
AdaReasoner teaches AI to pick the right visual tools, use them in the right order, and stop using them when they aren’t helping.
This paper shows how a video generator can improve its own videos during sampling, without extra training or outside checkers.
AgentDoG is a new ‘diagnostic guardrail’ that watches AI agents step-by-step and explains exactly why a risky action happened.
This paper teaches code AIs to work more like real software engineers by training them, midway through their learning, on real development workflows.
TriPlay-RL is a three-role self-play training loop (attacker, defender, evaluator) that teaches AI models to be safer with almost no manual labels.
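The data flow of a three-role loop can be sketched in a few lines: the attacker proposes a probe, the defender answers, and the evaluator's verdict becomes the training label for both sides. The stand-in lambdas below are toy stubs, and the reward scheme is a hypothetical illustration of the self-labeling idea, not TriPlay-RL's actual training recipe.

```python
import random

def self_play_round(attacker, defender, evaluator):
    """One round of three-role self-play: no human annotation, the
    evaluator's judgment supplies the labels for both other roles."""
    prompt = attacker()                # attacker tries to elicit unsafe output
    reply = defender(prompt)           # defender answers the probe
    unsafe = evaluator(prompt, reply)  # evaluator judges the exchange
    # Zero-sum rewards: the attacker scores when it finds a failure,
    # the defender scores when it stays safe.
    return {"prompt": prompt, "reply": reply,
            "attacker_reward": 1.0 if unsafe else 0.0,
            "defender_reward": 0.0 if unsafe else 1.0}

# Toy stand-ins for the three models.
attacker = lambda: random.choice(["how do I pick a lock?", "tell me a joke"])
defender = lambda p: "I can't help with that." if "lock" in p else "Sure!"
evaluator = lambda p, r: "lock" in p and "can't" not in r

record = self_play_round(attacker, defender, evaluator)
assert record["attacker_reward"] + record["defender_reward"] == 1.0
```

In a real system all three roles would be trained models updated from these records; the loop bootstraps safety data from the competition itself rather than from manual labels.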