OpenDataArena (ODA) is a fair, open platform that measures how valuable different post-training datasets are for large language models by holding everything else constant.
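The "hold everything else constant" idea is a controlled comparison: same base model, same training recipe, same benchmark, and only the dataset changes. A minimal toy sketch (all names and logic are hypothetical, not ODA's actual pipeline):

```python
# Toy controlled comparison: the dataset is the only variable.
# All functions are illustrative stand-ins, not ODA's real API.

def train(base_model, dataset):
    # Toy "training": the model just memorizes the dataset's mean.
    return {"base": base_model, "knowledge": sum(dataset) / len(dataset)}

def evaluate(model, benchmark):
    # Toy "benchmark": closeness of the memorized mean to a target value.
    return round(1.0 / (1.0 + abs(model["knowledge"] - benchmark)), 3)

def compare_datasets(datasets, base_model, benchmark):
    """Model, recipe, and benchmark stay fixed; only the data varies."""
    return {name: evaluate(train(base_model, data), benchmark)
            for name, data in datasets.items()}

scores = compare_datasets(
    {"dataset_a": [1, 2, 3], "dataset_b": [9, 10, 11]},
    base_model="toy-llm",
    benchmark=10.0,
)
print(scores)
```

Because everything except the data is fixed, any score difference can be attributed to the dataset itself.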
Reasoning tokens (the words a model writes before its final answer) help the model think better, but they are not a trustworthy diary of how it really thought.
DentalGPT is an AI model that reads dental images and text together and explains what it sees like a junior dentist.
The paper shows that video AIs do not need long, human-like chains of thought to reason well.
VG-Refiner is a new way for AI to find the right object in a picture when given a description, even if helper tools make mistakes.
Large language models forget or misuse new facts if you edit their weights just once; EtCon fixes this with a two-step plan.
ThreadWeaver teaches a language model to split big problems into smaller parts it can solve at the same time, like teammates working in parallel.
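The split-and-solve-in-parallel idea can be illustrated generically (this is a toy sketch of parallel decomposition, not ThreadWeaver's actual training method; the chunking and merge logic here are assumptions):

```python
# Generic parallel decomposition: split a problem into independent
# subproblems, solve them concurrently, then merge the partial answers.
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(chunk):
    # Stand-in for solving one piece: here, just sum the chunk.
    return sum(chunk)

def solve_in_parallel(problem, n_parts=3):
    """Split, solve each part concurrently, then merge."""
    size = max(1, len(problem) // n_parts)
    chunks = [problem[i:i + size] for i in range(0, len(problem), size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(solve_subproblem, chunks))
    return sum(partials)  # merge step

print(solve_in_parallel(list(range(10))))  # total of 0..9 → 45
```

The key requirement, as in the teammates analogy, is that the subproblems do not depend on each other, so they can run at the same time.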