Millions of public AI models exist, but downloads are concentrated on a tiny set of "official" checkpoints, which are not always the best performers.
This paper asks whether large language models (LLMs) can act as "world models" that predict what happens next in text-based environments, rather than just the next word in a sentence.