How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (17)

Filtered by: #scaling laws

Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space

Intermediate · Xingwei Qu, Shaowen Wang et al. · Dec 31 · arXiv

Language is lumpy: easy stretches and tricky jumps are mixed together, yet conventional models spend the same effort on every word.

#Dynamic Large Concept Models · #semantic boundaries · #latent reasoning

From Word to World: Can Large Language Models be Implicit Text-based World Models?

Intermediate · Yixia Li, Hongru Wang et al. · Dec 21 · arXiv

This paper asks whether large language models (LLMs) can act as "world models" that predict what happens next in text-based environments, not just the next word in a sentence.

#world models · #next-state prediction · #text-based environments

Towards Scalable Pre-training of Visual Tokenizers for Generation

Intermediate · Jingfeng Yao, Yuda Song et al. · Dec 15 · arXiv

The paper tackles a paradox: visual tokenizers that achieve excellent pixel reconstructions often produce worse images when used for generation.

#visual tokenizer · #latent space · #Vision Transformer

Scaling Laws for Code: Every Programming Language Matters

Intermediate · Jian Yang, Shawn Guo et al. · Dec 15 · arXiv

Different programming languages scale differently when training code AI models, so treating them all identically wastes compute and lowers performance.

#multilingual code pre-training · #scaling laws · #language-specific scaling

Scaling Behavior of Discrete Diffusion Language Models

Intermediate · Dimitri von Rütte, Janis Fluri et al. · Dec 11 · arXiv

This paper studies how a newer kind of language model, the discrete diffusion language model (DLM), improves as it is given more data, larger model sizes, and more compute.

#discrete diffusion · #language models · #scaling laws