How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (5)


The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies

Intermediate
Chenxu Wang, Chaozhuo Li et al. Β· Feb 10 Β· arXiv

The paper proves a three-way no-win situation (a trilemma): an AI society cannot be closed off, keep learning forever, and stay perfectly safe for humans all at the same time.

#self-evolving AI Β· #multi-agent systems Β· #AI safety

No Global Plan in Chain-of-Thought: Uncover the Latent Planning Horizon of LLMs

Intermediate
Liyan Xu, Mo Yu et al. Β· Feb 2 Β· arXiv

Large language models don’t map out a full step-by-step plan before they start thinking; they mostly plan just a little bit ahead.

#chain-of-thought Β· #latent planning horizon Β· #Tele-Lens

Language of Thought Shapes Output Diversity in Large Language Models

Intermediate
Shaoyang Xu, Wenxuan Zhang Β· Jan 16 Β· arXiv

The paper shows that changing the language a model 'thinks in' (its language of thought) can make its English answers more varied without making them much worse in quality.

#language of thought Β· #output diversity Β· #multilingual reasoning

Multiplex Thinking: Reasoning via Token-wise Branch-and-Merge

Intermediate
Yao Tang, Li Dong et al. Β· Jan 13 Β· arXiv

The paper introduces Multiplex Thinking, a new way for AI to think by sampling several likely next words at once and blending them into a single super-token.

#Multiplex Thinking Β· #chain-of-thought Β· #continuous token
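The "sample several likely next words and blend them" idea can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function name, the top-k cutoff, and the probability-weighted mixture are assumptions about how such a blend could work.

```python
import numpy as np

def multiplex_step(logits, embedding_table, k=4):
    """Blend the k most likely next tokens into one continuous
    'super-token' embedding, weighted by renormalized probability
    (illustrative sketch only)."""
    # softmax over the vocabulary
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # keep the k most likely tokens and renormalize their mass
    top = np.argsort(probs)[-k:]
    w = probs[top] / probs[top].sum()
    # probability-weighted mixture of the candidates' embeddings
    return w @ embedding_table[top]

rng = np.random.default_rng(0)
logits = rng.normal(size=100)        # toy next-token logits
emb = rng.normal(size=(100, 16))     # toy 16-dim embedding table
soft_token = multiplex_step(logits, emb, k=4)
print(soft_token.shape)              # (16,)
```

The blended vector has the same shape as a single token embedding, so it can be fed back into the model in place of one discrete token.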

Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models

Intermediate
Mengqi He, Xinyu Tian et al. Β· Dec 26 Β· arXiv

The paper shows that when vision-language models write captions, only a small set of uncertain words (about 20%) act like forks that steer the whole sentence.

#vision-language models Β· #autoregressive generation Β· #entropy
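The "fork" tokens can be found by ranking generation steps by the Shannon entropy of their next-token distributions and keeping the top slice. A minimal sketch, assuming the ~20% cutoff from the summary; the function names and toy data are illustrative, not the paper's code:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of one next-token distribution."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def fork_positions(step_probs, frac=0.2):
    """Indices of the highest-entropy generation steps — the
    uncertain 'fork' tokens (cutoff fraction is an assumption)."""
    h = np.array([entropy(p) for p in step_probs])
    k = max(1, int(len(h) * frac))
    return set(np.argsort(h)[-k:].tolist())

# Toy caption of 10 steps: two deliberately uncertain positions.
rng = np.random.default_rng(1)
steps = []
for i in range(10):
    logits = rng.normal(size=50)
    logits *= 0.01 if i in (3, 7) else 10.0  # flat vs. peaked
    p = np.exp(logits - logits.max())
    steps.append(p / p.sum())
forks = fork_positions(steps, frac=0.2)
print(forks)                                 # {3, 7}
```

Only the near-uniform steps come back as forks; an attacker would then concentrate perturbations on those positions rather than on the whole caption.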