How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (2)


Sparse Reward Subsystem in Large Language Models

Intermediate
Guowei Xu, Mert Yuksekgonul et al. · Feb 1 · arXiv

The paper identifies a tiny, specialized group of neurons inside large language models (LLMs) that behaves like the reward system in the human brain.

#value neurons · #dopamine neurons · #reward prediction error
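The dopamine analogy comes from the reward prediction error (RPE) of temporal-difference learning: dopamine neurons fire in proportion to how much better or worse an outcome was than predicted. A minimal sketch of that quantity (function name and values are illustrative, not taken from the paper):

```python
def reward_prediction_error(reward, value_next, value_current, gamma=0.99):
    """Temporal-difference error: actual outcome minus the prior expectation.

    reward:        reward received at this step
    value_next:    predicted value of the next state
    value_current: predicted value of the current state
    gamma:         discount factor for future value
    """
    return reward + gamma * value_next - value_current

# A fully unexpected reward (nothing predicted, reward arrives) gives a
# positive error; a predicted reward that fails to arrive gives a negative one.
surprise = reward_prediction_error(reward=1.0, value_next=0.0, value_current=0.0)   # → 1.0
letdown = reward_prediction_error(reward=0.0, value_next=0.0, value_current=1.0)    # → -1.0
```

The paper's claim, as summarized above, is that a small set of LLM neurons carries a signal with this better-or-worse-than-expected character.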

Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models

Intermediate
Mengqi He, Xinyu Tian et al. · Dec 26 · arXiv

The paper shows that when vision-language models write captions, only a small set of high-uncertainty tokens (about 20%) act like forks that steer the rest of the sentence.

#vision-language models · #autoregressive generation · #entropy
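The "uncertain words" here are tokens whose next-token probability distribution has high Shannon entropy. A minimal sketch of that measure (the example distributions are illustrative, not from the paper):

```python
import math

def token_entropy(probs):
    """Shannon entropy, in bits, of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A near-certain token carries little entropy; an uncertain "fork" token
# spreads probability across several plausible continuations.
confident = token_entropy([0.97, 0.01, 0.01, 0.01])
fork = token_entropy([0.25, 0.25, 0.25, 0.25])  # uniform over 4 options → 2.0 bits
```

Ranking generated tokens by this entropy is one way to locate the small fraction of fork points that the attack targets.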