๐ŸŽ“How I Study AIHISA
๐Ÿ“–Read
๐Ÿ“„Papers๐Ÿ“ฐBlogs๐ŸŽฌCourses
๐Ÿ’กLearn
๐Ÿ›ค๏ธPaths๐Ÿ“šTopics๐Ÿ’กConcepts๐ŸŽดShorts
๐ŸŽฏPractice
๐Ÿ“Daily Log๐ŸŽฏPrompts๐Ÿง Review
SearchSettings
How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (3)


MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head

Intermediate
Kewei Zhang, Ye Huang et al. · Jan 12 · arXiv

Transformers are powerful but slow because standard self-attention compares every token with every other token, a cost that grows quadratically with sequence length; a toy sketch contrasting this with linear attention follows below.

#Multi-Head Linear Attention · #Linear Attention · #Self-Attention
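The contrast the summary describes is easier to see in code. Below is a minimal NumPy sketch, not the MHLA method itself: standard softmax self-attention materialises an n × n score matrix, while a generic kernelized linear attention reorders the computation around a small d × d summary so cost grows linearly in n. The feature map `phi` is an illustrative assumption, not the paper's choice.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard self-attention: builds an (n x n) score matrix,
    so compute and memory grow quadratically with sequence length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                  # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                       # (n, d)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Generic kernelized linear attention: a positive feature map phi
    replaces the softmax, so keys and values are summarised once in a
    small (d x d) matrix and cost grows linearly with n."""
    KV = phi(K).T @ V                                        # (d, d), no n x n matrix
    Z = phi(K).sum(axis=0)                                   # (d,)
    return (phi(Q) @ KV) / (phi(Q) @ Z)[:, None]             # (n, d)

rng = np.random.default_rng(0)
n, d = 6, 4                                                  # toy sizes
Q, K, V = rng.normal(size=(3, n, d))
print(softmax_attention(Q, K, V).shape)                      # (6, 4)
print(linear_attention(Q, K, V).shape)                       # (6, 4)
```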

Recurrent Neural Networks (RNNs): A gentle Introduction and Overview

Beginner
Robin M. Schmidt · Nov 23 · arXiv

Recurrent Neural Networks (RNNs) are special neural networks that learn from sequences, like sentences or time series, by remembering what came before; a minimal sketch of this recurrence follows below.

#Recurrent Neural Network · #Backpropagation Through Time · #Truncated BPTT
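For the recurrence the overview covers, here is a minimal NumPy sketch of a vanilla RNN forward pass; sizes and names are illustrative, and training via backpropagation through time is not shown.

```python
import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, b_h):
    """Vanilla RNN: the hidden state h is updated at every step and
    carries a summary of everything seen so far, which is how the
    network 'remembers what came before'."""
    h = np.zeros(W_hh.shape[0])
    hidden_states = []
    for x_t in x_seq:                              # one step per sequence element
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        hidden_states.append(h)
    return np.stack(hidden_states)                 # (T, d_hidden)

rng = np.random.default_rng(0)
d_in, d_hidden, T = 3, 5, 7                        # toy sizes
x_seq = rng.normal(size=(T, d_in))
W_xh = rng.normal(size=(d_hidden, d_in))
W_hh = rng.normal(size=(d_hidden, d_hidden))
b_h = np.zeros(d_hidden)
print(rnn_forward(x_seq, W_xh, W_hh, b_h).shape)   # (7, 5)
```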

Attention Is All You Need

Intermediate
Ashish Vaswani, Noam Shazeer et al. · Jun 12 · arXiv

The paper introduces the Transformer, a model that understands and generates sequences (like sentences) using only attention, without RNNs or CNNs; a toy sketch of its multi-head attention follows below.

#Transformer · #Self-Attention · #Multi-Head Attention
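Below is a toy NumPy sketch of the scaled dot-product, multi-head attention the paper is built on. Slicing shared projection matrices into heads is a simplification of the paper's separate per-head projections, and positional encoding, masking, and the feed-forward layers are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, W_q, W_k, W_v, W_o, n_heads):
    """Multi-head attention: project the input into queries, keys and
    values, split them into heads, run scaled dot-product attention in
    each head, then concatenate and project the results."""
    n, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ W_q, X @ W_k, X @ W_v                  # each (n, d_model)
    heads = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)   # (n, n)
        heads.append(softmax(scores) @ V[:, s])          # (n, d_head)
    return np.concatenate(heads, axis=-1) @ W_o          # (n, d_model)

rng = np.random.default_rng(0)
n, d_model, n_heads = 4, 8, 2                            # toy sizes
X = rng.normal(size=(n, d_model))
W_q, W_k, W_v, W_o = rng.normal(size=(4, d_model, d_model))
print(multi_head_attention(X, W_q, W_k, W_v, W_o, n_heads).shape)  # (4, 8)
```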