๐ŸŽ“How I Study AIHISA
๐Ÿ“–Read
๐Ÿ“„Papers๐Ÿ“ฐBlogs๐ŸŽฌCourses
๐Ÿ’กLearn
๐Ÿ›ค๏ธPaths๐Ÿ“šTopics๐Ÿ’กConcepts๐ŸŽดShorts
๐ŸŽฏPractice
๐Ÿ“Daily Log๐ŸŽฏPrompts๐Ÿง Review
SearchSettings
How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (2)


2Mamba2Furious: Linear in Complexity, Competitive in Accuracy

Intermediate
Gabriel Mongaras, Eric C. Larson · Feb 19 · arXiv

The paper studies Mamba-2 (a fast, linear-time alternative to standard attention) and strips it down to the pieces that actually drive accuracy.

#linear attention#Mamba-2#2Mamba
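The core idea behind linear attention is to replace the quadratic softmax score matrix with a kernel feature map, so the sequence can be processed with running sums in O(n) time. A minimal causal sketch in NumPy, assuming the common `elu(x) + 1` feature map (the exact components Mamba-2 keeps are detailed in the paper):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention in O(n) via running sums (illustrative sketch)."""
    # Feature map phi(x) = elu(x) + 1 keeps scores positive; this is a common
    # choice in linear-attention work, not necessarily what Mamba-2 uses.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    n, d = Q.shape
    dv = V.shape[1]
    S = np.zeros((d, dv))   # running sum of outer(k_t, v_t)
    z = np.zeros(d)         # running sum of k_t (for normalization)
    out = np.zeros((n, dv))
    for t in range(n):
        S += np.outer(Kp[t], V[t])
        z += Kp[t]
        out[t] = (Qp[t] @ S) / (Qp[t] @ z + 1e-9)
    return out

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 4))
K = rng.normal(size=(5, 4))
V = rng.normal(size=(5, 3))
O = linear_attention(Q, K, V)
print(O.shape)  # (5, 3)
```

Because only the running sums `S` and `z` are carried between steps, memory is constant in sequence length, which is the property that makes these models fast at long context.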

Group Representational Position Encoding

Intermediate
Yifan Zhang, Zixiang Chen et al. · Dec 8 · arXiv

GRAPE is a new way to tell a Transformer where each word sits in a sentence, using mathematical operations called group actions.

#GRAPE#positional encoding#group actions
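GRAPE's exact construction is in the paper, but a familiar instance of encoding position through a group action is rotating 2-D sub-planes of each vector by a position-dependent angle (as rotary embeddings do): rotations compose, so shifting every position by the same amount composes the same group element, and inner products depend only on relative position. A hedged NumPy sketch of that idea (not GRAPE itself):

```python
import numpy as np

def rotate_pairs(x, pos, base=10000.0):
    """Act on each 2-D sub-plane of x with a rotation whose angle grows with
    position. Rotations form a group: R(m) @ R(n) = R(m + n)."""
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)   # one frequency per plane
    theta = pos * freqs
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Relative-position property: the inner product of two rotated vectors
# depends only on the difference of their positions.
rng = np.random.default_rng(1)
q, k = rng.normal(size=4), rng.normal(size=4)
a = rotate_pairs(q, 3) @ rotate_pairs(k, 7)     # positions differ by 4
b = rotate_pairs(q, 10) @ rotate_pairs(k, 14)   # positions also differ by 4
print(np.isclose(a, b))  # True
```

The "group" framing matters because it guarantees this compositionality by construction, rather than as a happy accident of one particular formula.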