How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (4)

📚 Theory · Advanced

Transformer Expressiveness

Transformer expressiveness studies what kinds of sequence-to-sequence mappings a Transformer can represent or approximate.

#transformer expressiveness · #universal approximation · #self-attention · +12
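To make the question concrete, results in this area are often stated as approximation guarantees of roughly the following shape (a schematic paraphrase on my part, not a formula from the card):

```latex
% Schematic shape of a Transformer universal-approximation statement
% (illustrative; the exact function class, norm, and constants vary by result).
\[
\forall f \in \mathcal{F},\; \forall \varepsilon > 0,\; \exists \text{ a Transformer } g_{\theta}
\quad \text{with} \quad d(f, g_{\theta}) \le \varepsilon,
\]
where $\mathcal{F}$ is a class of continuous sequence-to-sequence functions on a compact
domain and $d(\cdot,\cdot)$ is a chosen distance, e.g.\ an $L^{p}$ norm of the difference.
```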
📚 Theory · Intermediate

Positional Encoding Theory

Self-attention is permutation-equivariant by default (permuting the input tokens simply permutes the outputs), so Transformers need positional encodings to represent word order in sequences.

#positional encoding · #sinusoidal encoding · #transformer · +11
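A minimal NumPy sketch of the sinusoidal encodings from the original Transformer paper; the function name and the assumption that `d_model` is even are mine, not the card's:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model)).
    Assumes d_model is even."""
    positions = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                 # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)   # (seq_len, d_model // 2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                             # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)                             # odd dimensions get cosine
    return pe

# Usage: add to the token embeddings before the first attention layer, e.g.
# x = token_embeddings + sinusoidal_positional_encoding(len(tokens), d_model)
```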
📚 Theory · Advanced

Transformer Theory

Transformers map sequences to sequences using layers of self-attention and feed-forward networks wrapped with residual connections and LayerNorm.

#transformer · #self-attention · #positional encoding · +12
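A minimal single-head, post-LayerNorm sketch of one such block in NumPy; all parameter names, shapes, and the ReLU feed-forward are illustrative assumptions rather than code from the card:

```python
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def softmax(x: np.ndarray) -> np.ndarray:
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def transformer_block(x: np.ndarray, p: dict) -> np.ndarray:
    """One post-LayerNorm block: self-attention and a position-wise FFN,
    each wrapped in a residual connection followed by LayerNorm."""
    q, k, v = x @ p["Wq"], x @ p["Wk"], x @ p["Wv"]
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v                 # single-head self-attention
    x = layer_norm(x + attn)                                           # residual + LayerNorm
    ffn = np.maximum(0.0, x @ p["W1"] + p["b1"]) @ p["W2"] + p["b2"]   # position-wise ReLU FFN
    return layer_norm(x + ffn)                                         # residual + LayerNorm

# Toy usage: a sequence of 5 tokens with model width 8 and FFN width 32.
rng = np.random.default_rng(0)
d, d_ff = 8, 32
shapes = {"Wq": (d, d), "Wk": (d, d), "Wv": (d, d),
          "W1": (d, d_ff), "b1": (d_ff,), "W2": (d_ff, d), "b2": (d,)}
params = {name: 0.1 * rng.normal(size=s) for name, s in shapes.items()}
out = transformer_block(rng.normal(size=(5, d)), params)               # out.shape == (5, 8)
```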
📚 Theory · Intermediate

Attention Mechanism Theory

Attention computes a weighted sum of values V where the weights come from how similar queries Q are to keys K.

#attention · #self-attention · #multi-head attention · +12
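A minimal NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d_k)V; the function name and the toy shapes in the usage line are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query–key similarities, (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)     # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V                               # weighted sum of the values

# Toy usage: 4 queries attending over 6 key/value pairs, d_k = d_v = 8.
rng = np.random.default_rng(0)
out = scaled_dot_product_attention(rng.normal(size=(4, 8)),
                                   rng.normal(size=(6, 8)),
                                   rng.normal(size=(6, 8)))   # out.shape == (4, 8)
```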