๐ŸŽ“How I Study AIHISA
๐Ÿ“–Read
๐Ÿ“„Papers๐Ÿ“ฐBlogs๐ŸŽฌCourses
๐Ÿ’กLearn
๐Ÿ›ค๏ธPaths๐Ÿ“šTopics๐Ÿ’กConcepts๐ŸŽดShorts
๐ŸŽฏPractice
๐Ÿ“Daily Log๐ŸŽฏPrompts๐Ÿง Review
SearchSettings
How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (95)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

๐Ÿ“šTheoryIntermediate

Bellman Equations

Bellman equations express how the value of a state or action equals the immediate reward plus the discounted value of what follows.

#bellman equation · #value iteration · #policy iteration · +12
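
A minimal value-iteration sketch in Python, applying the Bellman optimality backup on a made-up two-state, two-action MDP (the transition probabilities and rewards are illustrative assumptions, not from this site):

```python
import numpy as np

gamma = 0.9
# P[a, s, s'] : probability of moving to s' from s under action a (assumed numbers)
P = np.array([[[0.8, 0.2],
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.3, 0.7]]])
# R[s, a] : immediate reward (assumed numbers)
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(500):
    # Bellman optimality backup:
    # Q(s,a) = R(s,a) + gamma * sum_{s'} P(s'|s,a) * V(s')
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("V* =", V, " greedy policy =", Q.argmax(axis=1))
```
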
๐Ÿ“šTheoryIntermediate

Attention Mechanism Theory

Attention computes a weighted sum of values V where the weights come from how similar queries Q are to keys K.

#attention · #self-attention · #multi-head attention · +12
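
A minimal NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d_k)V; the shapes and random inputs are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = scaled_dot_product_attention(Q, K, V)          # shape (4, 8)
```
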
๐Ÿ“šTheoryIntermediate

Scaling Laws

Scaling laws say that model loss typically follows a power law, improving predictably as you increase parameters, data, or compute.

#scaling laws · #power law · #chinchilla scaling · +12
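
In symbols, two commonly cited forms: the single-variable power law in parameter count N (Kaplan et al., 2020) and the Chinchilla parametric fit in N and training tokens D (Hoffmann et al., 2022):

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```
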
๐Ÿ“šTheoryIntermediate

Universal Approximation Theorem

The Universal Approximation Theorem (UAT) says a feedforward neural network with one hidden layer and a non-polynomial activation (like sigmoid or ReLU) can approximate any continuous function on a compact set as closely as we want.

#universal approximation theorem · #cybenko · #hornik · +12
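
One common formal statement: for any continuous f on a compact set K ⊂ ℝᵈ, any ε > 0, and a fixed non-polynomial activation σ, there exist a width N and parameters cᵢ, wᵢ, bᵢ such that

```latex
\sup_{x \in K}
\left| f(x) - \sum_{i=1}^{N} c_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right|
< \varepsilon
```
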
๐Ÿ“šTheoryIntermediate

Minimax Theorem

The Minimax Theorem states that in zero-sum two-player games with suitable convexity and compactness, the best guaranteed payoff for the maximizer equals the worst-case loss for the minimizer.

#minimax theorem · #zero-sum games · #saddle point · +12
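
In symbols (von Neumann's form): for X and Y compact convex sets and f(x, y) convex in x and concave in y, for example the bilinear payoff f(x, y) = xᵀAy of a matrix game,

```latex
\min_{x \in X} \max_{y \in Y} f(x, y)
\;=\;
\max_{y \in Y} \min_{x \in X} f(x, y)
```
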
๐Ÿ“šTheoryIntermediate

PAC Learning

PAC learning formalizes when a learner can probably (with probability at least 1โˆ’ฮด) and approximately (error at most ฮต) succeed using a polynomial number of samples.

#pac learning · #agnostic learning · #vc dimension · +12
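
A standard sample-complexity bound for a finite hypothesis class H in the realizable case: with probability at least 1 − δ, any learner that outputs a hypothesis consistent with the sample achieves error at most ε once

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```
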
๐Ÿ“šTheoryIntermediate

Bias-Variance Tradeoff

The biasโ€“variance tradeoff explains how prediction error splits into bias squared, variance, and irreducible noise for squared loss.

#bias variance tradeoff · #mse decomposition · #polynomial regression · +12
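
The decomposition in symbols: with y = f(x) + ε, where E[ε] = 0 and Var(ε) = σ², and an estimator f̂ trained on a random sample, the expected squared error at a point x is

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2
+ \mathrm{Var}\big(\hat{f}(x)\big)
+ \sigma^2
```
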
๐Ÿ“šTheoryIntermediate

Game Theory

Game theory studies strategic decision-making among multiple players where each playerโ€™s payoff depends on everyoneโ€™s actions.

#game theory · #nash equilibrium · #mixed strategies · +11
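
A small Python sketch that finds pure-strategy Nash equilibria of a 2×2 bimatrix game by best-response enumeration; the prisoner's-dilemma payoff numbers are an illustrative assumption:

```python
import numpy as np

# A[i, j] = row player's payoff, B[i, j] = column player's payoff
# (actions: 0 = cooperate, 1 = defect; assumed prisoner's-dilemma payoffs).
A = np.array([[-1, -3],
              [ 0, -2]])
B = np.array([[-1,  0],
              [-3, -2]])

# A pure Nash equilibrium is a cell where each player's action is a
# best response to the other player's action.
nash = [(i, j)
        for i in range(2) for j in range(2)
        if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]
print(nash)  # [(1, 1)] — mutual defection
```
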
๐Ÿ“šTheoryIntermediate

Lagrangian Duality

Lagrangian duality turns a constrained minimization problem into a related maximization problem that provides lower bounds on the original objective.

#lagrangian duality · #kkt conditions · #slater condition · +11
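
In symbols, for the primal problem p* = inf { f(x) : gᵢ(x) ≤ 0 }:

```latex
% Lagrangian and dual function
L(x, \lambda) = f(x) + \textstyle\sum_i \lambda_i g_i(x), \qquad
d(\lambda) = \inf_x L(x, \lambda)
% Weak duality: d(\lambda) \le p^{*} for every \lambda \ge 0, so the dual
% problem \max_{\lambda \ge 0} d(\lambda) gives the best such lower bound.
```
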
๐Ÿ“šTheoryIntermediate

Matrix Calculus

Matrix calculus extends ordinary calculus to functions whose inputs and outputs are vectors and matrices, letting you compute gradients, Jacobians, and Hessians systematically.

#matrix calculus · #gradient · #jacobian · +12
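
A quick NumPy check of the identity ∇ₓ(xᵀAx) = (A + Aᵀ)x against central finite differences; the matrix A and point x are random illustrative inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
x = rng.normal(size=4)

f = lambda v: v @ A @ v              # f(x) = x^T A x
grad_analytic = (A + A.T) @ x        # matrix-calculus identity

# Central finite differences as a sanity check
eps = 1e-6
grad_numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(4)
])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-5))  # True
```
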
๐Ÿ“šTheoryIntermediate

Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) factors any m×n matrix A as A = UΣVᵀ, where U and V are orthogonal and Σ is a rectangular diagonal matrix with nonnegative entries.

#singular value decomposition · #svd · #truncated svd · +12
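
A NumPy sketch of the full and truncated SVD; the random matrix is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Truncated SVD: keeping the top-k singular values gives the best rank-k
# approximation in Frobenius norm (Eckart–Young theorem).
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_k))  # equals sqrt(sum of the discarded s_i^2)
```
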
๐Ÿ“šTheoryIntermediate

Convex Optimization

Convex optimization studies minimizing convex functions over convex sets, where every local minimum is guaranteed to be a global minimum.

#convex optimization · #convex function · #convex set · +12
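
A minimal gradient-descent sketch on an assumed positive-definite quadratic, where the unique stationary point is the global minimum:

```python
import numpy as np

# Minimize the convex quadratic f(x) = 1/2 x^T Q x - b^T x with Q ≻ 0;
# every local minimum is global, so plain gradient descent finds it.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite (assumed numbers)
b = np.array([1.0, -1.0])

x = np.zeros(2)
lr = 0.1                     # step size below 2 / lambda_max(Q) converges
for _ in range(1000):
    x -= lr * (Q @ x - b)    # gradient of f is Qx - b

print(x, np.linalg.solve(Q, b))  # GD iterate ≈ the unique global minimizer
```
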