How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (7)

Groups

Linear Algebra (15), Calculus & Differentiation (10), Optimization (14), Probability Theory (12), Statistics for ML (9), Information Theory (10), Convex Optimization (7), Numerical Methods (6), Graph Theory for Deep Learning (6), Topology for ML (5), Differential Geometry (6), Measure Theory & Functional Analysis (6), Random Matrix Theory (5), Fourier Analysis & Signal Processing (9), Sampling & Monte Carlo Methods (10), Deep Learning Theory (12), Regularization Theory (11), Attention & Transformer Theory (10), Generative Model Theory (11), Representation Learning (10), Reinforcement Learning Mathematics (9), Variational Methods (8), Loss Functions & Objectives (10), Sequence & Temporal Models (8), Geometric Deep Learning (8)

∑Math · Intermediate

Pseudoinverse (Moore-Penrose)

The Moore–Penrose pseudoinverse generalizes matrix inversion to rectangular or singular matrices and is denoted A⁺.

#pseudoinverse #moore-penrose #least squares +12
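A minimal NumPy sketch of the idea (illustrative, not part of the original card): for a tall full-rank matrix, A⁺b is the least-squares solution, and it matches NumPy's dedicated solver.

```python
import numpy as np

# Tall system: 3 equations, 2 unknowns (no exact solution in general).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

A_pinv = np.linalg.pinv(A)      # Moore-Penrose pseudoinverse A+
x = A_pinv @ b                  # least-squares solution argmin ||Ax - b||_2

# Same answer as the dedicated least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_lstsq))  # True

# One of the defining Penrose conditions: A A+ A = A.
print(np.allclose(A @ A_pinv @ A, A))  # True
```

For square invertible matrices A⁺ coincides with A⁻¹; the pseudoinverse extends the same object to shapes where no inverse exists.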
📚Theory · Intermediate

Implicit Bias of Gradient Descent

In underdetermined linear systems (more variables than equations), gradient descent started at zero converges to the minimum Euclidean norm solution without any explicit regularizer.

#implicit bias #least squares #gradient descent #minimum norm +12
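A quick NumPy check of this claim (illustrative, not part of the original card), using a small hand-picked underdetermined system:

```python
import numpy as np

# Underdetermined system: 2 equations, 5 unknowns (infinitely many solutions).
A = np.array([[1.0, 0.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])

x = np.zeros(5)                      # crucial: initialize at the origin
for _ in range(500):                 # plain GD on 0.5 * ||Ax - b||^2
    x -= 0.1 * A.T @ (A @ x - b)     # no regularizer anywhere

# Every gradient step lies in the row space of A, so starting from zero
# GD converges to the minimum Euclidean norm solution x* = A+ b.
x_min_norm = np.linalg.pinv(A) @ b
print(np.allclose(x, x_min_norm))    # True
```

Starting away from zero shifts the limit: GD then converges to the interpolating solution closest to the initialization, which is the geometric content of the implicit bias.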
∑Math · Advanced

Hilbert Spaces

A Hilbert space is an inner product space that is complete, meaning Cauchy sequences converge to points inside the space.

#hilbert space #inner product #l2 space +12
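A small numerical aside (illustrative, not part of the original card): ℝⁿ with the dot product is a finite-dimensional Hilbert space (completeness is automatic in finite dimensions). The parallelogram law holds exactly when a norm comes from an inner product, so it separates the l2 norm from, say, the l1 norm:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# Parallelogram law: ||u+v||^2 + ||u-v||^2 = 2||u||^2 + 2||v||^2.
# It holds iff the norm is induced by an inner product.
lhs = np.linalg.norm(u + v)**2 + np.linalg.norm(u - v)**2
rhs = 2 * np.linalg.norm(u)**2 + 2 * np.linalg.norm(v)**2
print(np.isclose(lhs, rhs))      # True: l2 is an inner-product norm

# The l1 norm violates the law, so no inner product can induce it.
lhs1 = np.linalg.norm(u + v, 1)**2 + np.linalg.norm(u - v, 1)**2
rhs1 = 2 * np.linalg.norm(u, 1)**2 + 2 * np.linalg.norm(v, 1)**2
print(np.isclose(lhs1, rhs1))    # False: 8 vs 4
```

Infinite-dimensional examples like L² require completeness as a genuine extra hypothesis, which is what the card's definition is pointing at.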
โš™๏ธAlgorithmIntermediate

Matrix Factorizations (Numerical)

Matrix factorizations rewrite a matrix into simpler building blocks (triangular or orthogonal) that make solving and analyzing linear systems much easier.

#lu decomposition #qr factorization #householder reflections +12
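A minimal NumPy sketch of this workflow (illustrative, not part of the original card): factor A = QR with Q orthogonal and R upper triangular (NumPy's `qr` is backed by Householder-based LAPACK routines), then solve the system with a simple hand-written back substitution:

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# QR factorization: A = Q R, Q orthogonal, R upper triangular.
Q, R = np.linalg.qr(A)

# Solving A x = b reduces to the triangular system R x = Q^T b.
def back_substitution(R, y):
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # walk rows bottom-up
        x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

x = back_substitution(R, Q.T @ b)
print(np.allclose(A @ x, b))                  # True
print(np.allclose(Q.T @ Q, np.eye(2)))        # True: Q is orthogonal
```

LU with partial pivoting plays the same role for general square solves, trading the orthogonal factor for two cheaper triangular ones.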
∑Math · Intermediate

Convex Optimization Problems

A convex optimization problem minimizes a convex function over a convex set, guaranteeing that every local minimum is a global minimum.

#convex optimization #gradient descent #projected gradient +12
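A minimal projected-gradient sketch (illustrative, not part of the original card): minimize the convex quadratic ||x − c||² over the convex unit ball. The optimum found is the global one, as the card's guarantee promises.

```python
import numpy as np

# Minimize f(x) = ||x - c||^2 over C = {x : ||x||_2 <= 1}.
c = np.array([2.0, 0.0])

def project_unit_ball(x):
    """Euclidean projection onto the unit ball: rescale if outside."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(2)
for _ in range(200):
    grad = 2 * (x - c)                        # gradient of f
    x = project_unit_ball(x - 0.1 * grad)     # gradient step, then project

# The global optimum is the projection of c onto the ball: [1, 0].
print(np.allclose(x, [1.0, 0.0], atol=1e-6))  # True
```

With a nonconvex objective or constraint set, the same iteration can stall at a point that is only locally optimal; convexity is what rules that out.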
∑Math · Intermediate

Matrix Calculus Fundamentals

Matrix calculus extends single-variable derivatives to matrices so we can differentiate functions built from matrix multiplications, traces, and norms.

#matrix calculus #frobenius norm #trace trick +12
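A quick sanity check of a standard matrix-calculus identity (illustrative, not part of the original card): the gradient of f(X) = ||AX − B||²_F is 2Aᵀ(AX − B), verified against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2))
X = rng.standard_normal((2, 2))

f = lambda X: np.linalg.norm(A @ X - B, 'fro')**2

# Matrix-calculus result: d/dX ||A X - B||_F^2 = 2 A^T (A X - B).
grad_analytic = 2 * A.T @ (A @ X - B)

# Numerical check, entry by entry, with central differences.
eps = 1e-6
grad_numeric = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad_numeric[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))  # True
```

Finite-difference checks like this are the standard way to debug hand-derived matrix gradients before trusting them in an optimizer.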
📚Theory · Intermediate

Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) factors any m×n matrix A into A = UΣV^{T}, where U and V are orthogonal and Σ is an m×n rectangular diagonal matrix whose nonnegative diagonal entries are the singular values.

#singular value decomposition #svd #truncated svd +12
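A minimal NumPy sketch (illustrative, not part of the original card): the thin SVD reconstructs A exactly, and truncating to rank 1 gives the best rank-1 approximation in Frobenius norm (Eckart–Young), with error equal to the norm of the dropped singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Thin SVD: A = U diag(s) V^T with s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))       # True: exact reconstruction

# Truncated (rank-1) SVD: keep only the leading singular triplet.
A1 = s[0] * np.outer(U[:, 0], Vt[0])

# Eckart-Young: the Frobenius error of the best rank-k approximation
# equals the l2 norm of the discarded singular values.
err = np.linalg.norm(A - A1, 'fro')
print(np.isclose(err, np.linalg.norm(s[1:])))    # True
```

Truncated SVD in this sense underlies PCA, low-rank compression, and spectral initialization schemes.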