๐ŸŽ“How I Study AIHISA
๐Ÿ“–Read
๐Ÿ“„Papers๐Ÿ“ฐBlogs๐ŸŽฌCourses
๐Ÿ’กLearn
๐Ÿ›ค๏ธPaths๐Ÿ“šTopics๐Ÿ’กConcepts๐ŸŽดShorts
๐ŸŽฏPractice
๐Ÿ“Daily Log๐ŸŽฏPrompts๐Ÿง Review
SearchSettings
How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (46)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

โš™๏ธAlgorithmAdvanced

Wake-Sleep Algorithm

The Wakeโ€“Sleep algorithm trains a pair of models: a generative model that explains how data are produced and a recognition model that guesses hidden causes from observed data.

#wake-sleep · #helmholtz machine · #generative model (+12)
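As a minimal sketch of the two phases, assuming a toy linear version (one recognition matrix `R`, one generative matrix `G`; all names, sizes, and learning rates here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "Helmholtz machine" (sizes and names are assumptions).
# Recognition model: h = R @ x.  Generative model: x ≈ G @ h.
d_x, d_h, lr = 4, 2, 0.01
R = rng.normal(0, 0.1, (d_h, d_x))             # recognition weights
G = rng.normal(0, 0.1, (d_x, d_h))             # generative weights
W_true = rng.normal(size=(d_x, d_h))           # unknown data-generating map
X = (W_true @ rng.normal(size=(d_h, 256))).T   # observed data, shape (256, 4)

def recon_error():
    return float(np.mean((X - (G @ (R @ X.T)).T) ** 2))

err0 = recon_error()
for _ in range(2000):
    x = X[rng.integers(len(X))]
    # Wake phase: infer latents with the recognition model,
    # then improve the generative model's reconstruction of x.
    h = R @ x
    G += lr * np.outer(x - G @ h, h)
    # Sleep phase: dream (h, x) from the generative model,
    # then improve the recognition model's recovery of h.
    h_d = rng.normal(size=d_h)
    x_d = G @ h_d
    R += lr * np.outer(h_d - R @ x_d, x_d)
err1 = recon_error()
```

Each phase is a simple delta-rule step: wake trains the generative side on inferred latents, sleep trains the recognition side on dreamed data.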
โš™๏ธAlgorithmAdvanced

Stochastic Variational Inference

Stochastic Variational Inference (SVI) scales variational inference to large datasets by taking noisy but unbiased gradient steps using minibatches.

#stochastic variational inference · #elbo · #variational inference (+12)
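A sketch under strong simplifying assumptions (a conjugate Gaussian-mean model whose optimal natural parameters are known in closed form), showing the N/B rescaling that keeps the minibatch estimate unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model (an assumption for illustration): x_i ~ N(mu, 1),
# prior mu ~ N(0, 1).  Exact posterior: N(sum(x)/(N+1), 1/(N+1)).
N, B = 10_000, 100
data = rng.normal(3.0, 1.0, N)

# Natural parameters of q(mu) = N(m, s2): eta = (m/s2, -1/(2*s2)).
eta = np.array([0.0, -0.5])
for t in range(1, 501):
    rho = (t + 10) ** -0.7                    # Robbins-Monro step size
    batch = rng.choice(data, B, replace=False)
    # Noisy but unbiased target: prior parameters plus the minibatch
    # sufficient statistics rescaled by N/B.
    eta_hat = np.array([(N / B) * batch.sum(), -0.5 - (N / B) * (B / 2)])
    eta = (1 - rho) * eta + rho * eta_hat     # stochastic natural-gradient step

m = -eta[0] / (2 * eta[1])                    # recover the posterior mean
```

With a decaying step size the iterates average out the minibatch noise and `m` settles near the exact posterior mean.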
โš™๏ธAlgorithmAdvanced

Langevin Dynamics & Score-Based Sampling

Langevin dynamics is a noisy gradient-ascent method that moves particles toward high probability regions while adding Gaussian noise to ensure proper exploration.

#langevin dynamics · #mala · #ula (+12)
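The unadjusted Langevin algorithm (ULA) update fits in a few lines; the standard 2-D Gaussian target here is a stand-in assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    # Score of the assumed target, a standard Gaussian: grad log p(x) = -x.
    return -x

# ULA: gradient step toward high probability plus Gaussian exploration noise.
eps = 0.05
x = np.zeros(2)
samples = []
for t in range(20_000):
    x = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * rng.normal(size=2)
    if t > 2_000:                  # discard burn-in
        samples.append(x.copy())
samples = np.array(samples)
```

Adding a Metropolis accept/reject step on top of this proposal gives MALA, which removes the discretization bias.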
โš™๏ธAlgorithmAdvanced

Hamiltonian Monte Carlo (HMC)

Hamiltonian Monte Carlo (HMC) uses gradients of the log-density to propose long-distance moves that still land in high-probability regions.

#hamiltonian monte carlo · #hmc · #mcmc (+11)
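A minimal sketch, assuming a standard Gaussian target and fixed leapfrog settings (both are toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def U(q):      return 0.5 * q @ q    # potential energy = -log p(q)
def gradU(q):  return q

def hmc_step(q, eps=0.1, L=20):
    p = rng.normal(size=q.shape)     # resample momentum
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of Hamiltonian dynamics.
    p_new -= 0.5 * eps * gradU(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * gradU(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * gradU(q_new)
    # Metropolis accept/reject on the change in total energy.
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.zeros(2)
samples = []
for t in range(5_000):
    q = hmc_step(q)
    if t > 500:
        samples.append(q.copy())
samples = np.array(samples)
```

Because leapfrog nearly conserves energy, long trajectories are accepted with high probability, which is what buys the long-distance moves.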
โš™๏ธAlgorithmAdvanced

Natural Gradient Method

Natural gradient preconditions the ordinary gradient with the inverse Fisher information matrix to account for the geometry of probability distributions.

#natural gradient · #fisher information · #empirical fisher (+12)
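For a one-parameter Bernoulli model the Fisher information is a scalar, so the idea fits in a few lines (the 0.9 target mean is a toy assumption):

```python
import math

# Fit p = sigmoid(theta) to data with mean 0.9 (illustrative target).
target = 0.9
sigmoid = lambda t: 1 / (1 + math.exp(-t))

theta = 0.0
for _ in range(100):
    p = sigmoid(theta)
    grad = p - target           # gradient of the NLL in logit form
    fisher = p * (1 - p)        # Fisher information of theta
    theta -= grad / fisher      # natural-gradient step: F^{-1} * grad
p = sigmoid(theta)
```

For exponential families in natural parameters the Fisher matrix equals the NLL Hessian, so this natural-gradient step coincides with a Newton step and converges in a handful of iterations.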
โš™๏ธAlgorithmAdvanced

Interior Point Methods

Interior point methods solve constrained optimization by replacing hard constraints with a smooth barrier that becomes infinite at the boundary, keeping iterates strictly inside the feasible region.

#interior point method · #logarithmic barrier · #central path (+12)
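A sketch of the log-barrier idea on a hypothetical one-dimensional problem, following the central path as the barrier weight shrinks:

```python
# Toy problem (an assumption): minimize f(x) = (x - 3)^2 subject to x <= 1.
# Barrier subproblem: minimize (x - 3)^2 - mu * log(1 - x) over x < 1.

def barrier_newton(x, mu, iters=50):
    for _ in range(iters):
        g = 2 * (x - 3) + mu / (1 - x)       # gradient of objective + barrier
        h = 2 + mu / (1 - x) ** 2            # Hessian (positive here)
        step = g / h
        while x - step >= 1:                 # damp to stay strictly feasible
            step /= 2
        x -= step
    return x

x, mu = 0.0, 1.0
while mu > 1e-8:                             # follow the central path as mu -> 0
    x = barrier_newton(x, mu)
    mu *= 0.1
```

Each barrier subproblem is warm-started from the previous central-path point; as `mu` shrinks, the iterates approach the constrained optimum `x = 1` from strictly inside the feasible region.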
โš™๏ธAlgorithmAdvanced

ADMM (Alternating Direction Method of Multipliers)

ADMM splits a hard optimization problem into two easier subproblems that communicate through simple averaging-like steps.

#admm · #alternating direction method of multipliers · #augmented lagrangian (+11)
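The splitting can be sketched on a toy scalar lasso-style problem (all values chosen for illustration): minimize ½(x − b)² + λ|z| subject to x = z.

```python
def soft(v, k):
    # Soft-thresholding: the proximal operator of k * |.|
    return (abs(v) - k) * (1 if v > 0 else -1) if abs(v) > k else 0.0

b, lam, rho = 3.0, 1.0, 1.0
x = z = u = 0.0                               # u is the scaled dual variable
for _ in range(200):
    x = (b + rho * (z - u)) / (1 + rho)       # x-update: quadratic subproblem
    z = soft(x + u, lam / rho)                # z-update: prox of the l1 term
    u += x - z                                # dual update: average the residual
```

Each subproblem is easy on its own (a quadratic solve and a soft-threshold); the dual variable `u` is the "averaging-like" message that forces `x` and `z` to agree. Here both converge to the soft-thresholded value 2.0.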
โš™๏ธAlgorithmAdvanced

Newton's Method & Second-Order Optimization

Newton's method uses both the gradient and the Hessian to take steps that aim directly at the local optimum by fitting a quadratic model of the loss around the current point.

#newton's method · #second-order optimization · #hessian (+12)
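A minimal sketch on a one-dimensional function with a known minimum (the function is a toy choice):

```python
def newton_minimize(x0, iters=8):
    # Minimize f(x) = x - log(x) on x > 0; unique minimum at x = 1.
    x = x0
    for _ in range(iters):
        g = 1 - 1 / x          # f'(x)
        h = 1 / x ** 2         # f''(x), positive on (0, 1]
        x -= g / h             # jump to the minimizer of the local quadratic
    return x

x_star = newton_minimize(0.5)
```

The error obeys 1 − x_new = (1 − x)², the quadratic convergence that makes second-order methods so fast near the optimum: eight iterations from 0.5 reach machine precision.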
โš™๏ธAlgorithmAdvanced

3D Geometry Basics

3D geometry relies on a small toolkit: vectors, dot products, cross products, and planes; mastering these unlocks most 3D problem-solving.

#3d geometry · #dot product · #cross product (+12)
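The toolkit in a few numpy lines (the points and plane are chosen arbitrarily for illustration):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

dot = a @ b                  # 0.0: the vectors are orthogonal
n = np.cross(a, b)           # [0, 0, 1]: perpendicular to both a and b
area = np.linalg.norm(n)     # area of the parallelogram spanned by a, b

# Plane through p0 with normal n; signed point-plane distance of q.
p0 = np.array([0.0, 0.0, 2.0])
q = np.array([5.0, -3.0, 7.0])
dist = n @ (q - p0) / np.linalg.norm(n)   # 5.0
```

Dot products measure alignment, cross products build normals, and the normal-form plane equation turns point-plane distance into a single dot product.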
โš™๏ธAlgorithmAdvanced

Directed MST (Edmonds/Chu-Liu Algorithm)

A directed minimum spanning arborescence (MSA) is a minimum-cost set of edges that makes every vertex reachable from a chosen root with exactly one incoming edge per non-root vertex.

#directed mst · #edmonds algorithm · #chu-liu (+11)
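A compact sketch of the iterative Chu-Liu/Edmonds procedure (contract-and-reweight version, O(VE)); it returns only the total weight, and the variable names are mine:

```python
def msa_weight(n, root, edges):
    """Total weight of a minimum spanning arborescence rooted at `root`.
    edges: list of directed (u, v, w). Returns None if some vertex is
    unreachable from the root."""
    INF = float("inf")
    total = 0
    while True:
        # 1. Pick the cheapest incoming edge for every non-root vertex.
        min_in, pre = [INF] * n, [-1] * n
        for u, v, w in edges:
            if u != v and w < min_in[v]:
                min_in[v], pre[v] = w, u
        if any(v != root and min_in[v] == INF for v in range(n)):
            return None
        total += sum(min_in[v] for v in range(n) if v != root)
        # 2. Look for cycles among the chosen edges.
        cid, vis, comp = [-1] * n, [-1] * n, 0
        for s in range(n):
            v = s
            while v != root and vis[v] == -1 and cid[v] == -1:
                vis[v] = s
                v = pre[v]
            if v != root and cid[v] == -1 and vis[v] == s:
                u = pre[v]                   # walk the cycle, label it
                cid[v] = comp
                while u != v:
                    cid[u] = comp
                    u = pre[u]
                comp += 1
        if comp == 0:
            return total                     # no cycle: arborescence found
        # 3. Contract each cycle to a super-vertex; entering an old vertex v
        #    now costs w - min_in[v] on top of the already-counted min_in[v].
        for v in range(n):
            if cid[v] == -1:
                cid[v] = comp
                comp += 1
        edges = [(cid[u], cid[v], w - min_in[v])
                 for u, v, w in edges if cid[u] != cid[v]]
        n, root = comp, cid[root]
```

The reweighting is the key step: it charges each edge only for how much worse it is than the already-counted cheapest incoming edge, so solving the contracted problem completes the total correctly.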
โš™๏ธAlgorithmAdvanced

DP on Broken Profile - Plug DP

Plug DP (DP on broken profile with plugs) sweeps a grid cell by cell while remembering how partial path segments cross the frontier as labeled โ€œplugs.โ€

#plug dp · #broken profile · #hamiltonian path (+12)
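A sketch of the simpler unlabeled broken-profile variant, counting domino tilings of a grid; full plug DP for path problems uses the same cell-by-cell frontier, just with labeled rather than boolean plugs:

```python
from functools import lru_cache

def count_tilings(n, m):
    """Number of domino tilings of an n x m grid via broken-profile DP.
    For rows < row the mask bit means "a horizontal domino protrudes into
    the next column"; for rows >= row it means "this cell is occupied"."""
    if (n * m) % 2:
        return 0

    @lru_cache(maxsize=None)
    def go(col, row, mask):
        if col == m:
            return 1 if mask == 0 else 0     # nothing may protrude past the end
        if row == n:
            return go(col + 1, 0, mask)      # frontier moves to the next column
        if mask & (1 << row):                # cell already covered from the left
            return go(col, row + 1, mask & ~(1 << row))
        ways = 0
        if col + 1 < m:                      # horizontal domino -> protruding plug
            ways += go(col, row + 1, mask | (1 << row))
        if row + 1 < n and not mask & (1 << (row + 1)):
            ways += go(col, row + 2, mask)   # vertical domino covers two cells
        return ways

    return go(0, 0, 0)
```

The state is (column, row, frontier mask), so the table has O(n · m · 2^n) entries even though a whole-profile formulation would be clumsier to transition.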
โš™๏ธAlgorithmAdvanced

Matrix Exponentiation - Advanced

Matrix exponentiation turns repeated linear transitions into fast O(n³ log k) computation using exponentiation by squaring.

#matrix exponentiation · #adjacency matrix · #walk counting (+12)
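A sketch using the classic Fibonacci transition matrix (the modulus is an arbitrary choice); the same `mat_pow` applied to an adjacency matrix counts length-k walks:

```python
def mat_mult(A, B, mod):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

def mat_pow(M, k, mod):
    # Exponentiation by squaring: O(log k) products of n x n matrices,
    # O(n^3 log k) arithmetic overall.
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]   # identity
    while k:
        if k & 1:
            R = mat_mult(R, M, mod)
        M = mat_mult(M, M, mod)
        k >>= 1
    return R

def fib(k, mod=10**9 + 7):
    # [[1, 1], [1, 0]]^k holds F(k) in its off-diagonal entries.
    return mat_pow([[1, 1], [1, 0]], k, mod)[0][1]
```

This computes the k-th term of any fixed linear recurrence in O(log k) matrix products instead of k scalar steps.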