
Concepts (57)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

📚 Theory · Advanced

Weisfeiler-Leman Hierarchy

The Weisfeiler–Leman (WL) hierarchy is a family of color-refinement procedures that iteratively color vertices (or k-tuples of vertices) to capture graph structure for isomorphism testing.

#weisfeiler-leman #color refinement #graph isomorphism +12
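
To make the refinement step concrete, here is a minimal pure-Python sketch of 1-WL color refinement; the function name `wl_colors` and the toy graphs are illustrative, not from any particular library.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL (color refinement): repeatedly hash each vertex's color together
    with the multiset of its neighbors' colors, then compact the labels."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Relabel each distinct signature with a fresh compact color id.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())   # color histogram = the 1-WL summary

# Classic failure case: a 6-cycle vs. two disjoint triangles. Both are
# 2-regular, so 1-WL yields identical histograms even though the graphs are
# non-isomorphic; separating them needs higher levels (k-WL, k >= 2).
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colors(c6) == wl_colors(two_triangles))   # True
```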
📚 Theory · Advanced

E(n) Equivariant Neural Networks

E(n)-equivariant neural networks are models whose outputs transform predictably when inputs are rotated, translated, or reflected in n-dimensional Euclidean space.

#e(n)-equivariance #euclidean group #so(n) and o(n) +12
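
A small NumPy sketch of the idea, loosely in the spirit of EGNN-type layers: messages use only the invariant squared distances, and coordinates move along difference vectors, so the layer commutes with rotations, reflections, and translations. The function `egnn_layer` and its weighting are toy choices, not a published implementation.

```python
import numpy as np

def egnn_layer(x, w=0.1):
    """Toy E(n)-equivariant coordinate update: invariant scalars gate
    equivariant direction vectors x_i - x_j."""
    n = len(x)
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)              # invariant under E(n)
            x_new[i] += w * np.exp(-d2) * (x[i] - x[j])  # equivariant direction
    return x_new

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random rotation/reflection
t = rng.normal(size=3)                         # random translation

# Transforming inputs then applying the layer equals applying the layer
# then transforming outputs: the layer is E(3)-equivariant.
print(np.allclose(egnn_layer(x @ R.T + t), egnn_layer(x) @ R.T + t))  # True
```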
📚 Theory · Advanced

Gauge Equivariant Networks

Gauge equivariant networks are neural networks that respect local symmetries (gauges) on manifolds, such as how vectors rotate when you change the local reference frame on a surface.

#gauge equivariant networks #geometric deep learning #manifold learning +12
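
Actual gauge-equivariant networks take real machinery to build; the tiny numerical illustration below only shows the underlying symmetry. Rotating the local reference frame (a gauge transformation) changes a tangent feature's components, while gauge-invariant quantities such as its norm stay fixed.

```python
import numpy as np

def rot(theta):
    """2-D rotation: a change of local tangent frame (gauge transformation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A tangent feature expressed in some local frame on the surface.
v = np.array([1.0, 2.0])

theta = 0.7                 # rotate the reference frame by theta
v_new = rot(-theta) @ v     # components transform with the inverse rotation

print(v, v_new)                                  # raw components are frame-dependent
print(np.linalg.norm(v), np.linalg.norm(v_new))  # the norm is gauge-invariant
```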
📚 Theory · Advanced

Mamba & Selective State Spaces

Mamba uses a state-space model whose parameters are selected (gated) by the current input token, letting the model adapt its memory dynamics at each step.

#mamba #selective state space #ssm +12
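
A heavily simplified NumPy caricature of the selection mechanism: unlike a classical time-invariant SSM, the retention gate below is a function of the current input, so the model decides per step what to keep and what to overwrite. Real Mamba discretizes continuous-time parameters and uses a hardware-aware parallel scan; all names here (`selective_ssm`, `Wa`, `Wb`, `Wc`) are illustrative.

```python
import numpy as np

def selective_ssm(xs, Wa, Wb, Wc):
    """Toy selective state-space recurrence with input-dependent gating."""
    h = np.zeros(Wa.shape[0])
    ys = []
    for x in xs:
        a = 1.0 / (1.0 + np.exp(-(Wa @ x)))       # per-step retention in (0, 1)
        h = a * h + (1.0 - a) * np.tanh(Wb @ x)   # selectively keep vs. overwrite
        ys.append(Wc @ h)                         # linear readout
    return np.array(ys)

rng = np.random.default_rng(0)
d_state, d_in = 8, 4
xs = rng.normal(size=(10, d_in))
ys = selective_ssm(xs,
                   rng.normal(size=(d_state, d_in)),
                   rng.normal(size=(d_state, d_in)),
                   rng.normal(size=(2, d_state)))
print(ys.shape)  # (10, 2): one output per input step
```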
📚 Theory · Advanced

CTC Loss (Connectionist Temporal Classification)

CTC loss trains sequence models when you do not know the alignment between inputs (frames) and outputs (labels).

#ctc loss #connectionist temporal classification #forward backward +12
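
A minimal NumPy version of the CTC forward (alpha) recursion for a tiny vocabulary; production implementations work in log space for numerical stability, and the name `ctc_forward` is illustrative.

```python
import numpy as np

def ctc_forward(probs, target, blank=0):
    """Sum the probability of all alignments of `target` (with blanks
    interleaved) to the T frames, via dynamic programming."""
    ext = [blank]
    for c in target:            # extended label: blank between/around symbols
        ext += [c, blank]
    T, S = len(probs), len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0][blank]      # paths may start with a blank
    alpha[0, 1] = probs[0][ext[1]]     # or with the first label
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                        # stay
            if s > 0:
                a += alpha[t - 1, s - 1]               # advance one slot
            # Skipping a blank is allowed unless the same label repeats.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t][ext[s]]
    return alpha[-1, -1] + alpha[-1, -2]   # end in final label or final blank

# 4 frames, 3 symbols (0 = blank): P("ab") summed over all valid alignments.
probs = np.full((4, 3), 1 / 3)
print(ctc_forward(probs, target=[1, 2]))
```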
📚 Theory · Advanced

Variational Dropout & Bayesian Deep Learning

Dropout can be interpreted as variational inference in a Bayesian neural network, where applying random masks approximates sampling from a posterior over weights.

#bayesian neural networks #variational inference #dropout +12
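
The practical upshot is Monte Carlo dropout: keep dropout active at test time and treat repeated stochastic passes as approximate posterior samples. A minimal sketch with a toy two-layer network (all names and shapes illustrative):

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.5, n_samples=100, seed=0):
    """Average many stochastic forward passes; each random mask acts like a
    sample from an approximate posterior over weights, and the spread of the
    predictions is an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)                           # hidden layer (ReLU)
        mask = rng.binomial(1, 1 - p, size=h.shape) / (1 - p) # inverted dropout
        preds.append((h * mask) @ W2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # predictive mean, uncertainty

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 1))
mean, std = mc_dropout_predict(rng.normal(size=(1, 4)), W1, W2)
print(mean, std)
```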
📚 Theory · Advanced

Normalizing Flow Variational Inference

Normalizing-flow variational inference enriches the variational family by transforming a simple base distribution through a sequence of invertible, differentiable mappings.

#normalizing flows #variational inference #elbo +12
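
A minimal NumPy sketch using a single planar flow step, the simplest example from this family; `planar_flow` is an illustrative name, and the chosen `u`, `w` satisfy the planar flow's invertibility condition (w·u ≥ -1).

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar step f(z) = z + u * tanh(w.z + b) plus its log|det Jacobian|,
    which corrects densities under the change of variables."""
    a = np.tanh(z @ w + b)                 # (n,)
    f = z + np.outer(a, u)                 # transformed samples
    psi = np.outer(1 - a ** 2, w)          # tanh'(w.z + b) * w
    log_det = np.log(np.abs(1 + psi @ u))  # |det J| = |1 + u.psi(z)|
    return f, log_det

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 2))             # samples from the simple base N(0, I)
u, w, b = np.array([1.0, 0.0]), np.array([2.0, 1.0]), 0.0
f, log_det = planar_flow(z, u, w, b)

# log q(f(z)) = log q0(z) - log|det J|: a richer, still-tractable density.
log_q0 = -0.5 * np.sum(z ** 2, axis=1) - np.log(2 * np.pi)
log_q = log_q0 - log_det
print(f.shape, log_q[:3])
```

Stacking several such steps is what gives the variational family its expressiveness while keeping the ELBO computable.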
📚 Theory · Advanced

Manifold Learning

Manifold learning assumes that high-dimensional data actually lies near a much lower-dimensional, smoothly curved surface embedded in the ambient space.

#manifold learning #isomap #locally linear embedding +12
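
A short sketch using scikit-learn's Isomap on the classic Swiss roll, assuming scikit-learn is installed: Isomap approximates geodesic distances by shortest paths on a k-nearest-neighbor graph, then embeds them with classical MDS.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Swiss roll: a 2-D sheet curled into 3-D. Straight-line (Euclidean) distance
# is misleading; distance along the sheet (geodesic) is what matters.
X, _ = make_swiss_roll(n_samples=1500, random_state=0)

emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(X.shape, "->", emb.shape)   # (1500, 3) -> (1500, 2): unrolled coordinates
```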
📚 Theory · Advanced

Neural Collapse

Neural Collapse describes what happens at the end of training: the penultimate-layer features of each class concentrate tightly around their class mean, and those means arrange themselves into a symmetric simplex equiangular tight frame (ETF).

#neural collapse #simplex etf #equiangular tight frame +12
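
A quick numerical check of that geometry: in a simplex ETF, all K class means have equal norm and every pairwise cosine equals -1/(K-1). The construction below is the idealized limit, not a trained network.

```python
import numpy as np

K = 5
M = np.eye(K) - np.ones((K, K)) / K      # columns: ideal collapsed class means
M /= np.linalg.norm(M, axis=0)           # normalize each mean

cos = M.T @ M                            # pairwise cosine similarities
off_diag = cos[~np.eye(K, dtype=bool)]
print(off_diag)                              # every entry is -1/(K-1) = -0.25
print(np.allclose(off_diag, -1 / (K - 1)))   # True
```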
📚 Theory · Advanced

Transfer Learning Theory

Transfer learning theory studies when and why a model trained on a source distribution will work on a different target distribution.

#transfer learning #domain adaptation #hΔh-divergence +12
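
One sketchable quantity from this theory is the proxy A-distance (Ben-David et al.), which estimates domain divergence from how well a classifier can tell source samples from target samples; the synthetic domains and scikit-learn model below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, size=(500, 5))
target = rng.normal(loc=0.5, size=(500, 5))   # a shifted target domain

X = np.vstack([source, target])
d = np.array([0] * 500 + [1] * 500)           # domain labels, not task labels
X_tr, X_te, d_tr, d_te = train_test_split(X, d, test_size=0.3, random_state=0)

# The harder the domains are to distinguish, the lower the divergence.
err = 1.0 - LogisticRegression(max_iter=1000).fit(X_tr, d_tr).score(X_te, d_te)
print("proxy A-distance:", 2 * (1 - 2 * err))   # 0 means indistinguishable domains
```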
📚 Theory · Advanced

Disentangled Representations

Disentangled representations aim to encode independent factors of variation (like shape, size, or color) into separate coordinates of a latent vector.

#disentangled representations #independent factors #total correlation +12
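
A minimal sketch of the total-correlation tag under a Gaussian assumption, where TC has a closed form in terms of the covariance matrix; `gaussian_total_correlation` is an illustrative name, and the Gaussianity assumption is doing real work here.

```python
import numpy as np

def gaussian_total_correlation(z):
    """For (approximately) Gaussian latents,
    TC = KL(q(z) || prod_i q(z_i)) = 0.5 * (sum_i log var_i - log det cov).
    It is zero iff the latent coordinates are independent."""
    cov = np.cov(z, rowvar=False)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

rng = np.random.default_rng(0)
independent = rng.normal(size=(10000, 3))
mixed = independent @ rng.normal(size=(3, 3))   # entangles the factors
print(gaussian_total_correlation(independent))  # ~ 0: disentangled
print(gaussian_total_correlation(mixed))        # > 0: entangled
```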
📚 Theory · Advanced

Energy-Based Models (EBM)

Energy-Based Models (EBMs) define probabilities through an energy landscape: low energy means high probability, with p(x) = exp(-E(x)) / Z.

#energy-based models #partition function #langevin dynamics +12
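
A minimal NumPy sketch of Langevin-dynamics sampling from a toy energy; the ring-shaped E(x) = (||x|| - 2)^2 is an arbitrary illustrative choice. Note the sampler only ever uses the gradient of E, never the intractable partition function Z.

```python
import numpy as np

def energy_grad(x):
    """Gradient of the toy energy E(x) = (||x|| - 2)^2: a ring of low energy."""
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return 2 * (r - 2) * x / np.maximum(r, 1e-8)

def langevin_sample(n=500, steps=200, eps=0.05, seed=0):
    """Langevin dynamics: x <- x - (eps/2) * grad E(x) + sqrt(eps) * noise.
    Approximately samples p(x) = exp(-E(x)) / Z without computing Z."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 2)) * 3.0
    for _ in range(steps):
        x = x - 0.5 * eps * energy_grad(x) + np.sqrt(eps) * rng.normal(size=x.shape)
    return x

samples = langevin_sample()
print(np.linalg.norm(samples, axis=1).mean())   # radii concentrate near 2
```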