How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (95)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

📚 Theory · Intermediate

Focal Loss

Focal Loss reshapes cross-entropy so that hard, misclassified examples get more focus while easy, well-classified ones are down-weighted.

#focal loss · #class imbalance · #cross-entropy · +11
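A minimal NumPy sketch of the idea, assuming the standard binary form FL(p_t) = -(1 - p_t)^γ log(p_t); the function and variable names are illustrative, not from the card:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-12):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma."""
    p_t = np.where(y == 1, p, 1.0 - p)           # probability of the true class
    return -((1.0 - p_t) ** gamma) * np.log(p_t + eps)

p = np.array([0.95, 0.60, 0.10])   # predicted P(y = 1)
y = np.array([1, 1, 1])            # all three are positives
print(focal_loss(p, y))            # the easy example (0.95) contributes almost nothing
```

With gamma = 0 this reduces to plain cross-entropy; larger gamma shifts more of the total loss onto the hard, misclassified examples.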
📚 Theory · Intermediate

Mean Field Variational Family

Mean field variational family assumes the joint posterior over latent variables factorizes into independent pieces q(z) = ∏ q_i(z_i).

#mean field · #variational inference · #elbo · +11
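A toy NumPy sketch of the factorization, assuming independent Gaussian factors purely for illustration: because q factorizes, the joint log-density log q(z) is just a sum of per-coordinate terms.

```python
import numpy as np

def log_q(z, mu, log_sigma):
    """Mean-field Gaussian: log q(z) = sum_i log N(z_i | mu_i, sigma_i^2)."""
    sigma = np.exp(log_sigma)
    per_factor = (-0.5 * np.log(2 * np.pi) - log_sigma
                  - 0.5 * ((z - mu) / sigma) ** 2)
    return per_factor.sum()       # independence makes the joint a simple sum

mu, log_sigma = np.zeros(3), np.zeros(3)    # three independent factors
z = np.array([0.5, -1.0, 2.0])
print(log_q(z, mu, log_sigma))
```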
📚 Theory · Intermediate

Variational Inference

Variational Inference (VI) turns Bayesian inference into an optimization problem by choosing a simple family q(z) to approximate an intractable posterior p(z|x).

#variational inference · #elbo · #kl divergence · +12
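A hedged sketch of VI as optimization: fit q(z) = N(μ, σ²) to a made-up unnormalized target by ascending a Monte Carlo estimate of the ELBO with the reparameterization trick (the target, learning rate, and sample count are all invented for the demo).

```python
import numpy as np
rng = np.random.default_rng(0)

# Unnormalized target: log p~(z) = -0.5 * ((z - 3) / 0.5)^2, i.e. N(3, 0.5^2).
mu, log_sigma = 0.0, 0.0                  # parameters of q(z) = N(mu, sigma^2)

for _ in range(2000):
    eps = rng.standard_normal(64)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps                  # reparameterization: z ~ q(z)
    dlogp_dz = -(z - 3.0) / 0.5 ** 2      # d/dz log p~(z)
    # ELBO = E_q[log p~(z)] + log(sigma) + const; pathwise gradient ascent:
    mu += 0.05 * dlogp_dz.mean()
    log_sigma += 0.05 * ((dlogp_dz * eps).mean() * sigma + 1.0)

print(mu, np.exp(log_sigma))              # approaches the true 3.0 and 0.5
```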
📚 Theory · Intermediate

RLHF Mathematics

RLHF turns human preferences between two model outputs into training signals using a probabilistic model of choice.

#rlhf · #bradley-terry · #pairwise comparisons · +11
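A tiny sketch of the Bradley-Terry choice model behind this: P(A preferred over B) = σ(r(A) - r(B)), so a reward model is trained by minimizing -log σ(r_chosen - r_rejected) on labeled pairs (the scalar rewards below are placeholders).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood: -log sigma(r_w - r_l)."""
    return -np.log(sigmoid(r_chosen - r_rejected))

print(preference_loss(2.0, 0.5))   # small loss: the ranking matches the label
print(preference_loss(0.5, 2.0))   # large loss: the ranking is inverted
```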
📚 Theory · Intermediate

Exploration-Exploitation Tradeoff

The exploration-exploitation tradeoff is the tension between trying new actions to learn (exploration) and using the best-known action to earn rewards now (exploitation).

#multi-armed bandit · #exploration exploitation · #ucb1 · +12
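A compact UCB1 sketch on a toy Bernoulli bandit: each arm is scored by its empirical mean plus an exploration bonus sqrt(2 ln t / n_i), so rarely pulled arms keep getting revisited (the success rates are invented for the demo).

```python
import numpy as np
rng = np.random.default_rng(0)

true_p = np.array([0.3, 0.5, 0.7])      # hidden Bernoulli success rates
counts = np.zeros(3)                    # pulls per arm
sums = np.zeros(3)                      # accumulated reward per arm

for t in range(1, 2001):
    if (counts == 0).any():             # try every arm once first
        arm = int(np.argmin(counts))
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))       # optimism in the face of uncertainty
    sums[arm] += rng.random() < true_p[arm]
    counts[arm] += 1

print(counts)                           # the 0.7 arm should dominate the pulls
```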
📚 Theory · Intermediate

Value Function Approximation

Value function approximation replaces a huge table of values with a small set of parameters that can generalize across similar states.

#reinforcement learning · #value function approximation · #linear function approximator · +12
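A minimal sketch of the linear case with semi-gradient TD(0): V(s) ≈ wᵀφ(s), and each observed transition nudges w toward the bootstrapped target (the tiny right-drifting chain is invented for illustration).

```python
import numpy as np
rng = np.random.default_rng(0)

n_states, gamma, alpha = 5, 0.9, 0.1
phi = np.eye(n_states)        # one-hot features; a lookup table is the special case
w = np.zeros(n_states)

def v(s):
    return w @ phi[s]         # linear value estimate

for _ in range(5000):
    s = int(rng.integers(n_states))
    s_next = min(s + 1, n_states - 1)            # states drift right
    r = 1.0 if s_next == n_states - 1 else 0.0   # reward near the right end
    td_error = r + gamma * v(s_next) - v(s)
    w += alpha * td_error * phi[s]               # semi-gradient TD(0) update

print(w.round(2))             # values grow toward the rewarding end of the chain
```

With richer features than one-hot, the same update generalizes across similar states, which is the whole point of the approximation.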
📚 Theory · Intermediate

Metric Learning

Metric learning automatically learns a distance function so that similar items end up close together and dissimilar items far apart in a feature space.

#metric learning · #mahalanobis distance · #contrastive loss · +12
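A small sketch of a Mahalanobis-style learned metric d_M(x, y) = sqrt((x - y)ᵀ M (x - y)); writing M = LᵀL keeps it positive semi-definite, and here a random L stands in for one learned from similar/dissimilar pairs.

```python
import numpy as np
rng = np.random.default_rng(0)

L = rng.standard_normal((2, 4))    # stand-in for a learned 4-D -> 2-D projection
M = L.T @ L                        # M = L^T L is positive semi-definite

def mahalanobis(x, y):
    d = x - y
    return np.sqrt(d @ M @ d)

x, y = rng.standard_normal(4), rng.standard_normal(4)
# Equivalent view: the learned metric is plain Euclidean distance after mapping by L.
print(mahalanobis(x, y), np.linalg.norm(L @ x - L @ y))   # identical values
```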
📚 Theory · Intermediate

Self-Supervised Learning Theory

Self-supervised learning (SSL) teaches models to learn useful representations from unlabeled data by solving proxy tasks created directly from the data.

#self-supervised learning · #contrastive learning · #infonce · +12
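A deliberately toy illustration of the proxy-task idea, with invented data: hide one coordinate of each unlabeled vector and predict it from the rest, so the supervision signal comes from the data itself rather than from human labels.

```python
import numpy as np
rng = np.random.default_rng(0)

# Unlabeled data with hidden structure: the last column is a fixed
# linear combination of the first three.
X = rng.standard_normal((500, 3))
X = np.hstack([X, (X @ np.array([0.5, -1.0, 2.0]))[:, None]])

inputs, targets = X[:, :3], X[:, 3]     # proxy task: reconstruct the masked part
w, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
print(w.round(2))                       # recovers [0.5, -1.0, 2.0] with no labels
```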
📚 Theory · Intermediate

Contrastive Learning

Contrastive learning teaches models by pulling together similar examples (positives) and pushing apart dissimilar ones (negatives).

#contrastive learning · #infonce · #nt-xent · +12
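A NumPy sketch of an InfoNCE-style batch loss: with two views per item, positives sit on the diagonal of a similarity matrix, and the loss is cross-entropy against that diagonal (embedding sizes and the temperature are placeholders).

```python
import numpy as np
rng = np.random.default_rng(0)

def info_nce(z1, z2, tau=0.1):
    """z1[i] and z2[i] are embeddings of two views of the same item."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                 # cosine similarity / temperature
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))      # positives are on the diagonal

z = rng.standard_normal((8, 16))
print(info_nce(z, z + 0.01 * rng.standard_normal((8, 16))))  # aligned views: low loss
print(info_nce(z, rng.standard_normal((8, 16))))             # random pairs: high loss
```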
📚 Theory · Intermediate

Embedding Spaces & Distributed Representations

Embedding spaces map discrete things like words or products to dense vectors so that similar items are close together.

#embeddings · #dense vectors · #cosine similarity · +12
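A minimal nearest-neighbor lookup by cosine similarity; the four-word vocabulary and its 3-D vectors are fabricated purely to show the mechanics.

```python
import numpy as np

vocab = ["king", "queen", "apple", "banana"]
E = np.array([[0.9, 0.1, 0.0],      # fabricated dense embeddings
              [0.8, 0.2, 0.1],
              [0.0, 0.9, 0.3],
              [0.1, 0.8, 0.4]])

def nearest(word):
    q = E[vocab.index(word)]
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q))
    sims[vocab.index(word)] = -np.inf       # exclude the query itself
    return vocab[int(np.argmax(sims))]

print(nearest("king"))     # "queen": similar items land close together
print(nearest("apple"))    # "banana"
```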
📚 Theory · Intermediate

Classifier-Free Guidance

Classifier-Free Guidance (CFG) steers diffusion sampling toward a condition (like a text prompt) without needing a separate classifier.

#classifier-free guidance · #diffusion models · #epsilon prediction · +11
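A one-function sketch of the guidance rule in the common epsilon-prediction form: the guided noise estimate extrapolates from the unconditional prediction toward the conditional one by a scale w.

```python
import numpy as np

def cfg_epsilon(eps_uncond, eps_cond, w=7.5):
    """eps = eps_uncond + w * (eps_cond - eps_uncond).
    w = 0: unconditional; w = 1: plain conditional; w > 1: stronger conditioning."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_u = np.zeros(4)                  # stand-ins for two denoiser outputs
eps_c = np.full(4, 0.1)
print(cfg_epsilon(eps_u, eps_c))     # [0.75 ...]: pushed past the conditional estimate
```

In practice the same network produces both predictions, with the condition randomly dropped during training so the unconditional case is learned too.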
📚 Theory · Intermediate

Flow Matching

Flow matching learns a time-dependent vector field v_t(x, c) whose ODE transports simple noise to complex data, enabling fast, deterministic sampling.

#flow matching · #conditional flow matching · #rectified flow · +11
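A compact conditional-flow-matching sketch with straight (rectified-flow style) paths: sample t, interpolate x_t = (1 - t) x0 + t x1, and regress the model's velocity onto the constant path velocity x1 - x0. A linear model stands in for the neural network, and all data is synthetic.

```python
import numpy as np
rng = np.random.default_rng(0)

d = 2
W = np.zeros((d, d + 1))                    # linear stand-in for v_t(x): W @ [x, t]

def v(x, t):
    return W @ np.append(x, t)

for _ in range(3000):
    x0 = rng.standard_normal(d)             # noise sample
    x1 = 3.0 + 0.1 * rng.standard_normal(d) # "data" sample near (3, 3)
    t = rng.random()
    xt = (1 - t) * x0 + t * x1              # point on the straight path
    target = x1 - x0                        # velocity of that path
    err = v(xt, t) - target
    W -= 0.01 * np.outer(err, np.append(xt, t))  # squared-error gradient step

x = rng.standard_normal(d)                  # deterministic sampling: integrate the ODE
for k in range(100):
    x += 0.01 * v(x, k / 100)               # Euler steps from t = 0 to 1
print(x.round(2))                           # should land near the data mean (3, 3)
```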