How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (532)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

Category

🔷 All · ∑ Math · ⚙️ Algo · 🗂️ DS · 📚 Theory

Level

All · Beginner · Intermediate · Advanced
∑ Math · Intermediate

Numerical Stability

Numerical stability measures how much rounding and tiny input changes can distort an algorithm's output on real computers using floating-point arithmetic.

#numerical stability · #forward error · #backward error · +12
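A minimal Python sketch of the idea, using the standard quadratic-formula illustration (my choice of example, not from the card): the two functions are algebraically identical, but the naive one subtracts two nearly equal numbers and loses most of its digits to cancellation.

```python
import math

def roots_naive(a, b, c):
    """Textbook quadratic formula; cancels badly when b*b >> 4*a*c."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    """Compute the large-magnitude root first (no cancellation), then
    recover the small one from the product of roots c/a = r1*r2."""
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return q / a, c / q

# x^2 + 1e8*x + 1 = 0 has roots near -1e8 and -1e-8.
print(roots_naive(1.0, 1e8, 1.0))   # small root off by ~25%: large forward error
print(roots_stable(1.0, 1e8, 1.0))  # small root accurate to machine precision
```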
∑ Math · Intermediate

Floating Point Arithmetic

Floating-point numbers approximate real numbers using a fixed number of bits following the IEEE 754 standard.

#ieee 754 · #floating point · #machine epsilon · +10
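A few Python one-liners that make these properties concrete (assumes Python 3.9+ for math.ulp):

```python
import math
import sys

# Python floats are IEEE 754 binary64: 1 sign, 11 exponent, 52 fraction bits.
print(sys.float_info.mant_dig)       # 53 significant bits (52 stored + 1 implicit)
print(sys.float_info.epsilon)        # machine epsilon 2**-52 ~ 2.22e-16

# 0.1 has no finite binary expansion, so arithmetic on it rounds.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with tolerances instead

# The gap between adjacent floats (one ulp) scales with magnitude.
print(math.ulp(1.0))                 # ~2.22e-16
print(math.ulp(1e16))                # 2.0: beyond 2**53 even integers get skipped
```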
โš™๏ธAlgorithmAdvanced

Interior Point Methods

Interior point methods solve constrained optimization by replacing hard constraints with a smooth barrier that becomes infinite at the boundary, keeping iterates strictly inside the feasible region.

#interior point method · #logarithmic barrier · #central path · +12
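A toy barrier method in Python on an invented one-dimensional problem, minimize x² subject to x ≥ 1: damped Newton minimizes the barrier objective for a shrinking weight mu, and the iterates follow the central path toward the optimum x* = 1 while staying strictly inside the feasible region.

```python
import math

def newton_on_barrier(mu, x0, iters=50):
    """Minimize x**2 - mu*log(x - 1) by damped Newton, staying feasible."""
    x = x0
    for _ in range(iters):
        g = 2 * x - mu / (x - 1)      # gradient of the barrier objective
        h = 2 + mu / (x - 1) ** 2     # second derivative (positive here)
        step = g / h
        while x - step <= 1.0:        # backtrack so we never touch the boundary
            step *= 0.5
        x -= step
    return x

# minimize x^2 subject to x >= 1; the true optimum is x* = 1.
x, mu = 2.0, 1.0
while mu > 1e-8:
    x = newton_on_barrier(mu, x)      # warm-start each solve from the last one
    print(f"mu={mu:.0e}  x={x:.8f}")
    mu *= 0.1                         # shrink the barrier: follow the central path
```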
โš™๏ธAlgorithmAdvanced

ADMM (Alternating Direction Method of Multipliers)

ADMM splits a hard optimization problem into two easier subproblems that communicate through simple averaging-like steps.

#admm · #alternating direction method of multipliers · #augmented lagrangian · +11
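A compact Python sketch of ADMM on the lasso, a standard showcase problem chosen here as the illustration: the x-update is a smooth least-squares solve, the z-update is elementwise soft-thresholding, and the dual variable u reconciles the two copies.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding: the proximal operator of k * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)     # x-update matrix, fixed across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # smooth least-squares piece
        z = soft(x + u, lam / rho)                   # nonsmooth l1 piece
        u = u + x - z                                # dual update: price of x != z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))   # sparse estimate near x_true
```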
โš™๏ธAlgorithmIntermediate

Proximal Operators & Methods

A proximal operator pulls a point x toward minimizing a function f while penalizing how far it moves, acting like a denoiser or projector depending on f.

#proximal operator · #ista · #fista · +12
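Two closed-form proximal operators in Python, plus ISTA, which strings the l1 prox into an iterative solver; the example data is invented.

```python
import numpy as np

def prox_l1(x, t):
    """prox of t*||.||_1: soft-thresholding, which acts like a denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_box(x, lo, hi):
    """prox of the indicator of a box: Euclidean projection onto it."""
    return np.clip(x, lo, hi)

print(prox_l1(np.array([3.0, -0.2, 0.7]), 0.5))     # [ 2.5 -0.   0.2]
print(prox_box(np.array([3.0, -0.2, 0.7]), -1, 1))  # [ 1.  -0.2  0.7]

# ISTA: alternate a gradient step on the smooth term with a prox step.
def ista(A, b, lam, iters=500):
    """min 0.5*||Ax - b||^2 + lam*||x||_1 via proximal gradient descent."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step 1/L, L = ||A||^2
    for _ in range(iters):
        x = prox_l1(x - t * (A.T @ (A @ x - b)), t * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = A @ np.array([1.5, -2.0] + [0.0] * 8)
print(np.round(ista(A, b, lam=0.5), 2))   # first two entries dominate
```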
∑ Math · Advanced

KKT Conditions

KKT conditions generalize Lagrange multipliers to handle inequality constraints in constrained optimization problems.

#kkt conditions · #lagrangian · #complementary slackness · +12
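A numerical check of the four KKT conditions in Python on an invented toy problem whose solution is easy to derive by hand:

```python
import numpy as np

# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 >= 1,
# written as g(x) = 1 - x1 - x2 <= 0. Solving the KKT system by hand
# gives the candidate x = (0.5, 0.5) with multiplier lam = 1.
x = np.array([0.5, 0.5])
lam = 1.0

grad_f = 2 * x                    # gradient of the objective
grad_g = np.array([-1.0, -1.0])   # gradient of the constraint g
g = 1 - x.sum()

print(np.allclose(grad_f + lam * grad_g, 0.0))   # stationarity
print(g <= 1e-12)                                # primal feasibility
print(lam >= 0.0)                                # dual feasibility
print(abs(lam * g) < 1e-12)                      # complementary slackness
```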
∑ Math · Intermediate

Convex Optimization Problems

A convex optimization problem minimizes a convex function over a convex set, guaranteeing that every local minimum is a global minimum.

#convex optimization · #gradient descent · #projected gradient · +12
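A minimal projected-gradient sketch in Python on an invented box-constrained least-squares problem; convexity of both the objective and the box is what licenses treating the point it converges to as globally optimal.

```python
import numpy as np

def projected_gradient(A, b, lo, hi, iters=1000):
    """min ||Ax - b||^2 over the box lo <= x <= hi."""
    x = np.zeros(A.shape[1])
    step = 0.5 / np.linalg.norm(A, 2) ** 2    # 1/L for this quadratic
    for _ in range(iters):
        x = x - step * 2 * A.T @ (A @ x - b)  # gradient step
        x = np.clip(x, lo, hi)                # project onto the feasible box
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
b = A @ np.array([0.5, 2.0, -3.0, 0.1, 0.9])
print(np.round(projected_gradient(A, b, -1.0, 1.0), 3))
# entries that fit in [-1, 1] are recovered (approximately); others hit the bounds
```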
∑ Math · Intermediate

Convex Sets & Functions

A set is convex if every line segment between any two of its points lies entirely inside the set; a function is convex if the region above its graph is such a set.

#convex set · #convex function · #convex hull · +11
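A short randomized Python check of the defining inequality for functions; note it can only refute convexity by finding a violated chord, never certify it.

```python
import numpy as np

rng = np.random.default_rng(0)

def convexity_violated(f, dim, trials=10_000):
    """Randomized test of f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)."""
    for _ in range(trials):
        x, y = rng.standard_normal(dim), rng.standard_normal(dim)
        t = rng.uniform()
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + 1e-9:
            return True
    return False

print(convexity_violated(lambda v: float(np.sum(v ** 2)), 3))    # False (convex)
print(convexity_violated(np.linalg.norm, 3))                     # False (convex)
print(convexity_violated(lambda v: float(np.sin(v).sum()), 3))   # True (not convex)
```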
📚 Theory · Advanced

Maximum Entropy Principle

The Maximum Entropy Principle picks the probability distribution with the greatest uncertainty (entropy) that still satisfies the facts you know (constraints).

#maximum entropy principle · #jaynes · #exponential family · +12
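A Python sketch of the classic dice example associated with Jaynes (my choice of illustration): among all distributions on faces 1..6 with mean 4.5, maximum entropy gives an exponential-family tilt, and bisection finds the Lagrange multiplier that matches the constraint.

```python
import numpy as np

faces = np.arange(1, 7)
target_mean = 4.5          # the only fact we know about this die

def maxent_dist(lam):
    """Maxent under a mean constraint has exponential-family form
    p_k proportional to exp(lam * k)."""
    w = np.exp(lam * faces)
    return w / w.sum()

# Bisection on the Lagrange multiplier: the mean increases with lam.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if maxent_dist(mid) @ faces < target_mean:
        lo = mid
    else:
        hi = mid

p = maxent_dist(0.5 * (lo + hi))
print(np.round(p, 4))     # tilted toward high faces, otherwise as flat as possible
print(p @ faces)          # ~4.5: the constraint holds
print(-(p @ np.log(p)))   # entropy, maximal among all mean-4.5 distributions
```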
📚 Theory · Advanced

Rate-Distortion Theory

Rate-distortion theory tells you the minimum number of bits per symbol needed to represent data while keeping average distortion below a target D.

#rate-distortion · #mutual information · #blahut-arimoto · +12
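A compact Blahut-Arimoto iteration in Python (a sketch, not a production implementation), checked against the known closed form R(D) = 1 - H2(D) for a uniform binary source with Hamming distortion:

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, iters=500):
    """Trace one (R, D) point at slope parameter beta.
    p_x[i] is the source distribution, d[i, j] the distortion of
    reproducing symbol i as symbol j."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])  # output marginal q(x_hat)
    for _ in range(iters):
        Q = q * np.exp(-beta * d)              # optimal test channel given q
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q                            # re-estimate the output marginal
    D = float(np.sum(p_x[:, None] * Q * d))
    R = float(np.sum(p_x[:, None] * Q * np.log2(Q / q)))
    return R, D

# Uniform binary source with Hamming distortion: analytically R(D) = 1 - H2(D).
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for beta in (1.0, 2.0, 4.0):
    R, D = blahut_arimoto(p, d, beta)
    h2 = -D * np.log2(D) - (1 - D) * np.log2(1 - D)
    print(f"D={D:.3f}  R={R:.3f}  1-H2(D)={1 - h2:.3f}")
```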
📚 Theory · Advanced

Information Bottleneck

The Information Bottleneck (IB) principle formalizes the tradeoff between compressing an input X and preserving information about a target Y through the objective min over encoders p(t|x) of I(X;T) - β·I(T;Y).

#information bottleneck · #mutual information · #kl divergence · +12
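A small Python sketch that simply evaluates the IB objective for an invented joint distribution and two extreme encoders, copy and constant, to make the compression-prediction tradeoff in the objective concrete:

```python
import numpy as np

def mutual_info(pab):
    """Mutual information in bits from a joint probability table p[a, b]."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 0
    return float(np.sum(pab[mask] * np.log2(pab[mask] / (pa * pb)[mask])))

def ib_objective(pxy, enc, beta):
    """I(X;T) - beta*I(T;Y) for a stochastic encoder enc[x, t] = p(t|x);
    T sees Y only through X (the IB Markov chain T - X - Y)."""
    px = pxy.sum(axis=1)
    pxt = px[:, None] * enc          # joint p(x, t)
    pty = enc.T @ pxy                # joint p(t, y)
    return mutual_info(pxt) - beta * mutual_info(pty)

# Invented toy joint and two extreme encoders.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
copy_enc = np.eye(2)                 # T = X: keep everything
const_enc = np.ones((2, 1))          # T constant: compress everything away
for enc in (copy_enc, const_enc):
    print(ib_objective(pxy, enc, beta=2.0))
```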
📚 Theory · Intermediate

Cross-Entropy

Cross-entropy measures how well a proposed distribution Q predicts outcomes actually generated by a true distribution P.

#cross-entropy · #entropy · #kl divergence · +12
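A direct Python translation of the definition, with invented numbers, showing that cross-entropy is minimized when Q = P and that the excess over H(P) is exactly the KL divergence:

```python
import numpy as np

def entropy(p):
    """H(P) in bits: the best achievable average code length for P."""
    return float(-np.sum(p * np.log2(p)))

def cross_entropy(p, q):
    """H(P, Q) = -sum p_i * log2(q_i): average bits paid when outcomes
    come from P but the code (or model) was built for Q."""
    return float(-np.sum(p * np.log2(q)))

p = np.array([0.5, 0.25, 0.25])   # true distribution P
q = np.array([1/3, 1/3, 1/3])     # model's guess Q

print(entropy(p))                         # 1.5 bits
print(cross_entropy(p, p))                # 1.5: matching P is optimal
print(cross_entropy(p, q))                # ~1.585: worse than 1.5
print(cross_entropy(p, q) - entropy(p))   # the excess is KL(P || Q) >= 0
```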