How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (18)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

∑ Math · Intermediate

Law of Large Numbers

The Weak Law of Large Numbers (WLLN) says that the sample average of independent, identically distributed (i.i.d.) random variables with finite mean gets close to the true mean with high probability as the sample size grows.

#law of large numbers · #weak law · #sample mean (+12 more)
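A minimal sketch of the WLLN in Python (the Uniform(0, 1) distribution, sample sizes, and fixed seed are illustrative choices, not from the card):

```python
import random

random.seed(0)

def sample_mean(n):
    # Average n i.i.d. Uniform(0, 1) draws; the true mean is 0.5.
    return sum(random.random() for _ in range(n)) / n

small = sample_mean(10)        # noisy estimate of 0.5
large = sample_mean(100_000)   # with high probability, much closer to 0.5
```

As n grows, the probability that the sample mean deviates from 0.5 by any fixed amount shrinks toward zero.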
∑ Math · Intermediate

Discount Factor & Return

The discounted return G_t sums all future rewards, down-weighting a reward k steps ahead by γ^k, where γ ∈ [0, 1) is the discount factor.

#discount factor · #discounted return · #reinforcement learning (+12 more)
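The discounted return satisfies the backward recursion G_t = r_{t+1} + γ G_{t+1}, which gives a one-pass way to compute it; a small sketch (the reward sequence and γ = 0.5 are made up for illustration):

```python
def discounted_return(rewards, gamma):
    # G_t = r_{t+1} + gamma * r_{t+2} + gamma^2 * r_{t+3} + ...
    # computed backward: G = r + gamma * G_next.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

g0 = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
```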
โš™๏ธAlgorithmIntermediate

Stratified & Latin Hypercube Sampling

Stratified sampling reduces Monte Carlo variance by dividing the domain into non-overlapping regions (strata) and sampling within each region; Latin hypercube sampling extends the idea to many dimensions by stratifying every coordinate at once.

#stratified sampling · #latin hypercube sampling · #variance reduction (+11 more)
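A sketch of one-dimensional stratified sampling on [0, 1) (the integrand x², the ten strata, and the seed are illustrative assumptions):

```python
import random

random.seed(1)

def stratified_uniform(n_strata, per_stratum):
    # Split [0, 1) into equal-width strata and draw uniformly inside each.
    samples = []
    width = 1.0 / n_strata
    for k in range(n_strata):
        for _ in range(per_stratum):
            samples.append(k * width + random.random() * width)
    return samples

xs = stratified_uniform(10, 1)
# Estimate E[X^2] for X ~ Uniform(0, 1); the true value is 1/3.
est = sum(x * x for x in xs) / len(xs)
```

Because every stratum contributes exactly one point, the estimate cannot miss entire regions of the domain, which is where the variance reduction comes from.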
โš™๏ธAlgorithmIntermediate

Rejection Sampling

Rejection sampling draws from a hard-to-sample target distribution p by using an easier proposal q and accepting each candidate x with probability p(x)/(M q(x)), where M bounds p/q.

#rejection sampling · #accept-reject · #proposal distribution (+11 more)
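A sketch with a toy target p(x) = 2x on [0, 1], a Uniform(0, 1) proposal, and M = 2, so the acceptance probability p(x)/(M q(x)) simplifies to x (all of these choices are illustrative, not from the card):

```python
import random

random.seed(2)

def rejection_sample(n):
    # Target p(x) = 2x on [0, 1]; proposal q = Uniform(0, 1); M = 2.
    # Accept a candidate x with probability p(x) / (M * q(x)) = x.
    out = []
    while len(out) < n:
        x = random.random()
        if random.random() < x:   # accept with probability x
            out.append(x)
    return out

xs = rejection_sample(20_000)
mean = sum(xs) / len(xs)   # the true mean of p is 2/3
```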
โš™๏ธAlgorithmIntermediate

Importance Sampling

Importance sampling rewrites an expectation under a hard-to-sample distribution p as an expectation under an easier distribution q, multiplied by a weight w = p/q.

#importance sampling · #proposal distribution · #self-normalized (+12 more)
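A sketch of the identity E_p[f(X)] = E_q[f(X) p(X)/q(X)], using a toy target p(x) = 2x on [0, 1] and a Uniform(0, 1) proposal (both chosen for illustration; in practice p would be hard to sample directly):

```python
import random

random.seed(3)

def importance_estimate(n):
    # Estimate E_p[f(X)] with f(x) = x, target p(x) = 2x, proposal q = Uniform(0, 1).
    # Each draw from q gets the importance weight w(x) = p(x) / q(x) = 2x.
    total = 0.0
    for _ in range(n):
        x = random.random()    # draw from q
        w = 2.0 * x            # importance weight p(x)/q(x)
        total += x * w         # f(x) * w(x)
    return total / n

est = importance_estimate(100_000)   # true value: integral of x * 2x dx = 2/3
```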
โš™๏ธAlgorithmIntermediate

Monte Carlo Estimation

Monte Carlo estimation approximates an expected value by averaging function values at random samples drawn from a probability distribution.

#monte carlo · #expectation · #variance reduction (+12 more)
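A classic sketch of the idea: estimating π by averaging the indicator "the point lands in the quarter circle" over uniform samples (the sample size and seed are arbitrary illustrative choices):

```python
import random

random.seed(4)

def mc_pi(n):
    # For (x, y) uniform on the unit square, P(x^2 + y^2 <= 1) = pi / 4,
    # so 4 * (fraction of hits) estimates pi.
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

pi_hat = mc_pi(200_000)
```

The estimator's standard error shrinks like 1/√n, which is why the variance-reduction tricks on this page (stratification, importance sampling) matter.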
โš™๏ธAlgorithmIntermediate

Bootstrap & Resampling Methods

Bootstrap is a resampling method that estimates uncertainty by repeatedly sampling with replacement from the observed data.

#bootstrap · #resampling · #confidence intervals (+12 more)
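A minimal percentile-bootstrap sketch (the ten data points, the 2,000 resamples, and the seed are illustrative assumptions):

```python
import random
import statistics

random.seed(5)

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]

def bootstrap_means(data, n_boot):
    # Resample the data with replacement and record each resample's mean.
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        means.append(statistics.mean(resample))
    return means

boot = sorted(bootstrap_means(data, 2_000))
lo, hi = boot[49], boot[1949]   # rough 95% percentile interval for the mean
```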
📚 Theory · Intermediate

Bayesian Inference

Bayesian inference updates prior beliefs with observed data to produce a posterior distribution P(θ | D).

#bayesian inference · #posterior · #prior (+12 more)
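A sketch of one fully tractable case, the conjugate Beta-Binomial update (the coin-flip setting and the uniform prior are illustrative assumptions, not from the card):

```python
def beta_binomial_update(a, b, k, n):
    # Prior Beta(a, b) on a coin's bias; observe k heads in n flips.
    # The posterior is again a Beta: Beta(a + k, b + n - k).
    return a + k, b + (n - k)

a0, b0 = 1.0, 1.0                                   # uniform prior Beta(1, 1)
a1, b1 = beta_binomial_update(a0, b0, k=7, n=10)    # posterior Beta(8, 4)
posterior_mean = a1 / (a1 + b1)                     # 8 / 12 = 2/3
```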
∑ Math · Intermediate

Expectation, Variance & Moments

Expectation is the long-run average value of a random variable and acts like the balance point of its distribution.

#expectation · #variance · #moments (+12 more)
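The first two moments of a small discrete distribution, computed directly from its pmf (the pmf itself is made up for illustration):

```python
# A discrete random variable given as {value: probability}.
pmf = {1: 0.2, 2: 0.5, 3: 0.3}

mean = sum(x * p for x, p in pmf.items())         # E[X] = 2.1
second = sum(x * x * p for x, p in pmf.items())   # E[X^2] = 4.9
var = second - mean ** 2                          # Var[X] = E[X^2] - E[X]^2 = 0.49
```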
∑ Math · Intermediate

Random Variables & Distributions

A random variable maps uncertain outcomes to numbers and is described by a distribution that assigns likelihoods to values or ranges.

#random variable · #pmf · #pdf (+12 more)
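A discrete random variable can be simulated from its pmf by inverse-CDF sampling; a small sketch (the Bernoulli(0.75) pmf, sample size, and seed are illustrative):

```python
import random

random.seed(6)

def sample_from_pmf(pmf, n):
    # Inverse-CDF sampling: walk the cumulative probabilities and return
    # the first value whose cumulative mass exceeds a uniform draw u.
    values = sorted(pmf)
    out = []
    for _ in range(n):
        u, acc = random.random(), 0.0
        for v in values:
            acc += pmf[v]
            if u < acc:
                out.append(v)
                break
    return out

pmf = {0: 0.25, 1: 0.75}          # a Bernoulli(0.75) pmf
draws = sample_from_pmf(pmf, 50_000)
freq1 = draws.count(1) / len(draws)   # empirical frequency of 1, near 0.75
```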
∑ Math · Intermediate

Probability Axioms & Rules

Kolmogorov's axioms define probability as a measure on events: non-negativity, normalization, and countable additivity.

#kolmogorov axioms · #probability measure · #sample space (+12 more)
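On a finite sample space the three axioms can be checked mechanically; a toy sketch with a fair-coin pmf (the example space is invented for illustration, and countable additivity reduces to finite additivity here):

```python
# A finite sample space Omega = {"H", "T"} with a pmf over its outcomes.
pmf = {"H": 0.5, "T": 0.5}

def prob(event, pmf):
    # P(A) = sum of the pmf over the outcomes in A.
    return sum(pmf[w] for w in event)

nonneg = all(p >= 0 for p in pmf.values())               # axiom 1: P(A) >= 0
normalized = abs(prob(pmf.keys(), pmf) - 1.0) < 1e-12    # axiom 2: P(Omega) = 1
# axiom 3 (additivity for disjoint events): P({H}) + P({T}) = P({H, T})
additive = abs(prob({"H"}, pmf) + prob({"T"}, pmf)
               - prob({"H", "T"}, pmf)) < 1e-12
```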
📚 Theory · Intermediate

Randomized Algorithm Theory

Randomized algorithms use random bits to make choices that simplify design, avoid worst cases, and often speed up computation.

#randomized algorithms · #las vegas · #monte carlo (+12 more)
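Randomized quickselect is a classic Las Vegas example: the answer is always correct, and only the running time depends on the random pivot choices. A minimal sketch (the input list is chosen for illustration):

```python
import random

random.seed(7)

def quickselect(items, k):
    # Return the k-th smallest element (0-indexed) of a non-empty list.
    # The random pivot keeps the *expected* running time linear.
    pivot = random.choice(items)
    lo = [x for x in items if x < pivot]
    eq = [x for x in items if x == pivot]
    hi = [x for x in items if x > pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))

median = quickselect([9, 1, 8, 2, 7, 3, 6, 4, 5], 4)  # median of 1..9 is 5
```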