How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (9)

∑ Math · Intermediate

Confidence Intervals & Prediction Intervals

A confidence interval estimates a fixed but unknown parameter (such as a population mean) with a range that would capture the true value in a long run of repeated samples, while a prediction interval targets a future individual observation and is therefore wider.

#confidence interval · #prediction interval · #t distribution · +12
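A minimal sketch of a t-based confidence interval for a mean, using only the standard library. The sample values are illustrative, and the critical value is assumed to come from a t table (t ≈ 2.262 for a 95% two-sided interval with 9 degrees of freedom):

```python
import math
import statistics

def t_confidence_interval(sample, t_crit):
    """Two-sided CI for the population mean: mean ± t_crit * s/sqrt(n).

    t_crit must be looked up for the desired level and n-1 degrees of
    freedom (e.g. ~2.262 for 95% coverage with n = 10).
    """
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    return mean - t_crit * se, mean + t_crit * se

data = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9, 5.0]  # toy sample
lo, hi = t_confidence_interval(data, t_crit=2.262)  # 95% CI, df = 9
```

The interval is centered on the sample mean; "95%" describes the long-run coverage of the procedure, not the probability that this particular interval contains the parameter.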
📚 Theory · Intermediate

Empirical Risk Minimization

Empirical Risk Minimization (ERM) chooses a model that minimizes the average loss on the training data.

#empirical risk minimization · #expected risk · #loss function · +12
Group: Statistics for ML · Advanced
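A toy sketch of ERM over a finite hypothesis class of constant predictors; the data, loss, and candidate set are all illustrative:

```python
def empirical_risk(model, loss, data):
    """Average loss of `model` over the training set (the empirical risk)."""
    return sum(loss(model(x), y) for x, y in data) / len(data)

squared = lambda pred, y: (pred - y) ** 2  # squared-error loss

# Hypothesis class: constant predictors h_c(x) = c for a few candidate c.
data = [(0, 1.0), (0, 3.0), (0, 2.0)]
candidates = [lambda x, c=c: c for c in (0.0, 1.0, 2.0, 3.0)]

# ERM: pick the hypothesis with the smallest average training loss.
best = min(candidates, key=lambda m: empirical_risk(m, squared, data))
# For squared error, the best constant is the sample mean of the labels.
```

Real ERM searches a much richer class (e.g. neural networks) by gradient descent, but the objective is the same average-training-loss quantity shown here.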
∑ Math · Intermediate

Sufficient Statistics

A sufficient statistic compresses all information in the sample about a parameter into a lower-dimensional summary without losing inferential power.

#sufficient statistic · #fisher neyman factorization · #exponential family · +12
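A quick numerical illustration of sufficiency in the Bernoulli case (the sample values are made up): by the Fisher–Neyman factorization, the likelihood depends on the data only through the number of successes, so two samples of the same size with the same sum are indistinguishable for inference about p:

```python
import math

def bernoulli_log_likelihood(sample, p):
    """Log-likelihood of an i.i.d. Bernoulli(p) sample of 0/1 values."""
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in sample)

# Two different orderings with the same sum (3 successes out of 5).
a = [1, 0, 1, 1, 0]
b = [0, 1, 1, 0, 1]

# The log-likelihoods agree for every p: sum(x) is sufficient for p.
same = all(
    math.isclose(bernoulli_log_likelihood(a, p), bernoulli_log_likelihood(b, p))
    for p in (0.2, 0.5, 0.8)
)
```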
โš™๏ธAlgorithmIntermediate

Bootstrap & Resampling Methods

Bootstrap is a resampling method that estimates uncertainty by repeatedly sampling with replacement from the observed data.

#bootstrap · #resampling · #confidence intervals · +12
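A minimal percentile-bootstrap sketch, assuming an illustrative small sample and the mean as the statistic of interest:

```python
import random
import statistics

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take the (alpha/2, 1 - alpha/2) empirical quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(
        stat([rng.choice(sample) for _ in range(n)]) for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

data = [2.1, 2.4, 1.9, 2.6, 2.2, 2.8, 2.0, 2.5]  # toy observations
lo, hi = bootstrap_ci(data, statistics.fmean)
```

The appeal of the bootstrap is that the same recipe works for statistics (medians, ratios, model coefficients) whose sampling distribution has no convenient closed form.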
∑ Math · Intermediate

Hypothesis Testing

Hypothesis testing is a decision-making process to evaluate claims about a population using sample data.

#hypothesis testing · #null hypothesis · #alternative hypothesis · +12
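A sketch of a one-sample t-test using only the standard library; the data are illustrative, and the critical value (≈ 2.262 for a two-sided 5% test with 9 degrees of freedom) is assumed to come from a t table:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return (mean - mu0) / se

data = [5.4, 5.6, 5.3, 5.7, 5.5, 5.8, 5.4, 5.6, 5.5, 5.7]  # toy sample
t = one_sample_t(data, mu0=5.0)

# Reject H0 at the 5% level (two-sided) when |t| exceeds the critical
# value for n - 1 = 9 degrees of freedom.
reject = abs(t) > 2.262
```

In practice a library routine (e.g. SciPy's `ttest_1samp`) would also return a p-value; the decision rule above is the same test expressed with a fixed critical value.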
📚 Theory · Intermediate

Bayesian Inference

Bayesian inference updates prior beliefs with observed data to produce a posterior distribution P(θ | D).

#bayesian inference · #posterior · #prior · +12
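The conjugate Beta-Binomial model makes the prior-to-posterior update a one-line computation; the prior parameters and coin-flip counts below are illustrative:

```python
# With a Beta(a, b) prior on a coin's heads probability and k heads
# observed in n flips, the posterior is Beta(a + k, b + n - k).
def beta_binomial_posterior(a, b, heads, flips):
    return a + heads, b + flips - heads

# Illustrative numbers: a mild Beta(2, 2) prior, then 7 heads in 10 flips.
a_post, b_post = beta_binomial_posterior(a=2, b=2, heads=7, flips=10)

# Posterior mean of a Beta(a, b) is a / (a + b).
posterior_mean = a_post / (a_post + b_post)  # 9 / 14
```

Note how the data pull the estimate from the prior mean (0.5) toward the observed frequency (0.7), landing in between: that shrinkage is the hallmark of Bayesian updating.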
∑ Math · Intermediate

Maximum A Posteriori (MAP) Estimation

Maximum A Posteriori (MAP) estimation chooses the parameter value with the highest posterior probability after seeing data.

#map estimation · #posterior mode · #bayesian inference · +12
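A sketch of MAP estimation in the same conjugate coin-flip setting (prior and data are illustrative): the posterior is Beta(a + k, b + n - k), and its mode, which exists in closed form when both parameters exceed 1, is the MAP estimate:

```python
# MAP estimate of a coin's bias p under a Beta(a, b) prior.
# The mode of Beta(a, b) with a, b > 1 is (a - 1) / (a + b - 2).
def map_estimate(a, b, heads, flips):
    a_post, b_post = a + heads, b + flips - heads
    return (a_post - 1) / (a_post + b_post - 2)

# Illustrative numbers: Beta(2, 2) prior, 7 heads in 10 flips.
p_map = map_estimate(2, 2, heads=7, flips=10)  # (9 - 1) / (14 - 2) = 2/3
```

With a uniform Beta(1, 1) prior the MAP estimate reduces to the MLE (7/10 here), which is why MAP is often described as MLE plus regularization from the prior.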
∑ Math · Intermediate

Maximum Likelihood Estimation (MLE)

Maximum Likelihood Estimation (MLE) chooses parameters that make the observed data most probable under a chosen model.

#maximum likelihood · #log-likelihood · #bernoulli mle · +12
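A sketch of the Bernoulli MLE with a made-up sample: the closed-form estimate is the sample mean, and a grid search over the log-likelihood confirms it:

```python
import math

def bernoulli_log_likelihood(sample, p):
    """Log-likelihood of an i.i.d. Bernoulli(p) sample of 0/1 values."""
    return sum(math.log(p if x == 1 else 1 - p) for x in sample)

sample = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 7 successes in 10 trials
p_hat = sum(sample) / len(sample)        # closed-form MLE: 0.7

# Sanity check: the closed form also wins a grid search over (0, 1).
grid = [i / 100 for i in range(1, 100)]
best_on_grid = max(grid, key=lambda p: bernoulli_log_likelihood(sample, p))
```

Maximizing the log-likelihood rather than the likelihood itself is standard practice: it turns products into sums and avoids numerical underflow, without moving the maximizer.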
📚 Theory · Intermediate

Bias-Variance Tradeoff

The biasโ€“variance tradeoff explains how prediction error splits into bias squared, variance, and irreducible noise for squared loss.

#bias variance tradeoff · #mse decomposition · #polynomial regression · +12
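A Monte Carlo sketch of the decomposition MSE = bias² + variance + noise for squared loss. The setup is entirely illustrative: the target is y = x² plus Gaussian noise, and the "model" is a deliberately high-bias constant predictor (the mean of each training sample's labels):

```python
import random
import statistics

rng = random.Random(0)
x0, noise_sd = 0.8, 0.1
f_x0 = x0 ** 2  # true function value at the test point

# Train the constant predictor on many independent training sets and
# record its prediction at x0 (which is just the training-label mean).
preds = []
for _ in range(5000):
    train_y = [rng.uniform(0, 1) ** 2 + rng.gauss(0, noise_sd) for _ in range(20)]
    preds.append(statistics.fmean(train_y))

bias_sq = (statistics.fmean(preds) - f_x0) ** 2   # (E[pred] - f(x0))^2
variance = statistics.pvariance(preds)            # Var[pred] across datasets
mse = statistics.fmean(
    (p - (f_x0 + rng.gauss(0, noise_sd))) ** 2 for p in preds
)
# mse ~= bias_sq + variance + noise_sd**2, up to Monte Carlo error.
```

For this constant model the bias² term dominates; a flexible model would shrink the bias term while inflating the variance term, which is the tradeoff the card describes.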