πŸŽ“ How I Study AI
πŸ“Š Statistics for ML

Maximum likelihood, Bayesian inference, and hypothesis testing for building and evaluating ML models.

9 concepts

βˆ‘ Math · Intermediate

Maximum Likelihood Estimation (MLE)

Maximum Likelihood Estimation (MLE) chooses parameters that make the observed data most probable under a chosen model.

#maximum likelihood · #log-likelihood · #bernoulli mle (+12 more)
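A minimal sketch of the idea, using hypothetical coin-flip data: the Bernoulli MLE has the closed form p̂ = (number of ones)/n, and a grid search over the log-likelihood recovers the same value.

```python
import math

# Hypothetical Bernoulli observations (1 = success, 0 = failure).
data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

def log_likelihood(p, xs):
    """Sum of log P(x | p) under a Bernoulli(p) model."""
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in xs)

# Closed-form Bernoulli MLE: the sample mean.
p_hat = sum(data) / len(data)

# A grid search over (0, 1) confirms the sample mean maximizes the log-likelihood.
grid = [i / 100 for i in range(1, 100)]
p_grid = max(grid, key=lambda p: log_likelihood(p, data))
print(p_hat, p_grid)  # 0.7 0.7
```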
βˆ‘ Math · Intermediate

Maximum A Posteriori (MAP) Estimation

Maximum A Posteriori (MAP) estimation chooses the parameter value with the highest posterior probability after seeing data.

#map estimation · #posterior mode · #bayesian inference (+12 more)
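A hedged sketch with hypothetical data: for a Bernoulli parameter with a Beta(α, β) prior, the posterior is Beta(α + k, β + n βˆ’ k), and the MAP estimate is its mode (α + k βˆ’ 1)/(α + β + n βˆ’ 2). The prior pulls the estimate away from the raw MLE.

```python
# Hypothetical prior pseudo-counts and observations.
alpha, beta = 2.0, 2.0
data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
n, k = len(data), sum(data)

# Posterior mode (valid when both posterior shape parameters exceed 1).
p_map = (alpha + k - 1) / (alpha + beta + n - 2)

# Compare with the MLE (sample mean): the prior shrinks MAP toward 0.5.
p_mle = k / n
print(p_mle, p_map)  # 0.7 vs ~0.667
```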
πŸ“š Theory · Intermediate

Bayesian Inference

Bayesian inference updates prior beliefs with observed data to produce a posterior distribution P(θ | D).

#bayesian inference · #posterior · #prior (+12 more)
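The update can be sketched with the conjugate Beta-Bernoulli pair (hypothetical observations): each data point turns the prior Beta(α, β) into a new posterior by incrementing the pseudo-counts, a concrete instance of posterior ∝ likelihood × prior.

```python
# Start from a uniform Beta(1, 1) prior over the Bernoulli parameter p.
alpha, beta = 1.0, 1.0

# Each observation updates the pseudo-counts: posterior is Beta(alpha + k, beta + n - k).
for x in [1, 0, 1, 1, 1]:  # hypothetical observations
    alpha += x
    beta += 1 - x

# Posterior is Beta(5, 2); its mean is one point summary of belief about p.
posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)  # 5.0 2.0 ~0.714
```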
πŸ“š Theory · Intermediate

Bias-Variance Tradeoff

The bias–variance tradeoff explains how prediction error splits into bias squared, variance, and irreducible noise for squared loss.

#bias variance tradeoff · #mse decomposition · #polynomial regression (+12 more)
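A simulation sketch of the tradeoff under assumed Gaussian data: an unbiased sample mean has higher variance, while a deliberately shrunk mean trades bias for lower variance. (The irreducible-noise term is omitted here because we estimate a parameter rather than predict a noisy label.)

```python
import random

random.seed(0)
true_mu, noise_sd, n = 2.0, 1.0, 5  # assumed data-generating process

def simulate(estimator, trials=20000):
    """Monte Carlo estimate of bias^2 and variance of an estimator of true_mu."""
    ests = []
    for _ in range(trials):
        sample = [random.gauss(true_mu, noise_sd) for _ in range(n)]
        ests.append(estimator(sample))
    mean_est = sum(ests) / len(ests)
    bias_sq = (mean_est - true_mu) ** 2
    variance = sum((e - mean_est) ** 2 for e in ests) / len(ests)
    return bias_sq, variance

sample_mean = lambda xs: sum(xs) / len(xs)        # unbiased, higher variance
shrunk_mean = lambda xs: 0.5 * sum(xs) / len(xs)  # biased, lower variance

b1, v1 = simulate(sample_mean)  # bias^2 ~ 0, variance ~ noise_sd^2 / n = 0.2
b2, v2 = simulate(shrunk_mean)  # bias^2 ~ 1.0, variance ~ 0.05
print(b1, v1)
print(b2, v2)
```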
βˆ‘ Math · Intermediate

Hypothesis Testing

Hypothesis testing is a decision-making process to evaluate claims about a population using sample data.

#hypothesis testing · #null hypothesis · #alternative hypothesis (+12 more)
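A minimal sketch of the decision process on hypothetical data: a one-sample test of H₀: μ = 0, using the normal approximation to the test statistic (an exact t-test would use the t distribution instead).

```python
import math
import statistics

sample = [0.9, 1.4, 0.3, 1.1, 0.7, 1.6, 0.2, 1.0]  # hypothetical measurements
n = len(sample)

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
z = (mean - 0.0) / se                         # test statistic under H0: mu = 0

# Two-sided p-value from the standard normal CDF (via the error function).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
reject = p_value < 0.05
print(z, p_value, reject)
```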
βš™οΈAlgorithmIntermediate

Bootstrap & Resampling Methods

Bootstrap is a resampling method that estimates uncertainty by repeatedly sampling with replacement from the observed data.

#bootstrap · #resampling · #confidence intervals (+12 more)
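A sketch of the percentile bootstrap on a hypothetical sample: resample with replacement, recompute the statistic (here the median) each time, and read a 95% interval off the empirical distribution of resampled statistics.

```python
import random
import statistics

random.seed(0)
data = [3.1, 4.2, 2.8, 5.0, 3.7, 4.4, 2.9, 3.3, 4.8, 3.6]  # hypothetical sample

boot_medians = []
for _ in range(5000):
    resample = random.choices(data, k=len(data))  # sample WITH replacement
    boot_medians.append(statistics.median(resample))
boot_medians.sort()

# 95% percentile interval from the bootstrap distribution.
lo = boot_medians[int(0.025 * len(boot_medians))]
hi = boot_medians[int(0.975 * len(boot_medians))]
print(lo, hi)
```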
βˆ‘ Math · Intermediate

Sufficient Statistics

A sufficient statistic compresses all information in the sample about a parameter into a lower-dimensional summary without losing inferential power.

#sufficient statistic · #fisher neyman factorization · #exponential family (+12 more)
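A concrete sketch of sufficiency for Bernoulli data: the count of successes is sufficient for p, so two hypothetical samples with the same sum (but different orderings) yield identical likelihood functions, and any likelihood-based inference treats them the same.

```python
import math

def log_likelihood(p, xs):
    """Bernoulli log-likelihood; depends on xs only through sum(xs) and len(xs)."""
    return sum(math.log(p) if x else math.log(1 - p) for x in xs)

a = [1, 1, 0, 0, 1]  # sum = 3
b = [0, 1, 1, 1, 0]  # sum = 3, different ordering

# Equal likelihoods at every p: the count carries all the information about p.
for p in [0.2, 0.5, 0.8]:
    assert math.isclose(log_likelihood(p, a), log_likelihood(p, b))
print("identical likelihood functions: inference uses only the count")
```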
πŸ“š Theory · Intermediate

Empirical Risk Minimization

Empirical Risk Minimization (ERM) chooses a model that minimizes the average loss on the training data.

#empirical risk minimization · #expected risk · #loss function (+12 more)
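A toy sketch of ERM over the one-parameter class f_w(x) = w·x with squared loss, on hypothetical (x, y) pairs: the empirical risk is the training-set average loss, and minimizing it by grid search agrees with the closed-form least-squares solution through the origin.

```python
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs

def empirical_risk(w):
    """Average squared loss of f_w(x) = w * x over the training data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Closed form for least squares through the origin: w* = sum(x*y) / sum(x^2).
w_star = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Grid search over w picks (approximately) the same minimizer.
grid = [i / 1000 for i in range(0, 4000)]
w_grid = min(grid, key=empirical_risk)
print(w_star, w_grid)
```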
βˆ‘ Math · Intermediate

Confidence Intervals & Prediction Intervals

A confidence interval estimates a fixed but unknown parameter (like a population mean) with a range that would capture the true value in a long run of repeated samples; a prediction interval instead covers a single future observation, so it must be wider.

#confidence interval · #prediction interval · #t distribution (+12 more)
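A sketch of an approximate 95% interval for a mean on hypothetical measurements, using the normal critical value for simplicity; a proper small-sample interval would use the t distribution, which widens the interval.

```python
import math
import statistics

sample = [5.2, 4.8, 5.5, 5.1, 4.9, 5.3, 5.0, 5.4]  # hypothetical measurements
n = len(sample)

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean
z = statistics.NormalDist().inv_cdf(0.975)     # ~1.96 for a 95% interval

ci = (mean - z * se, mean + z * se)
print(mean, ci)
```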