Maximum likelihood, Bayesian inference, and hypothesis testing for building and evaluating ML models.
9 concepts
Maximum Likelihood Estimation (MLE) chooses parameters that make the observed data most probable under a chosen model.
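A minimal MLE sketch, using a hypothetical coin-flip dataset: the Bernoulli log-likelihood is maximized over a grid of candidate parameters, and the maximizer agrees with the closed-form answer h/n.

```python
import math

# Hypothetical coin-flip data: 1 = heads.
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
n, h = len(data), sum(data)

def log_likelihood(p):
    # log L(p) for i.i.d. Bernoulli(p) data with h successes in n trials
    return h * math.log(p) + (n - h) * math.log(1 - p)

# Grid search over (0, 1); the closed-form MLE is the sample proportion h/n.
grid = [i / 1000 for i in range(1, 1000)]
p_mle = max(grid, key=log_likelihood)
print(p_mle)  # 0.7, equal to h/n
```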
Maximum A Posteriori (MAP) estimation chooses the parameter value with the highest posterior probability after seeing data.
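A MAP sketch under assumed choices (Bernoulli likelihood, Beta(2, 2) prior): the posterior mode has the closed form (h + a − 1) / (n + a + b − 2), which shrinks the MLE h/n toward the prior mean.

```python
# Hypothetical data and prior pseudo-counts.
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
n, h = len(data), sum(data)
a, b = 2.0, 2.0  # Beta(a, b) prior, assumed for illustration

p_mle = h / n                              # ignores the prior
p_map = (h + a - 1) / (n + a + b - 2)      # posterior mode
print(p_mle, p_map)  # MAP is pulled toward the prior mean 0.5
```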
Bayesian inference updates prior beliefs with observed data to produce a posterior distribution P(θ | D).
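A conjugate-update sketch (assumed setup: Beta prior, Bernoulli data): the posterior is again a Beta distribution, so P(θ | D) is available in closed form by adding observed successes and failures to the prior pseudo-counts.

```python
a, b = 1.0, 1.0          # uniform Beta(1, 1) prior
data = [1, 1, 0, 1, 1]   # hypothetical observations
a_post = a + sum(data)                # prior successes + observed successes
b_post = b + len(data) - sum(data)    # prior failures + observed failures
posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, posterior_mean)  # Beta(5, 2) posterior
```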
The bias-variance tradeoff explains how prediction error splits into squared bias, variance, and irreducible noise for squared loss.
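A Monte Carlo sketch of the decomposition, using an assumed setup (a deliberately biased shrinkage estimator of a Gaussian mean): the simulated mean squared error matches squared bias plus variance.

```python
import random

random.seed(0)
mu, sigma, n = 2.0, 1.0, 5  # assumed true mean, noise scale, sample size
estimates = []
for _ in range(20000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(0.5 * sum(sample) / n)  # biased: shrinks the mean by half

mean_est = sum(estimates) / len(estimates)
bias2 = (mean_est - mu) ** 2
var = sum((e - mean_est) ** 2 for e in estimates) / len(estimates)
mse = sum((e - mu) ** 2 for e in estimates) / len(estimates)
print(bias2 + var, mse)  # the two agree up to floating-point error
```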
Hypothesis testing is a decision-making process to evaluate claims about a population using sample data.
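A hypothesis-test sketch with an assumed example: an exact two-sided binomial test of H0: p = 0.5 after observing 9 heads in 12 flips, computed by summing the probabilities of all outcomes at least as likely to be extreme as the observed one.

```python
from math import comb

n, k, p0 = 12, 9, 0.5  # hypothetical trial count, observed heads, null value

def pmf(i):
    # probability of exactly i heads under H0
    return comb(n, i) * p0**i * (1 - p0)**(n - i)

p_obs = pmf(k)
# Two-sided p-value: total probability of outcomes no more likely than k.
p_value = sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_obs + 1e-12)
print(round(p_value, 3))  # 0.146, so H0 is not rejected at alpha = 0.05
```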
Bootstrap is a resampling method that estimates uncertainty by repeatedly sampling with replacement from the observed data.
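A bootstrap sketch on a made-up sample: the standard error of the sample median is estimated by recomputing the median on many resamples drawn with replacement.

```python
import random
import statistics

random.seed(0)
data = [3.1, 2.7, 4.0, 3.5, 2.9, 3.8, 3.3, 4.2, 2.5, 3.6]  # hypothetical

meds = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]  # sample with replacement
    meds.append(statistics.median(resample))

se = statistics.stdev(meds)  # bootstrap standard error of the median
print(se)
```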
A sufficient statistic compresses all information in the sample about a parameter into a lower-dimensional summary without losing inferential power.
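A sufficiency sketch for an assumed Bernoulli example: the sum of the observations is sufficient for p, so two samples with the same sum yield the same likelihood at every candidate p, regardless of ordering.

```python
def likelihood(data, p):
    # product of Bernoulli(p) probabilities for one sample
    out = 1.0
    for x in data:
        out *= p if x == 1 else 1 - p
    return out

d1 = [1, 0, 1, 1, 0]  # hypothetical sample, sum = 3
d2 = [0, 1, 1, 0, 1]  # different ordering, same sum = 3
same = all(abs(likelihood(d1, p) - likelihood(d2, p)) < 1e-15
           for p in (0.2, 0.5, 0.8))
print(same)  # True: the likelihood depends on the data only through the sum
```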
Empirical Risk Minimization (ERM) chooses a model that minimizes the average loss on the training data.
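An ERM sketch over an assumed model class of constant predictors f(x) = c: minimizing average squared loss on the training labels recovers their sample mean.

```python
ys = [1.0, 2.0, 2.5, 4.0, 3.5]  # hypothetical training labels

def empirical_risk(c):
    # average squared loss of the constant predictor c on the training set
    return sum((y - c) ** 2 for y in ys) / len(ys)

grid = [i / 100 for i in range(0, 501)]
c_hat = min(grid, key=empirical_risk)
print(c_hat, sum(ys) / len(ys))  # the ERM solution equals the sample mean
```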
A confidence interval estimates a fixed but unknown parameter (like a population mean) with a range that would capture the true value in a long run of repeated samples.
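A confidence-interval sketch on a made-up sample: a 95% normal-approximation interval for the population mean, x̄ ± 1.96 · s / √n.

```python
import math
import statistics

data = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7, 5.4, 5.0]  # hypothetical
n = len(data)
x_bar = statistics.mean(data)
s = statistics.stdev(data)          # sample standard deviation
half = 1.96 * s / math.sqrt(n)      # half-width for a 95% interval
print(x_bar - half, x_bar + half)
```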