Random variables, distributions, and Bayesian reasoning: foundational for understanding uncertainty in ML.
12 concepts
Kolmogorov's axioms define probability as a measure on events: non-negativity, normalization, and countable additivity.
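A minimal sketch of the three axioms, checked on a hypothetical finite sample space (a fair six-sided die); the probabilities here are illustrative, not from the source:

```python
from fractions import Fraction

# Hypothetical finite sample space: a fair six-sided die.
P = {face: Fraction(1, 6) for face in range(1, 7)}

# Non-negativity: every outcome has probability >= 0.
assert all(p >= 0 for p in P.values())

# Normalization: the whole sample space has probability 1.
assert sum(P.values()) == 1

# Finite additivity (special case of countable additivity):
# P(A ∪ B) = P(A) + P(B) for disjoint events A and B.
A, B = {1, 2}, {5}  # disjoint events
prob = lambda event: sum(P[o] for o in event)
assert prob(A | B) == prob(A) + prob(B)
print("all three axioms hold on this finite space")
```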
A random variable maps uncertain outcomes to numbers and is described by a distribution that assigns likelihoods to values or ranges.
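As a small illustration (a hypothetical fair coin, not from the source), a random variable maps outcomes to numbers, and sampled frequencies approximate its distribution:

```python
import random
from collections import Counter

random.seed(0)

# A random variable X: a coin flip mapped to a number
# (heads -> 1, tails -> 0). Its distribution assigns a
# likelihood to each value; here P(X=1) = P(X=0) = 0.5.
def X():
    return 1 if random.random() < 0.5 else 0

samples = [X() for _ in range(10_000)]
empirical = {v: c / len(samples) for v, c in Counter(samples).items()}
# Empirical frequencies approximate the true distribution.
print(empirical)  # roughly {0: 0.5, 1: 0.5}
```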
Bayes' Theorem tells you how to update the probability of a hypothesis after seeing new evidence.
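A worked update with hypothetical diagnostic-test numbers (the rates below are illustrative assumptions, not from the source):

```python
# Hypothetical diagnostic-test numbers (illustrative only):
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E),
# with P(E) expanded by the law of total probability.
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence
print(f"P(disease | positive) = {posterior:.3f}")  # ≈ 0.161
```

Even with a highly accurate test, the low prior keeps the posterior modest, which is the point of updating rather than reading off the test accuracy.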
Expectation is the long-run average value of a random variable and acts like the balance point of its distribution.
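Both readings of expectation, sketched on a hypothetical fair die: the exact balance point from the definition, and the long-run average from simulation:

```python
import random

random.seed(1)

# Exact expectation of a fair die: sum of value * probability.
values = range(1, 7)
exact = sum(v * (1 / 6) for v in values)  # 3.5, the balance point

# Long-run average: a Monte Carlo estimate approaches the expectation.
n = 100_000
estimate = sum(random.choice(values) for _ in range(n)) / n
print(exact, round(estimate, 2))  # estimate lands near 3.5
```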
A multivariate Gaussian (normal) distribution models a vector of real-valued variables with a bell-shaped probability hill in many dimensions.
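A stdlib-only sketch of sampling from a 2-D Gaussian via the standard Cholesky trick; the mean and covariance values are assumptions for illustration:

```python
import math
import random

random.seed(2)

# Sample from N(mu, Sigma) using the Cholesky trick:
# if z ~ N(0, I), then mu + L z ~ N(mu, Sigma) with Sigma = L L^T.
mu = (1.0, -2.0)
Sigma = [[2.0, 0.6],
         [0.6, 1.0]]

# Cholesky factor of a 2x2 covariance (assumed positive definite).
l11 = math.sqrt(Sigma[0][0])
l21 = Sigma[1][0] / l11
l22 = math.sqrt(Sigma[1][1] - l21 ** 2)

def sample():
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return (mu[0] + l11 * z1, mu[1] + l21 * z1 + l22 * z2)

xs = [sample() for _ in range(50_000)]
mean_x = sum(p[0] for p in xs) / len(xs)
mean_y = sum(p[1] for p in xs) / len(xs)
print(round(mean_x, 1), round(mean_y, 1))  # close to mu = (1.0, -2.0)
```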
A Markov chain models a system that moves between states where the next step depends only on the current state, not the past.
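A two-state weather chain as a minimal sketch (the transition probabilities are hypothetical): each step looks only at the current state, and long-run visit frequencies approach the stationary distribution:

```python
import random

random.seed(3)

# Hypothetical two-state weather chain; rows sum to 1.
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def step(state):
    # The next state depends only on the current state.
    return "sunny" if random.random() < P[state]["sunny"] else "rainy"

# Simulate a long run; the fraction of time spent sunny approaches
# the stationary distribution (here sunny 5/6 ≈ 0.833).
state, sunny_count, n = "sunny", 0, 200_000
for _ in range(n):
    state = step(state)
    sunny_count += state == "sunny"
print(round(sunny_count / n, 2))  # ≈ 0.83
```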
Exponential family distributions express many common probability models in a single template p(x|η) = h(x) exp(η^T T(x) − A(η)).
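A concrete instance of the template: Bernoulli(p) written in exponential-family form, with h(x) = 1, sufficient statistic T(x) = x, natural parameter η = log(p/(1−p)), and log-partition A(η) = log(1 + e^η). The sketch checks that this form reproduces the direct pmf:

```python
import math

# Bernoulli(p) in exponential-family form:
#   p(x | eta) = h(x) exp(eta * T(x) - A(eta))
p = 0.3
eta = math.log(p / (1 - p))        # natural parameter (log-odds)
A = math.log(1 + math.exp(eta))    # log-partition function

for x in (0, 1):
    ef_form = math.exp(eta * x - A)            # template form
    direct = p ** x * (1 - p) ** (1 - x)       # direct Bernoulli pmf
    assert abs(ef_form - direct) < 1e-12
print("exponential-family form matches the direct Bernoulli pmf")
```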
Concentration inequalities give high-probability bounds that random outcomes stay close to their expectations, even without knowing the full distribution.
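A sketch of one such bound, Hoeffding's inequality for bounded i.i.d. variables (the uniform draws and constants are illustrative assumptions); note the bound needs only boundedness, not the full distribution:

```python
import math
import random

random.seed(4)

# Hoeffding's inequality for n i.i.d. variables in [0, 1]:
#   P(|mean - E[mean]| >= t) <= 2 exp(-2 n t^2)
n, t, trials = 100, 0.1, 20_000
bound = 2 * math.exp(-2 * n * t ** 2)  # = 2 e^{-2} ≈ 0.27

deviations = 0
for _ in range(trials):
    mean = sum(random.random() for _ in range(n)) / n  # E[mean] = 0.5
    deviations += abs(mean - 0.5) >= t
# The empirical deviation rate sits (well) below the bound.
print(deviations / trials, "<=", round(bound, 2))
```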
The Central Limit Theorem (CLT) says that the sum or average of many independent, identically distributed variables with finite variance becomes approximately normal (Gaussian).
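A quick empirical check of the CLT (averages of Uniform(0,1) draws, an illustrative choice): if the average is approximately Gaussian, about 68% of averages should fall within one standard deviation of the mean:

```python
import math
import random

random.seed(5)

# Average of n i.i.d. Uniform(0,1) draws: mean 0.5, variance 1/(12n).
n, trials = 50, 20_000
sd = math.sqrt(1 / 12 / n)

within_one_sd = 0
for _ in range(trials):
    avg = sum(random.random() for _ in range(n)) / n
    within_one_sd += abs(avg - 0.5) <= sd
print(round(within_one_sd / trials, 2))  # ≈ 0.68, as for a Gaussian
```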
The Weak Law of Large Numbers (WLLN) says that the sample average of independent, identically distributed (i.i.d.) random variables with finite mean gets close to the true mean with high probability as the sample size grows.
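A sketch of the WLLN in action with hypothetical fair-coin flips (true mean 0.5): as n grows, the sample mean's deviation from the true mean tends to shrink:

```python
import random

random.seed(6)

# Sample mean of i.i.d. coin flips (true mean 0.5) concentrates
# around 0.5 as the sample size n grows.
for n in (100, 10_000, 1_000_000):
    mean = sum(random.random() < 0.5 for _ in range(n)) / n
    print(n, round(abs(mean - 0.5), 4))  # deviation tends to shrink
```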