How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (95)

Groups

๐Ÿ“Linear Algebra15๐Ÿ“ˆCalculus & Differentiation10๐ŸŽฏOptimization14๐ŸŽฒProbability Theory12๐Ÿ“ŠStatistics for ML9๐Ÿ“กInformation Theory10๐Ÿ”บConvex Optimization7๐Ÿ”ขNumerical Methods6๐Ÿ•ธGraph Theory for Deep Learning6๐Ÿ”ตTopology for ML5๐ŸŒDifferential Geometry6โˆžMeasure Theory & Functional Analysis6๐ŸŽฐRandom Matrix Theory5๐ŸŒŠFourier Analysis & Signal Processing9๐ŸŽฐSampling & Monte Carlo Methods10๐Ÿง Deep Learning Theory12๐Ÿ›ก๏ธRegularization Theory11๐Ÿ‘๏ธAttention & Transformer Theory10๐ŸŽจGenerative Model Theory11๐Ÿ”ฎRepresentation Learning10๐ŸŽฎReinforcement Learning Mathematics9๐Ÿ”„Variational Methods8๐Ÿ“‰Loss Functions & Objectives10โฑ๏ธSequence & Temporal Models8๐Ÿ’ŽGeometric Deep Learning8

๐Ÿ“š Theory · Intermediate

Topological Data Analysis (TDA)

Topological Data Analysis (TDA) studies the shape of data using tools from algebraic topology, producing summaries like Betti numbers, barcodes, and persistence diagrams.

#topological data analysis · #persistent homology · #vietoris–rips complex · +12
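The simplest persistent-homology summary mentioned above, the Betti-0 number, counts connected components of the Vietoris–Rips complex at a given scale. A minimal sketch, using an illustrative point cloud and scale (not data from the card), via union-find:

```python
# Betti-0 (number of connected components) of a Vietoris-Rips complex
# at one fixed scale eps, computed with union-find.
import math

def betti_0(points, eps):
    """Count connected components when points within eps are joined."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)  # union the two components

    return len({find(i) for i in range(len(points))})

# Two well-separated clusters: Betti-0 = 2 at a small scale,
# merging into 1 component at a large scale.
cloud = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5)]
print(betti_0(cloud, 0.2))   # 2
print(betti_0(cloud, 10.0))  # 1
```

Tracking how these counts change as eps sweeps from small to large is exactly what a barcode or persistence diagram records.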
๐Ÿ“š Theory · Intermediate

Graph Isomorphism & WL Test

Graph isomorphism asks whether two graphs are the same up to renaming vertices; the Weisfeilerโ€“Leman (WL) test is a powerful heuristic that often distinguishes non-isomorphic graphs quickly.

#weisfeiler-leman · #color refinement · #graph isomorphism · +10
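The 1-dimensional WL test (color refinement) described above fits in a few lines: repeatedly recolor each node by hashing its own color together with the multiset of its neighbors' colors. A minimal sketch on illustrative adjacency-list graphs:

```python
# 1-WL color refinement: different color histograms prove two graphs
# are NOT isomorphic; equal histograms are inconclusive.
def wl_colors(adj, rounds=3):
    """Refine node colors by hashing each node's (color, neighbor colors)."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return sorted(colors.values())  # color histogram, order-independent

def maybe_isomorphic(adj1, adj2):
    """WL heuristic: False is a proof of non-isomorphism; True is only a guess."""
    return wl_colors(adj1) == wl_colors(adj2)

# A triangle vs. a path on 3 nodes: same node count, but WL tells them apart.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(maybe_isomorphic(triangle, path))      # False
print(maybe_isomorphic(triangle, triangle))  # True
```

The "heuristic" caveat matters: there are non-isomorphic graph pairs (e.g. certain regular graphs) that 1-WL cannot distinguish.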
๐Ÿ“š Theory · Intermediate

Message Passing Framework

Message Passing Neural Networks (MPNNs) learn on graphs by letting nodes repeatedly exchange and aggregate messages from their neighbors.

#message passing neural network · #mpnn · #graph neural network · +12
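One exchange-and-aggregate round can be sketched without any learned parameters; real MPNNs replace the hand-coded aggregate and update below with neural networks. The graph and scalar features are illustrative:

```python
# One message-passing round: each node sums messages (here, raw features)
# from its neighbors, then combines them with its own state.
def message_passing_round(adj, features):
    """New feature = average of a node's own feature and the summed
    neighbor features (a stand-in for learned message/update functions)."""
    new_features = {}
    for v, nbrs in adj.items():
        msg = sum(features[u] for u in nbrs)       # aggregate step
        new_features[v] = (features[v] + msg) / 2  # update step
    return new_features

# Tiny path graph 0 - 1 - 2 with scalar node features.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1.0, 1: 0.0, 2: 3.0}
print(message_passing_round(adj, feats))  # {0: 0.5, 1: 2.0, 2: 1.5}
```

Stacking k such rounds lets information travel k hops, which is why MPNN depth controls the receptive field on the graph.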
๐Ÿ“š Theory · Intermediate

Cross-Entropy

Cross-entropy measures how well a proposed distribution Q predicts outcomes actually generated by a true distribution P.

#cross-entropy · #entropy · #kl divergence · +12
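The definition is H(P, Q) = -Σ p(x) log q(x); it bottoms out at the entropy of P exactly when Q = P. A minimal sketch with illustrative distributions, in nats:

```python
# Cross-entropy H(P, Q) = -sum p * log q, measured in nats.
import math

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]       # true distribution (a fair coin)
q_good = [0.5, 0.5]  # perfect model: H(P, Q) equals the entropy of P
q_bad = [0.9, 0.1]   # miscalibrated model pays extra nats per outcome

print(cross_entropy(p, q_good))  # ~0.693 (= ln 2)
print(cross_entropy(p, q_bad))   # ~1.204, strictly larger
```

This is why minimizing cross-entropy loss against one-hot labels is the standard training objective for classifiers.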
๐Ÿ“š Theory · Intermediate

KL Divergence

KL divergence measures how much information is lost when using model Q to approximate the true distribution P.

#kl divergence · #relative entropy · #cross-entropy · +12
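Concretely, D_KL(P ‖ Q) = Σ p(x) log(p(x)/q(x)): the extra nats paid for coding samples from P with a code built for Q. A minimal sketch with illustrative distributions, also checking the identity D_KL = H(P, Q) − H(P):

```python
# KL divergence D_KL(P || Q) = sum p * log(p / q), in nats.
import math

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]

# KL is exactly the gap between cross-entropy H(P, Q) and entropy H(P):
entropy_p = -sum(pi * math.log(pi) for pi in p)
cross = -sum(pi * math.log(qi) for pi, qi in zip(p, q))

print(kl_divergence(p, q))  # ~0.511
print(cross - entropy_p)    # same value
print(kl_divergence(p, p))  # 0.0 -- KL is zero iff Q matches P
```

Note KL is asymmetric: D_KL(P ‖ Q) generally differs from D_KL(Q ‖ P).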
๐Ÿ“š Theory · Intermediate

Empirical Risk Minimization

Empirical Risk Minimization (ERM) chooses a model that minimizes the average loss on the training data.

#empirical risk minimization · #expected risk · #loss function · +12
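A minimal sketch of the idea: the empirical risk is the training-set average of a loss, and ERM picks the parameter minimizing it. The one-parameter linear model y ≈ w·x and the data points below are illustrative:

```python
# ERM for y ~ w * x under squared loss: minimize the average training
# loss over w by plain gradient descent.
def empirical_risk(w, data):
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

def erm_fit(data, lr=0.01, steps=2000):
    """Gradient descent on the empirical risk with respect to w."""
    w = 0.0
    for _ in range(steps):
        grad = sum(-2 * x * (y - w * x) for x, y in data) / len(data)
        w -= lr * grad
    return w

data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # roughly y = 2x with noise
w_hat = erm_fit(data)
print(w_hat)  # close to the closed-form minimizer 28.5 / 14
```

The gap between this training-average risk and the expected risk on new data is exactly what generalization theory studies.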
๐Ÿ“š Theory · Intermediate

Bayesian Inference

Bayesian inference updates prior beliefs with observed data to produce a posterior distribution P(θ | D).

#bayesian inference · #posterior · #prior · +12
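The cleanest worked case of this update is the conjugate Beta-Binomial model for a coin's bias θ: a Beta(a, b) prior plus observed heads and tails gives a Beta(a + heads, b + tails) posterior. The prior and counts below are illustrative:

```python
# Conjugate Bayesian update: Beta prior + Binomial likelihood -> Beta posterior.
def posterior(a, b, heads, tails):
    """Beta(a, b) prior updated with coin-flip data."""
    return a + heads, b + tails

a, b = 1, 1  # Beta(1, 1) = uniform prior over theta
a_post, b_post = posterior(a, b, heads=7, tails=3)

posterior_mean = a_post / (a_post + b_post)
print((a_post, b_post))  # (8, 4)
print(posterior_mean)    # ~0.667: between the data's 0.7 and the prior's 0.5
```

The posterior mean sitting between the sample frequency and the prior mean is the hallmark of Bayesian shrinkage; more data pulls it toward the data.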
๐Ÿ“š Theory · Intermediate

Loss Landscape Analysis

A loss landscape is the โ€œterrainโ€ of a modelโ€™s loss as you move through parameter space; valleys are good solutions and peaks are bad ones.

#loss landscape · #sharpness · #hessian eigenvalues · +12
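One quantitative handle on that terrain is sharpness: the curvature (second derivative, or Hessian eigenvalues in higher dimensions) of the loss at a minimum. A minimal one-parameter sketch with two illustrative toy losses, estimating curvature by a central finite difference:

```python
# "Sharpness" of a minimum as the local curvature of the loss, estimated
# with a central second-order finite difference.
def curvature(loss, w, h=1e-4):
    return (loss(w + h) - 2 * loss(w) + loss(w - h)) / h ** 2

flat_loss = lambda w: 0.5 * w ** 2    # wide, flat valley at w = 0
sharp_loss = lambda w: 50.0 * w ** 2  # narrow, sharp valley at w = 0

print(curvature(flat_loss, 0.0))   # ~1
print(curvature(sharp_loss, 0.0))  # ~100
```

Both minima have identical loss values; only the curvature distinguishes them, which is why sharpness-based analyses look at Hessian spectra rather than loss values alone.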
๐Ÿ“š Theory · Intermediate

Weight Initialization Strategies

Weight initialization sets the starting values of neural network parameters so signals and gradients neither explode nor vanish as they pass through layers.

#xavier · #glorot · #he · +12
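The variance argument behind Xavier/Glorot-style initialization can be checked directly: a pre-activation is a sum of fan_in weighted inputs, so its variance is fan_in times the weight variance, and scaling weights by 1/sqrt(fan_in) keeps it near 1. A pure-Python simulation with illustrative sizes:

```python
# Why initialization scale matters: variance of sum_i w_i * x_i with
# w_i ~ N(0, scale^2) and x_i ~ N(0, 1) is fan_in * scale^2.
import random

random.seed(0)
fan_in = 256

def layer_output_variance(scale, trials=1000):
    """Empirical variance of a single pre-activation over many draws."""
    outputs = []
    for _ in range(trials):
        out = sum(random.gauss(0, scale) * random.gauss(0, 1)
                  for _ in range(fan_in))
        outputs.append(out)
    mean = sum(outputs) / trials
    return sum((o - mean) ** 2 for o in outputs) / trials

var_raw = layer_output_variance(1.0)            # unit-variance weights
var_scaled = layer_output_variance(fan_in ** -0.5)  # Xavier-style scale

print(var_raw)     # ~256: activations blow up layer after layer
print(var_scaled)  # ~1: variance preserved through the layer
```

He initialization applies the same reasoning with an extra factor of 2 to compensate for ReLU zeroing out half the activations.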
๐Ÿ“š Theory · Intermediate

Automatic Differentiation

Automatic differentiation (AD) computes exact derivatives by systematically applying the chain rule to your program, not by symbolic algebra or numerical differences.

#automatic differentiation · #dual numbers · #forward mode · +12
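Forward-mode AD with dual numbers makes the "chain rule applied to your program" idea concrete: every value carries a (value, derivative) pair, and each operation propagates both. A minimal sketch (this tiny `Dual` class is illustrative, not a real AD library):

```python
# Forward-mode automatic differentiation via dual numbers: each Dual
# carries (value, derivative) and every operation applies the chain rule.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)  # sum rule
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

def derivative(f, x):
    """Seed the input's derivative with 1 and read off f'(x)."""
    return f(Dual(x, 1.0)).dot

# f(x) = x^2 + sin(x), so f'(x) = 2x + cos(x); exact, not a finite difference.
f = lambda x: x * x + sin(x)
print(derivative(f, 0.0))  # 1.0 (= 2*0 + cos 0)
```

Reverse mode (backpropagation) computes the same chain-rule products in the opposite order, which is cheaper when there are many inputs and one output.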
๐Ÿ“š Theory · Intermediate

Concentration Inequalities

Concentration inequalities give high-probability bounds that random outcomes stay close to their expectations, even without knowing the full distribution.

#concentration inequalities · #hoeffding inequality · #chernoff bound · +12
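Hoeffding's inequality is the standard example: for n bounded i.i.d. samples in [0, 1], P(|mean − E[mean]| ≥ t) ≤ 2 exp(−2 n t²), with no further distributional assumptions. A minimal simulation (sample sizes and thresholds below are illustrative) comparing the bound to fair-coin reality:

```python
# Hoeffding's inequality for the mean of n fair coin flips:
# P(|mean - 1/2| >= t) <= 2 * exp(-2 * n * t^2).
import math
import random

random.seed(1)
n, t, trials = 100, 0.1, 2000

bound = 2 * math.exp(-2 * n * t ** 2)  # ~0.271, distribution-free

deviations = 0
for _ in range(trials):
    mean = sum(random.randint(0, 1) for _ in range(n)) / n
    if abs(mean - 0.5) >= t:
        deviations += 1

empirical = deviations / trials
print(bound)      # ~0.271
print(empirical)  # much smaller: the bound holds, if loosely, for coins
```

The looseness is the price of generality; for a specific distribution, tighter Chernoff-style bounds are available.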
๐Ÿ“š Theory · Intermediate

Markov Chain Theory

A Markov chain is a random process where the next state depends only on the current state, not the full history.

#markov chain · #transition matrix · #stationary distribution · +12
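The memoryless property means the chain is fully described by a transition matrix P, and repeatedly applying P to any starting distribution converges (for a well-behaved chain) to the stationary distribution π solving π = πP. A minimal two-state sketch with illustrative transition probabilities:

```python
# A 2-state Markov chain: find the stationary distribution by repeatedly
# applying the transition matrix to a starting distribution.
P = [[0.9, 0.1],  # P[i][j] = probability of moving from state i to state j
     [0.5, 0.5]]

dist = [1.0, 0.0]  # start surely in state 0
for _ in range(100):
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

# The stationary distribution solves pi = pi P; here pi = (5/6, 1/6).
print(dist)  # ~[0.833, 0.167]
```

The same power-iteration idea, at scale, underlies PageRank and the convergence analysis of MCMC samplers.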