How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (356)

Groups

📐 Linear Algebra (15)
📈 Calculus & Differentiation (10)
🎯 Optimization (14)
🎲 Probability Theory (12)
📊 Statistics for ML (9)
📡 Information Theory (10)
🔺 Convex Optimization (7)
🔢 Numerical Methods (6)
🕸 Graph Theory for Deep Learning (6)
🔵 Topology for ML (5)
🌐 Differential Geometry (6)
∞ Measure Theory & Functional Analysis (6)
🎰 Random Matrix Theory (5)
🌊 Fourier Analysis & Signal Processing (9)
🎰 Sampling & Monte Carlo Methods (10)
🧠 Deep Learning Theory (12)
🛡️ Regularization Theory (11)
👁️ Attention & Transformer Theory (10)
🎨 Generative Model Theory (11)
🔮 Representation Learning (10)
🎮 Reinforcement Learning Mathematics (9)
🔄 Variational Methods (8)
📉 Loss Functions & Objectives (10)
⏱️ Sequence & Temporal Models (8)
💎 Geometric Deep Learning (8)

📚 Theory · Intermediate

Value Function Approximation

Value function approximation replaces a huge table of values with a small set of parameters that can generalize across similar states.

#reinforcement learning · #value function approximation · #linear function approximator · +12
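A minimal numpy sketch of the idea, assuming a linear approximator v̂(s) = wᵀφ(s) with a hypothetical polynomial feature map; the semi-gradient update nudges the weights toward a given target:

```python
import numpy as np

def features(state, n_features=4):
    # Hypothetical feature map: polynomial features of a scalar state.
    return np.array([state**i for i in range(n_features)], dtype=float)

def predict(w, state):
    # Linear value estimate v_hat(s) = w . phi(s)
    return w @ features(state)

def semi_gradient_update(w, state, target, alpha=0.01):
    # For a linear approximator the gradient of v_hat w.r.t. w is just
    # the feature vector, so the update is (target - v_hat) * phi(s).
    phi = features(state)
    return w + alpha * (target - w @ phi) * phi

w = np.zeros(4)
for _ in range(500):
    w = semi_gradient_update(w, state=0.5, target=2.0)
print(round(predict(w, 0.5), 2))  # estimate has converged near the target, 2.0
```

Because the weights are shared across all states, states with similar features get similar values — the generalization the card describes.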
⚙️ Algorithm · Intermediate

PPO & Trust Region Methods

Proximal Policy Optimization (PPO) stabilizes policy gradient learning by preventing each update from moving the policy too far from the previous one.

#ppo · #trust region · #trpo · +11
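A minimal numpy sketch of PPO's clipped surrogate objective for single samples, assuming advantages and probability ratios are already computed:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    # PPO's clipped surrogate: take the minimum of the unclipped and
    # clipped probability-ratio terms, so a large policy move earns
    # no extra objective value beyond the clip boundary.
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return np.minimum(unclipped, clipped)

# A ratio of 1.5 with positive advantage is capped at (1 + eps) * A.
print(ppo_clip_objective(np.array([1.5]), np.array([2.0])))
```

The clip is what keeps each update close to the previous policy, standing in for the explicit trust-region constraint of TRPO.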
⚙️ Algorithm · Intermediate

Temporal Difference Learning

Temporal Difference (TD) Learning updates value estimates by bootstrapping from the next state's current estimate, enabling fast, online learning.

#temporal difference learning · #td(0) · #sarsa · +12
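A minimal TD(0) sketch on a hypothetical two-step chain A → B → terminal with reward +1 on the final step; each update bootstraps from the next state's current estimate:

```python
# TD(0) on a tiny chain: A -(r=0)-> B -(r=1)-> terminal.
V = {'A': 0.0, 'B': 0.0, 'T': 0.0}
alpha, gamma = 0.1, 1.0
for _ in range(200):
    # Each update moves V toward the bootstrapped target r + gamma * V(next).
    V['A'] += alpha * (0 + gamma * V['B'] - V['A'])
    V['B'] += alpha * (1 + gamma * V['T'] - V['B'])
print(round(V['A'], 2), round(V['B'], 2))  # both converge to the true return, 1.0
```

Note that V('A') improves without ever waiting for the episode's final outcome — that is the "fast, online" property the card refers to.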
∑ Math · Intermediate

Markov Decision Processes (MDP)

A Markov Decision Process (MDP) models decision-making in situations where outcomes are partly random and partly under the control of a decision maker.

#markov decision process · #value iteration · #policy iteration · +12
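A minimal value-iteration sketch on a hypothetical two-state, two-action MDP (the transition and reward numbers are made up for illustration):

```python
import numpy as np

# P[a][s][s'] = transition probability, R[a][s] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.0, 1.0]]])  # action 1
R = np.array([[1.0, 0.0],                 # action 0 rewards per state
              [2.0, 0.5]])                # action 1 rewards per state
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: max over actions of r + gamma * E[V(s')].
    V = np.max(R + gamma * (P @ V), axis=0)
print(np.round(V, 2))
```

Because the backup is a gamma-contraction, repeating it converges to the unique optimal value function regardless of the starting V.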
⚙️ Algorithm · Intermediate

t-SNE & UMAP

t-SNE and UMAP are nonlinear dimensionality-reduction methods that preserve local neighborhoods to make high-dimensional data visible in 2D or 3D.

#t-sne · #umap · #dimensionality reduction · +12
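One concrete piece of t-SNE, sketched in numpy: the low-dimensional similarities use a heavy-tailed Student-t kernel, which is what lets dissimilar points sit far apart without crushing local neighborhoods:

```python
import numpy as np

def tsne_low_dim_affinity(Y):
    # t-SNE's low-dimensional similarity: q_ij proportional to
    # 1 / (1 + ||y_i - y_j||^2), normalized over all pairs.
    d2 = np.sum((Y[:, None, :] - Y[None, :, :])**2, axis=-1)
    q = 1.0 / (1.0 + d2)
    np.fill_diagonal(q, 0.0)          # a point is not its own neighbor
    return q / q.sum()

# Two nearby points and one far-away point in a 2D embedding.
Y = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
Q = tsne_low_dim_affinity(Y)
print(np.round(Q, 3))
```

The nearby pair gets much more affinity mass than either pairing with the distant point — the "preserve local neighborhoods" behavior in the description.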
⚙️ Algorithm · Intermediate

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) finds new orthogonal axes (principal components) that capture the maximum variance in your data.

#principal component analysis · #pca c++ · #eigendecomposition · +11
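A minimal PCA sketch via eigendecomposition of the covariance matrix, on synthetic data that varies mostly along one direction:

```python
import numpy as np

def pca(X, k):
    # Center the data, eigendecompose the covariance matrix, and
    # project onto the top-k eigenvectors (the principal components).
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]    # take the k largest
    return Xc @ vecs[:, order], vals[order]

rng = np.random.default_rng(0)
# 2D points spread along the (3, 1) direction plus small isotropic noise.
X = rng.normal(size=(2000, 1)) @ np.array([[3.0, 1.0]]) \
    + rng.normal(scale=0.1, size=(2000, 2))
Z, var = pca(X, 1)
print(round(var[0], 1))  # top component captures roughly variance 10
```

The first component's eigenvalue is the variance captured along that axis, which is how PCA's "maximum variance" claim is usually quantified.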
📚 Theory · Intermediate

Metric Learning

Metric learning automatically learns a distance function so that similar items end up close together and dissimilar items far apart in a feature space.

#metric learning · #mahalanobis distance · #contrastive loss · +12
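A minimal sketch of the Mahalanobis-style distance family such methods optimize over, with a hypothetical learned transform L (in practice L is fit from similar/dissimilar pairs):

```python
import numpy as np

def mahalanobis(x, y, M):
    # Generalized distance d_M(x, y) = sqrt((x - y)^T M (x - y)),
    # where M = L^T L is a learned positive semi-definite matrix.
    d = x - y
    return np.sqrt(d @ M @ d)

L = np.array([[2.0, 0.0],    # hypothetical learned transform:
              [0.0, 0.5]])   # stretch axis 0, shrink axis 1
M = L.T @ L
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(round(mahalanobis(x, y, np.eye(2)), 3))  # M = I: plain Euclidean
print(round(mahalanobis(x, y, M), 3))          # learned M reweights the axes
```

With M = I this reduces to Euclidean distance; learning M is learning which feature directions matter for similarity.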
📚 Theory · Intermediate

Self-Supervised Learning Theory

Self-supervised learning (SSL) teaches models to learn useful representations from unlabeled data by solving proxy tasks created directly from the data.

#self-supervised learning · #contrastive learning · #infonce · +12
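A minimal sketch of one classic proxy task, rotation prediction: labels are manufactured from the data itself by rotating each image and asking the model to recover the rotation (array "images" here are just random stand-ins):

```python
import numpy as np

def make_rotation_pretext(images, rng):
    # Rotate each image by a random multiple of 90 degrees;
    # the rotation index k becomes a free "label" for supervised training.
    ks = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, ks)])
    return rotated, ks

rng = np.random.default_rng(0)
images = rng.normal(size=(8, 16, 16))   # stand-in batch of 16x16 "images"
x, y = make_rotation_pretext(images, rng)
print(x.shape, y.shape)
```

No human labels are involved, yet solving the task forces the model to notice object structure — the representations are the point, not the rotations.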
📚 Theory · Intermediate

Contrastive Learning

Contrastive learning teaches models by pulling together similar examples (positives) and pushing apart dissimilar ones (negatives).

#contrastive learning · #infonce · #nt-xent · +12
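A minimal numpy sketch of the InfoNCE loss, assuming two batches of embeddings where row i of each batch is a positive pair and all other rows are negatives:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # Each row of z1 should match the same row of z2 (its positive)
    # against all other rows (negatives), via a softmax over
    # temperature-scaled cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
view2 = z + 0.01 * rng.normal(size=(4, 8))        # near-identical "augmented" views
print(round(info_nce(z, view2), 3))
```

Well-aligned positives give a low loss; shuffling the pairing raises it, which is exactly the pull-together/push-apart pressure the card describes.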
📚 Theory · Intermediate

Embedding Spaces & Distributed Representations

Embedding spaces map discrete things like words or products to dense vectors so that similar items are close together.

#embeddings · #dense vectors · #cosine similarity · +12
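A minimal sketch with made-up 4-dimensional embeddings, comparing items by cosine similarity (real embeddings are learned and much higher-dimensional):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1 = same direction, 0 = unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings for three items.
emb = {
    "cat":    np.array([0.9, 0.1, 0.0, 0.2]),
    "kitten": np.array([0.8, 0.2, 0.1, 0.3]),
    "truck":  np.array([0.0, 0.1, 0.9, 0.0]),
}
print(round(cosine_similarity(emb["cat"], emb["kitten"]), 2))  # high
print(round(cosine_similarity(emb["cat"], emb["truck"]), 2))   # low
```

Nearest-neighbor search over such vectors is what powers semantic search and recommendation, since "close in the space" stands in for "similar in meaning".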
📚 Theory · Intermediate

Classifier-Free Guidance

Classifier-Free Guidance (CFG) steers diffusion sampling toward a condition (like a text prompt) without needing a separate classifier.

#classifier-free guidance · #diffusion models · #epsilon prediction · +11
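The core of CFG is one line: at each sampling step, extrapolate from the unconditional noise prediction toward the conditional one. A minimal sketch with toy epsilon vectors standing in for network outputs:

```python
import numpy as np

def cfg_epsilon(eps_cond, eps_uncond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one; scale > 1 strengthens
    # the condition, scale = 1 recovers plain conditional sampling.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

eps_c = np.array([1.0, 0.0])   # toy conditional prediction
eps_u = np.array([0.5, 0.5])   # toy unconditional prediction
print(cfg_epsilon(eps_c, eps_u, guidance_scale=2.0))  # eps_u + 2 * (eps_c - eps_u)
```

Both predictions come from the same network (the condition is randomly dropped during training), which is why no separate classifier is needed.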
📚 Theory · Intermediate

Flow Matching

Flow matching learns a time-dependent vector field v_t(x, c) whose ODE transports simple noise to complex data, enabling fast, deterministic sampling.

#flow matching · #conditional flow matching · #rectified flow · +11
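A minimal sketch of the two halves of the idea, assuming the common linear (rectified-flow) path: the conditional flow matching target for training, and Euler integration of the ODE for sampling. The "learned" field here is replaced by the exact target for illustration:

```python
import numpy as np

def cfm_pair(x0, x1, t):
    # Conditional flow matching with a linear path: the training point
    # x_t = (1 - t) * x0 + t * x1 and its regression target u_t = x1 - x0.
    xt = (1 - t) * x0 + t * x1
    ut = x1 - x0
    return xt, ut

def euler_sample(v, x, steps=100):
    # Deterministic sampling: integrate dx/dt = v(x, t) from t=0 to t=1.
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * v(x, i * dt)
    return x

# If the field were learned perfectly (here: the exact target x1 - x0),
# the ODE transports noise x0 to data x1 in unit time.
x0, x1 = np.zeros(3), np.ones(3)
x = euler_sample(lambda x, t: x1 - x0, x0.copy())
print(np.round(x, 3))
```

Training regresses a network v_t(x, c) onto u_t at sampled (x0, x1, t) triples; because sampling is a plain ODE solve, a handful of steps often suffices — the "fast, deterministic" part of the description.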