How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (95)

Groups

📐 Linear Algebra (15) · 📈 Calculus & Differentiation (10) · 🎯 Optimization (14) · 🎲 Probability Theory (12) · 📊 Statistics for ML (9) · 📡 Information Theory (10) · 🔺 Convex Optimization (7) · 🔢 Numerical Methods (6) · 🕸 Graph Theory for Deep Learning (6) · 🔵 Topology for ML (5) · 🌐 Differential Geometry (6) · ∞ Measure Theory & Functional Analysis (6) · 🎰 Random Matrix Theory (5) · 🌊 Fourier Analysis & Signal Processing (9) · 🎰 Sampling & Monte Carlo Methods (10) · 🧠 Deep Learning Theory (12) · 🛡️ Regularization Theory (11) · 👁️ Attention & Transformer Theory (10) · 🎨 Generative Model Theory (11) · 🔮 Representation Learning (10) · 🎮 Reinforcement Learning Mathematics (9) · 🔄 Variational Methods (8) · 📉 Loss Functions & Objectives (10) · ⏱️ Sequence & Temporal Models (8) · 💎 Geometric Deep Learning (8)

📚 Theory · Intermediate

Minimum Description Length (MDL)

Minimum Description Length (MDL) picks the model that compresses the data best by minimizing L(M) + L(D|M).

#minimum description length · #mdl · #bic · +12
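A minimal Python sketch of this two-part code idea on a toy polynomial-fit problem. The model cost L(M) is approximated BIC-style as (k/2)·log2(n) bits for k parameters, and L(D|M) by a Gaussian code for the residuals; the data, degrees, and constants are illustrative, not from the card.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)  # data that is truly linear

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = resid.var() + 1e-12
    n, k = x.size, degree + 1
    l_d_given_m = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)  # bits for data given the model
    l_m = 0.5 * k * np.log2(n)                                  # bits for the model parameters
    return l_m + l_d_given_m                                    # L(M) + L(D|M)

best = min(range(6), key=description_length)
print("degree chosen by MDL:", best)  # expected to prefer a low degree (about 1)
```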
📚 Theory · Intermediate

Message Passing on Meshes & Point Clouds

Message passing treats meshes and point clouds as graphs where nodes exchange information with neighbors to learn useful features.

#geometric deep learning · #message passing · #pointnet · +12
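A minimal NumPy sketch of one message-passing round on a random point cloud: each point's neighbours come from a k-NN graph, and node features are updated from their own state plus the mean of their neighbours' states. Sizes, weights, and the update rule are illustrative (this is not PointNet itself).

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(50, 3))   # 50 points in 3-D
feats = points.copy()               # initial node features = coordinates
k = 5

# Build a k-NN neighbourhood for every node (edges of the graph).
d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
neighbours = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self at index 0

# Message passing: aggregate neighbour features, then mix with own features.
W_self = rng.normal(scale=0.1, size=(3, 8))
W_nbr = rng.normal(scale=0.1, size=(3, 8))
messages = feats[neighbours].mean(axis=1)                      # (50, 3) mean over neighbours
updated = np.maximum(feats @ W_self + messages @ W_nbr, 0.0)   # ReLU update
print(updated.shape)  # (50, 8): each node now carries an 8-dim feature
```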
📚 Theory · Intermediate

Group Convolution

Group convolution combines two functions defined on a group by summing over products aligned by the group operation, generalizing the usual circular convolution on integers modulo n.

#group convolution · #finite group · #circular convolution · +10
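A small sketch for the cyclic group Z_n, where the group operation is addition mod n, so (f ∗ g)(x) = Σ_y f(y) g(x − y mod n). The direct double loop is checked against the usual FFT-based circular convolution; the example values are arbitrary.

```python
import numpy as np

def group_conv_cyclic(f, g):
    # (f * g)(x) = sum_y f(y) * g(y^{-1} x), with y^{-1} x = (x - y) mod n on Z_n
    n = len(f)
    out = np.zeros(n)
    for x in range(n):
        for y in range(n):
            out[x] += f[y] * g[(x - y) % n]
    return out

f = np.array([1.0, 2.0, 0.0, -1.0])
g = np.array([0.5, 0.5, 0.0, 0.0])
print(group_conv_cyclic(f, g))
# Agrees with ordinary circular convolution on Z_4 computed via the FFT:
print(np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))))
```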
📚 Theory · Intermediate

Equivariance & Invariance

Equivariance means that applying a transformation before a function gives the same result as applying a corresponding transformation after the function; invariance is the special case where the output does not change at all.

#equivariance · #invariance · #group action · +12
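A toy numerical check of both definitions, using a circular shift as the transformation: circular convolution is shift-equivariant (shifting the input shifts the output the same way), while summing the entries is shift-invariant. The signal and kernel are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)
kern = rng.normal(size=8)

def circ_conv(signal, kernel):
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

shift = lambda v: np.roll(v, 2)   # the group action: circular shift by 2

# Equivariance: conv(shift(x)) == shift(conv(x))
print(np.allclose(circ_conv(shift(x), kern), shift(circ_conv(x, kern))))  # True
# Invariance: sum(shift(x)) == sum(x)
print(np.isclose(np.sum(shift(x)), np.sum(x)))                            # True
```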
📚 Theory · Intermediate

Sequence-to-Sequence with Attention

Sequence-to-sequence with attention lets a decoder focus on the most relevant parts of the input at each output step, rather than compressing everything into a single vector.

#sequence-to-sequence · #attention · #encoder-decoder · +12
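A minimal sketch of one decoder step with dot-product attention over the encoder states. Shapes and names are my own, and real models add learned projections, but it shows the "weighted focus instead of a single summary vector" mechanic.

```python
import numpy as np

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(6, 16))   # 6 input positions, 16-dim states
decoder_state = rng.normal(size=16)         # current decoder hidden state

scores = encoder_states @ decoder_state     # one relevance score per input position
weights = np.exp(scores - scores.max())
weights /= weights.sum()                    # softmax attention weights
context = weights @ encoder_states          # weighted sum of encoder states

# The decoder would combine `context` with its own state to predict the next token,
# instead of relying on one fixed-length summary of the whole input.
print(weights.round(2), context.shape)
```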
📚 Theory · Intermediate

Temporal Convolutions

Temporal (causal) convolution computes each output at time t using only the current and past inputs, ensuring no future information leakage.

#temporal convolution · #causal convolution · #fir filter · +12
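A short sketch of a causal 1-D convolution implemented as an FIR filter: the input is left-padded by (kernel_size − 1) so output[t] depends only on inputs up to time t. The signal and kernel values are illustrative.

```python
import numpy as np

def causal_conv1d(x, kernel):
    pad = len(kernel) - 1
    x_padded = np.concatenate([np.zeros(pad), x])
    # y[t] = sum_k kernel[k] * x[t - k]: only current and past samples contribute
    return np.array([np.dot(kernel[::-1], x_padded[t:t + len(kernel)])
                     for t in range(len(x))])

x = np.arange(1.0, 9.0)             # samples 1..8
kernel = np.array([0.5, 0.3, 0.2])  # kernel[0] weights the most recent sample
print(causal_conv1d(x, kernel))
```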
📚 Theory · Intermediate

LSTM & Gating Mechanisms

Long Short-Term Memory (LSTM) networks use gates (forget, input, and output) to control what information to erase, write, and reveal at each time step.

#lstm · #forget gate · #input gate · +11
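A minimal sketch of a single LSTM step with the three gates made explicit. The stacked weight layout and sizes are my own minimal choice, not a specific library's parameterisation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # One stacked affine map; the four chunks are forget, input, output, candidate.
    z = W @ x_t + U @ h_prev + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # gates in (0, 1)
    g = np.tanh(g)                                 # candidate cell content
    c_t = f * c_prev + i * g                       # erase old memory, write new
    h_t = o * np.tanh(c_t)                         # reveal part of the cell state
    return h_t, c_t

hidden, inp = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * hidden, inp))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(rng.normal(size=inp), h, c, W, U, b)
print(h.shape, c.shape)  # (4,) (4,)
```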
📚 Theory · Intermediate

Recurrent Neural Network Theory

A Recurrent Neural Network (RNN) processes sequences by carrying a hidden state that is updated at every time step using h_t = f(W_h h_{t-1} + W_x x_t + b).

#recurrent neural network · #rnn · #backpropagation through time · +12
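A minimal sketch of that recurrence, h_t = f(W_h h_{t−1} + W_x x_t + b), unrolled over a short random sequence with tanh as f. Names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, inp, T = 4, 3, 5
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_x = rng.normal(scale=0.1, size=(hidden, inp))
b = np.zeros(hidden)

h = np.zeros(hidden)             # initial hidden state h_0
xs = rng.normal(size=(T, inp))   # a sequence of 5 input vectors
for x_t in xs:
    h = np.tanh(W_h @ h + W_x @ x_t + b)   # the same weights are reused at every step
print(h)  # final hidden state summarising the sequence
```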
📚 Theory · Intermediate

Multi-Task Loss Balancing

Multi-task loss balancing aims to automatically set each task’s weight so that no single loss dominates training.

#multi-task learning · #uncertainty weighting · #homoscedastic uncertainty · +12
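A small sketch in the spirit of the homoscedastic-uncertainty weighting named in the tags: each task i gets a learnable log-variance s_i, and the combined objective is Σ_i exp(−s_i)·L_i + s_i, so noisy or hard tasks get automatically down-weighted. The loss values here are made up for illustration.

```python
import numpy as np

task_losses = np.array([2.3, 0.4, 7.1])   # hypothetical per-task losses
log_vars = np.array([0.0, 0.0, 0.0])      # s_i, learned jointly with the model in practice

def balanced_total(losses, s):
    # exp(-s_i) scales each task; the + s_i term stops s_i from growing without bound
    return float(np.sum(np.exp(-s) * losses + s))

print(balanced_total(task_losses, log_vars))
# During training, gradients w.r.t. s push s_i up for high-loss/noisy tasks,
# shrinking their effective weight exp(-s_i).
```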
📚 Theory · Intermediate

Knowledge Distillation Loss

Knowledge distillation loss blends standard hard-label cross-entropy with a soft distribution match from a teacher using a temperature parameter.

#knowledge distillation · #kd loss · #temperature scaling · +12
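A minimal sketch of the blended objective on a single example: hard-label cross-entropy plus a KL term between temperature-softened teacher and student distributions. The logits, temperature, and mixing weight are arbitrary illustrative values.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

student_logits = np.array([1.0, 2.0, 0.5])
teacher_logits = np.array([0.8, 2.5, 0.2])
hard_label = 1          # index of the true class
T, alpha = 4.0, 0.5     # temperature and mixing weight

hard_ce = -np.log(softmax(student_logits)[hard_label])        # standard cross-entropy

q_teacher = softmax(teacher_logits, T)                          # softened teacher distribution
q_student = softmax(student_logits, T)
soft_kl = np.sum(q_teacher * (np.log(q_teacher) - np.log(q_student)))

# T^2 rescales the soft term so its gradients stay comparable to the hard term.
total = alpha * hard_ce + (1 - alpha) * (T ** 2) * soft_kl
print(total)
```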
📚 Theory · Intermediate

Perceptual Loss & Feature Matching

Perceptual loss compares images in a deep network's feature space rather than raw pixels, which aligns better with human judgment of similarity.

#perceptual loss · #feature matching · #gan · +12
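A sketch of the idea only: compare two images in the feature space of a fixed "network" and match activations layer by layer. Here the network is a stand-in stack of fixed random 3×3 convolutions; real perceptual losses use a pretrained CNN's features.

```python
import numpy as np

rng = np.random.default_rng(0)
kernels = [rng.normal(scale=0.1, size=(3, 3)) for _ in range(3)]   # fixed "layers"

def conv2d_valid(x, k):
    h, w = x.shape[0] - 2, x.shape[1] - 2
    y = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            y[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return y

def features(img):
    feats, x = [], img
    for k in kernels:
        x = np.maximum(conv2d_valid(x, k), 0.0)   # conv + ReLU as one stand-in layer
        feats.append(x)
    return feats

def perceptual_loss(a, b):
    # feature matching: mean squared difference of activations, layer by layer
    return sum(np.mean((fa - fb) ** 2) for fa, fb in zip(features(a), features(b)))

a = rng.normal(size=(16, 16))
print(perceptual_loss(a, a + 0.01 * rng.normal(size=a.shape)))  # small but non-zero
```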
📚 Theory · Intermediate

Triplet Loss & Contrastive Loss

Triplet loss and contrastive loss are metric-learning objectives that teach a model to map similar items close together and dissimilar items far apart in an embedding space.

#triplet loss · #contrastive loss · #metric learning · +12
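A minimal sketch of both objectives on single embedding triplets/pairs, using Euclidean distance; the margins and random embeddings are illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    # zero once the positive is closer than the negative by at least the margin
    return max(0.0, d_pos - d_neg + margin)

def contrastive_loss(a, b, same, margin=1.0):
    d = np.linalg.norm(a - b)
    if same:
        return d ** 2                      # pull similar pairs together
    return max(0.0, margin - d) ** 2       # push dissimilar pairs apart, up to the margin

rng = np.random.default_rng(0)
anchor, pos, neg = rng.normal(size=(3, 8))   # three 8-dim embeddings
print(triplet_loss(anchor, pos, neg))
print(contrastive_loss(anchor, pos, same=True), contrastive_loss(anchor, neg, same=False))
```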