How I Study AI - Learn AI Papers & Lectures the Easy Way

Concepts (187)

Groups

📐 Linear Algebra (15)
📈 Calculus & Differentiation (10)
🎯 Optimization (14)
🎲 Probability Theory (12)
📊 Statistics for ML (9)
📡 Information Theory (10)
🔺 Convex Optimization (7)
🔢 Numerical Methods (6)
🕸 Graph Theory for Deep Learning (6)
🔵 Topology for ML (5)
🌐 Differential Geometry (6)
∞ Measure Theory & Functional Analysis (6)
🎰 Random Matrix Theory (5)
🌊 Fourier Analysis & Signal Processing (9)
🎰 Sampling & Monte Carlo Methods (10)
🧠 Deep Learning Theory (12)
🛡️ Regularization Theory (11)
👁️ Attention & Transformer Theory (10)
🎨 Generative Model Theory (11)
🔮 Representation Learning (10)
🎮 Reinforcement Learning Mathematics (9)
🔄 Variational Methods (8)
📉 Loss Functions & Objectives (10)
⏱️ Sequence & Temporal Models (8)
💎 Geometric Deep Learning (8)

⚙️ Algorithm · Intermediate

Learning Rate Schedules

Learning rate schedules control how fast a model learns over time by changing the learning rate across iterations or epochs.

#learning rate schedules · #step decay · #cosine annealing · +12
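The two schedules named in the tags can be sketched as small functions (names and default hyperparameters here are illustrative, not from the card):

```python
import math

def step_decay(lr0, epoch, drop=0.5, every=10):
    # Multiply the learning rate by `drop` every `every` epochs.
    return lr0 * (drop ** (epoch // every))

def cosine_annealing(lr0, epoch, total_epochs, lr_min=0.0):
    # Decay smoothly from lr0 to lr_min along a half cosine curve.
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```

Step decay changes the rate in sudden jumps; cosine annealing changes it smoothly, which often makes the end of training less abrupt.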
⚙️ Algorithm · Intermediate

Adam & Adaptive Methods

Adam is an optimization algorithm that combines momentum (first moment) with RMSProp-style adaptive learning rates (second moment).

#adam · #adaptive methods · #rmsprop · +12
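A single Adam update, written out to show both moments and the bias correction (a minimal sketch; variable names are illustrative):

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # First moment: momentum-style running average of gradients.
    m = b1 * m + (1 - b1) * grad
    # Second moment: RMSProp-style running average of squared gradients.
    v = b2 * v + (1 - b2) * grad * grad
    # Bias correction counteracts the zero initialization of m and v.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this on a simple quadratic loss drives the parameter toward the minimum; `t` must start at 1 so the bias correction is well defined.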
⚙️ Algorithm · Intermediate

Momentum Methods

Momentum methods add an exponentially weighted memory of past gradients to make descent steps smoother and faster, especially in ravines and ill-conditioned problems.

#momentum · #heavy-ball · #polyak momentum · +12
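The heavy-ball idea in one function (a sketch; the `lr` and `beta` values are illustrative defaults):

```python
def momentum_step(theta, grad, velocity, lr=0.1, beta=0.9):
    # Velocity is an exponentially weighted memory of past gradients:
    # old directions persist, damping oscillation across a ravine.
    velocity = beta * velocity + grad
    theta = theta - lr * velocity
    return theta, velocity
```

On an ill-conditioned quadratic, the velocity term cancels the zig-zag components of successive gradients while accumulating speed along the valley floor.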
⚙️ Algorithm · Intermediate

Stochastic Gradient Descent (SGD)

Stochastic Gradient Descent (SGD) updates model parameters using small random subsets (mini-batches) of data, making learning faster and more memory-efficient.

#stochastic gradient descent · #mini-batch · #random shuffling · +12
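A minimal mini-batch SGD loop for one-parameter linear regression (an assumed toy setup; the model y = w·x and all hyperparameters are illustrative):

```python
import random

def sgd_linear_regression(xs, ys, lr=0.01, batch_size=2, epochs=200, seed=0):
    """Fit y = w * x with mini-batch SGD on mean squared error."""
    rng = random.Random(seed)
    w = 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)  # random shuffling each epoch
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Gradient of mean squared error over the mini-batch only.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w
```

Each update sees only `batch_size` samples, so memory stays constant regardless of dataset size; shuffling keeps the mini-batch gradients unbiased.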
⚙️ Algorithm · Intermediate

Gradient Descent

Gradient descent is a simple, repeatable way to move downhill on a loss surface by stepping in the opposite direction of the gradient.

#gradient descent · #batch gradient descent · #learning rate · +12
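The whole algorithm fits in a few lines (a sketch; step count and learning rate are illustrative):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step in the direction opposite the gradient.
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x
```

For example, with f(x) = (x - 2)^2 the gradient is 2(x - 2), and repeated steps from x = 0 converge to the minimizer x = 2.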
⚙️ Algorithm · Intermediate

Constructive Algorithm Techniques

Constructive algorithms build a valid answer directly by following a recipe, rather than searching exhaustively.

#constructive algorithm · #greedy construction · #invariant · +12
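A classic constructive example (an assumed illustration, not from the card): build a permutation of 1..n in which no two neighbors are consecutive integers. The recipe "all evens, then all odds" produces a valid answer directly, with no search:

```python
def non_adjacent_permutation(n):
    """Permutation of 1..n where no two neighbors differ by 1.

    Recipe: list the evens, then the odds. Valid for n >= 4:
    within each half neighbors differ by 2, and at the seam the
    last even and the first odd (1) differ by at least 3.
    """
    evens = list(range(2, n + 1, 2))
    odds = list(range(1, n + 1, 2))
    return evens + odds
```

The invariant (neighbors inside each half differ by exactly 2) is what makes the recipe provably correct without checking candidates.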
⚙️ Algorithm · Intermediate

When to Use Binary Search on Answer

Binary search on answer applies when the feasibility of a candidate value is monotonic: if a value works, then all larger (or smaller) values also work.

#binary search on answer · #parametric search · #monotone predicate · +12
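The pattern reduces to one reusable loop over a monotone predicate (a sketch; the square-root example below is an assumed illustration):

```python
def binary_search_on_answer(lo, hi, feasible):
    """Smallest value in [lo, hi] with feasible(value) True,
    assuming feasible is monotone: False...False True...True."""
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid      # mid works, so the answer is mid or smaller
        else:
            lo = mid + 1  # mid fails, so the answer is strictly larger
    return lo
```

For instance, the smallest x with x*x >= 37 is found by `binary_search_on_answer(1, 37, lambda x: x * x >= 37)`, because "x*x >= 37" is monotone in x.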
⚙️ Algorithm · Intermediate

Problem Classification Patterns

Many competitive programming problems map to a small set of classic patterns; recognizing keywords and constraints lets you pick the right tool fast.

#problem classification · #binary search on answer · #two pointers · +12
⚙️ Algorithm · Intermediate

Proof Techniques for Greedy Algorithms

Greedy algorithm correctness is usually proved with patterns like exchange argument, stays-ahead, structural arguments, cut-and-paste, and contradiction.

#greedy algorithms · #exchange argument · #stays ahead · +12
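As an illustration (an assumed textbook example, not from the card): a stays-ahead proof for the earliest-finish-time greedy on interval scheduling.

```latex
Let $G = g_1, \dots, g_k$ be the greedy schedule (earliest finish time first)
and $O = o_1, \dots, o_m$ an optimal schedule, both sorted by finish time $f$.

\textbf{Claim (stays ahead):} $f(g_i) \le f(o_i)$ for all $i \le \min(k, m)$.
\emph{Induction:} if $f(g_{i-1}) \le f(o_{i-1})$, then $o_i$ is still available
when greedy makes its $i$-th pick; greedy takes the earliest finisher among
available intervals, hence $f(g_i) \le f(o_i)$.

If $m > k$, then $o_{k+1}$ starts at or after $f(o_k) \ge f(g_k)$, so it was
compatible with greedy's schedule and greedy would not have stopped ---
contradiction. Therefore $k = m$ and greedy is optimal.
```

The same skeleton (define a measure, show greedy never falls behind, derive a contradiction from a gap) carries over to many other greedy proofs.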
⚙️ Algorithm · Intermediate

Complexity Analysis Quick Reference

Use an operation budget of about 10^8 simple operations per second on typical online judges; always multiply by the time limit and number of test files if known.

#time complexity · #competitive programming · #big-o · +12
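The budget check is simple arithmetic (the helper name and the 10^8/s figure from the card are the only inputs; everything else is illustrative):

```python
def fits_budget(ops, time_limit_s=1.0, budget_per_s=10**8):
    # Rough feasibility check: does the estimated operation count
    # fit within budget_per_s simple operations per second?
    return ops <= budget_per_s * time_limit_s

# Examples for n = 10^5:
#   O(n log n) ~ 10^5 * 17 ~ 1.7 * 10^6 ops  -> comfortably fits
#   O(n^2)     ~ 10^10 ops                   -> far too slow at 1 s
```

Multiplying the budget by the time limit (and by the number of test files, when known) is exactly the adjustment the rule of thumb calls for.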
⚙️ Algorithm · Intermediate

Modular Arithmetic Pitfalls

Modular arithmetic is about working with remainders, but in many languages (C, C++, Java) the % operator can return a negative result for negative operands, so always normalize with (a % MOD + MOD) % MOD.

#modular arithmetic · #modular inverse · #fermats little theorem · +12
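The normalization trick and the Fermat inverse in Python (a sketch; note that Python's own % already returns a non-negative result for a positive modulus, so `norm` matters most when porting C/C++/Java code):

```python
MOD = 10**9 + 7  # a common prime modulus in competitive programming

def norm(a, mod=MOD):
    # Normalize a possibly-negative remainder into [0, mod).
    return (a % mod + mod) % mod

def mod_inverse(a, mod=MOD):
    # Fermat's little theorem: for prime mod and a not divisible by mod,
    # a^(mod-2) is the multiplicative inverse of a.
    return pow(a, mod - 2, mod)
```

Three-argument `pow` does fast modular exponentiation, so the inverse costs O(log mod) multiplications.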
⚙️ Algorithm · Intermediate

Debugging Strategies for CP

Systematic debugging beats guesswork: always re-read the statement, re-check constraints, and verify the output format before touching code.

#competitive programming · #debugging · #stress testing · +12
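Stress testing, one of the tagged techniques, can be sketched as a small harness (function names and the generator interface are illustrative):

```python
import random

def stress_test(fast, brute, gen, trials=1000, seed=1):
    """Compare a fast solution against a trusted brute force on many
    random small inputs; return the first failing input, or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        if fast(case) != brute(case):
            return case  # a minimal counterexample to debug by hand
    return None
```

Small random inputs are the point: when the two solutions disagree, the failing case is tiny enough to trace by hand, which beats guessing at the bug.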