How I Study AI - Learn AI Papers & Lectures the Easy Way
🔮

Representation Learning

Theory of learned representations: embeddings, contrastive learning, disentanglement, and feature space geometry.

10 concepts

Intermediate (6)

📚 Theory · Intermediate

Embedding Spaces & Distributed Representations

Embedding spaces map discrete things like words or products to dense vectors so that similar items are close together.

#embeddings #dense vectors #cosine similarity +12
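The card's core idea fits in a few lines: cosine similarity compares direction, not magnitude, so "similar" embeddings score near 1. A minimal sketch with made-up 3-d vectors (real embedding vectors have hundreds of dimensions; the names and values below are purely illustrative):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (||a|| * ||b||); 1.0 means "same direction".
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-d "embeddings": related items point in similar directions.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
car = [0.1, 0.2, 0.95]

print(cosine_similarity(king, queen))  # near 1.0
print(cosine_similarity(king, car))    # much lower
```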
📚 Theory · Intermediate

Contrastive Learning

Contrastive learning teaches models by pulling together similar examples (positives) and pushing apart dissimilar ones (negatives).

#contrastive learning #infonce #nt-xent +12
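A minimal NumPy sketch of the InfoNCE objective named in the tags, assuming the common in-batch setup: anchor i's positive is row i of the other view, and every other row in the batch serves as a negative:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE with in-batch negatives: each anchor must pick out its own
    positive (the matching row) among all positives in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing sits on the diagonal: anchor i <-> positive i.
    return -np.mean(np.diag(log_prob))
```

Aligned pairs (two views of the same example) should give a much lower loss than randomly mismatched pairs, which is the "pull positives together, push negatives apart" behavior in one number.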
📚 Theory · Intermediate

Self-Supervised Learning Theory

Self-supervised learning (SSL) teaches models to learn useful representations from unlabeled data by solving proxy tasks created directly from the data.

#self-supervised learning #contrastive learning #infonce +12
📚 Theory · Intermediate

Metric Learning

Metric learning automatically learns a distance function so that similar items end up close together and dissimilar items far apart in a feature space.

#metric learning #mahalanobis distance #contrastive loss +12
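One of the simplest metric-learning objectives is the triplet loss: it is zero once the positive is at least `margin` closer to the anchor than the negative, and otherwise penalizes the gap. A minimal NumPy sketch:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Penalize unless the positive beats the negative by at least `margin`.
    d_pos = np.linalg.norm(anchor - positive)  # anchor-to-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([5.0, 0.0])   # far from the anchor

print(triplet_loss(a, p, n))  # 0.0: the constraint is already satisfied
print(triplet_loss(a, n, p))  # positive: the "positive" is too far away
```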
⚙️ Algorithm · Intermediate

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) finds new orthogonal axes (principal components) that capture the maximum variance in your data.

#principal component analysis #pca c++ #eigendecomposition +11
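The description above maps directly onto a few lines of NumPy: center the data, eigendecompose the covariance matrix, and keep the eigenvectors with the largest eigenvalues as the new axes. A minimal sketch (not a substitute for a library implementation, which would use the SVD for numerical robustness):

```python
import numpy as np

def pca(X, k):
    """PCA via eigendecomposition of the covariance matrix.
    Returns the top-k principal axes and the projected data."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: symmetric, ascending order
    order = np.argsort(eigvals)[::-1]       # sort axes by variance, descending
    components = eigvecs[:, order[:k]]      # top-k orthogonal directions
    return components, Xc @ components      # axes and projected coordinates
```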
⚙️ Algorithm · Intermediate

t-SNE & UMAP

t-SNE and UMAP are nonlinear dimensionality-reduction methods that preserve local neighborhoods to make high-dimensional data visible in 2D or 3D.

#t-sne #umap #dimensionality reduction +12

Advanced (4)

📚 Theory · Advanced

Disentangled Representations

Disentangled representations aim to encode independent factors of variation (like shape, size, or color) into separate coordinates of a latent vector.

#disentangled representations #independent factors #total correlation +12
📚 Theory · Advanced

Transfer Learning Theory

Transfer learning theory studies when and why a model trained on a source distribution will work on a different target distribution.

#transfer learning #domain adaptation #hΔh-divergence +12
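The hΔh-divergence tag refers to the classic domain-adaptation bound of Ben-David et al., which makes the "when and why" precise: a hypothesis's target error is controlled by its source error, a divergence between the two distributions, and the error of the best hypothesis on both domains jointly:

```latex
\epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big]
```

Transfer works when the domains look similar to the hypothesis class (small divergence term) and some single hypothesis does well on both (small λ).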
📚 Theory · Advanced

Neural Collapse

Neural Collapse describes what happens at the end of training: the penultimate-layer features of each class concentrate tightly around a class mean.

#neural collapse #simplex etf #equiangular tight frame +12
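The "simplex ETF" in the tags is concrete enough to check numerically: in the idealized collapsed geometry, the K class-mean directions are unit vectors whose pairwise cosine is exactly -1/(K-1). A NumPy sketch of the standard construction:

```python
import numpy as np

def simplex_etf(K):
    """K columns forming a simplex equiangular tight frame:
    unit norm, pairwise inner product exactly -1/(K-1)."""
    I = np.eye(K)
    ones = np.ones((K, K)) / K
    M = np.sqrt(K / (K - 1)) * (I - ones)
    return M  # columns are the idealized class-mean directions

M = simplex_etf(4)
G = M.T @ M  # Gram matrix: 1.0 on the diagonal, -1/3 off the diagonal
```

Note the K vectors actually span only a (K-1)-dimensional subspace, which is why the frame is called a simplex.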
📚 Theory · Advanced

Manifold Learning

Manifold learning assumes that high-dimensional data actually lies near a much lower-dimensional, smoothly curved surface (a manifold) embedded in the ambient space.

#manifold learning #isomap #locally linear embedding +12
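A unit circle is about the smallest example of this idea: a 1-D curve embedded in 2-D, where straight-line (ambient) distance and distance along the manifold disagree. Methods like Isomap approximate the second kind of distance from data:

```python
import math

# Two points on the unit circle (a 1-D manifold embedded in 2-D).
theta_a, theta_b = 0.0, math.pi          # diametrically opposite
a = (math.cos(theta_a), math.sin(theta_a))
b = (math.cos(theta_b), math.sin(theta_b))

euclidean = math.dist(a, b)              # straight through the ambient space: 2
geodesic = abs(theta_b - theta_a)        # along the circle itself: pi

print(euclidean, geodesic)               # the geodesic is longer
```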