How I Study AI - Learn AI Papers & Lectures the Easy Way
πŸ“ Linear Algebra

Vectors, matrices, decompositions, and tensor operations β€” the language of deep learning computation.

15 concepts Β· all Intermediate

βˆ‘ Math Β· Intermediate

Vectors & Vector Spaces

A vector is an element you can add and scale, and a vector space is any collection of such elements closed under these operations.

#vector space Β· #basis Β· #span Β· +12 more
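The closure property is easy to check concretely. A minimal NumPy sketch (the vector values are illustrative, not from the original):

```python
import numpy as np

# Two vectors in R^2; the space is closed under addition and scaling,
# so any linear combination of them stays in R^2.
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

w = 2.0 * u + v  # a linear combination of u and v, still a vector in R^2
```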
βˆ‘ Math Β· Intermediate

Matrix Operations & Properties

Matrix operations like multiplication and transpose combine or reorient data tables and linear transformations in predictable ways.

#matrix multiplication Β· #transpose Β· #trace Β· +12 more
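A short NumPy sketch of these operations, including the standard identity (AB)^T = B^T A^T (matrix values are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

C = A @ B          # matrix product: here B permutes the columns of A
At = A.T           # transpose: rows become columns
tr = np.trace(A)   # sum of diagonal entries: 1 + 4 = 5
```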
βˆ‘ Math Β· Intermediate

Systems of Linear Equations

A system of linear equations asks for numbers that make several linear relationships true at the same time, which we compactly write as Ax = b.

#systems of linear equations Β· #gaussian elimination Β· #row echelon form Β· +12 more
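Solving Ax = b for a small invertible system can be sketched with NumPy's elimination-based solver (values illustrative):

```python
import numpy as np

# 2x + y = 3
#  x + 3y = 5
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)  # LU/Gaussian-elimination-based solve
```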
βˆ‘ Math Β· Intermediate

Inner Products & Norms

An inner product measures how much two vectors point in the same direction; the standard inner product on R^n is the dot product.

#inner product Β· #dot product Β· #norm Β· +12 more
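The dot product and the norm together give the angle between two vectors; a minimal NumPy sketch (vectors illustrative):

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

dot = np.dot(u, v)  # inner product in R^n
# cos of the angle between u and v, via <u, v> = ||u|| ||v|| cos(theta)
cos_angle = dot / (np.linalg.norm(u) * np.linalg.norm(v))
```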
βˆ‘ Math Β· Intermediate

Eigendecomposition

Eigendecomposition expresses a matrix as a change of basis times a diagonal scaling, revealing its natural stretching directions.

#eigendecomposition Β· #eigenvalue Β· #eigenvector Β· +11 more
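For a symmetric matrix the decomposition can be sketched with NumPy's `eigh`, then checked by rebuilding the matrix as "change of basis Γ— diagonal scaling Γ— change back" (matrix values illustrative):

```python
import numpy as np

# Symmetric matrix: eigh returns real eigenvalues and orthonormal eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, V = np.linalg.eigh(A)
# Reconstruct A = V diag(lambda) V^T.
A_rebuilt = V @ np.diag(eigvals) @ V.T
```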
πŸ“š Theory Β· Intermediate

Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) factors any mΓ—n matrix A into A = UΞ£V^{T}, where U and V are orthogonal and Ξ£ is an mΓ—n rectangular diagonal matrix with nonnegative entries (the singular values).

#singular value decomposition Β· #svd Β· #truncated svd Β· +12 more
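A NumPy sketch of the factorization on a rectangular matrix, using the compact ("thin") form (matrix values illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # a 3x2 (m x n) matrix

# full_matrices=False gives the compact SVD: U is 3x2, Vt is 2x2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_rebuilt = U @ np.diag(s) @ Vt
```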
βˆ‘ Math Β· Intermediate

Positive Definite Matrices

A real symmetric matrix A is positive definite if and only if x^T A x > 0 for every nonzero vector x, and positive semidefinite if x^T A x β‰₯ 0.

#positive definite Β· #positive semidefinite Β· #cholesky decomposition Β· +11 more
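Positive definiteness can be checked in practice via Cholesky (which succeeds exactly when A is positive definite) or via the eigenvalues; a NumPy sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, eigenvalues 1 and 3 (both positive)

# Cholesky factors A = L L^T; it raises LinAlgError if A is not positive definite.
L = np.linalg.cholesky(A)

x = np.array([1.0, -1.0])   # an arbitrary nonzero test vector
quad = x @ A @ x            # the quadratic form x^T A x
```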
βˆ‘ Math Β· Intermediate

Matrix Norms & Condition Numbers

Matrix norms measure the size of a matrix in different but related ways: the Frobenius norm treats the entries like one long vector, the spectral norm measures the strongest stretch, and the nuclear norm sums all singular values.

#matrix norm Β· #spectral norm Β· #frobenius norm Β· +12 more
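On a diagonal matrix the singular values are just the (absolute) diagonal entries, so all three norms and the condition number are easy to verify by hand; a NumPy sketch:

```python
import numpy as np

A = np.diag([3.0, 1.0])  # diagonal, so singular values are 3 and 1

fro = np.linalg.norm(A, 'fro')   # sqrt of sum of squared entries: sqrt(10)
spec = np.linalg.norm(A, 2)      # largest singular value: 3
nuc = np.linalg.norm(A, 'nuc')   # sum of singular values: 3 + 1 = 4
kappa = np.linalg.cond(A, 2)     # sigma_max / sigma_min = 3 / 1
```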
βˆ‘ Math Β· Intermediate

Tensor Operations

A tensor is a multi-dimensional array that generalizes scalars (0-D), vectors (1-D), and matrices (2-D) to higher dimensions.

#tensor Β· #multi-dimensional array Β· #broadcasting Β· +12 more
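A quick NumPy sketch of tensor dimensionality and broadcasting, the rule that stretches a smaller array across a larger one (shapes and values illustrative):

```python
import numpy as np

T = np.zeros((2, 3, 4))            # a 3-D tensor: 2 stacked 3x4 matrices
A = np.arange(6.0).reshape(2, 3)   # 2-D: [[0, 1, 2], [3, 4, 5]]
b = np.array([10.0, 20.0, 30.0])   # 1-D

# Broadcasting: b's shape (3,) is stretched across each row of A.
S = A + b
```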
βˆ‘ Math Β· Intermediate

Low-Rank Approximation

Low-rank approximation replaces a big matrix with one that has far fewer degrees of freedom while preserving most of its action.

#low-rank approximation Β· #eckart-young theorem Β· #svd Β· +12 more
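The standard construction keeps only the top-k singular triplets of the SVD, which the Eckart-Young theorem says is the best rank-k approximation; a NumPy sketch on a small illustrative matrix:

```python
import numpy as np

A = np.diag([3.0, 1.0])  # rank 2, singular values 3 and 1

U, s, Vt = np.linalg.svd(A)
k = 1
# Truncated SVD: keep only the k largest singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
# Eckart-Young: the spectral-norm error equals the first dropped
# singular value, here sigma_2 = 1.
```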
βˆ‘ Math Β· Intermediate

Matrix Calculus Fundamentals

Matrix calculus extends single-variable derivatives to matrices so we can differentiate functions built from matrix multiplications, traces, and norms.

#matrix calculus Β· #frobenius norm Β· #trace trick Β· +12 more
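A classic identity is that the gradient of the quadratic form f(x) = x^T A x is (A + A^T)x; a NumPy sketch that checks it against a finite-difference approximation (matrix and vector values illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
x = np.array([1.0, -1.0])

grad = (A + A.T) @ x  # closed-form gradient of f(x) = x^T A x

def f(v):
    return v @ A @ v

# Central finite difference of the first component of the gradient.
eps = 1e-6
e0 = np.array([1.0, 0.0])
fd0 = (f(x + eps * e0) - f(x - eps * e0)) / (2 * eps)
```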
βˆ‘ Math Β· Intermediate

Orthogonal & Unitary Matrices

Orthogonal (real) and unitary (complex) matrices are length- and angle-preserving transformations, like perfect rotations and reflections.

#orthogonal matrix Β· #unitary matrix Β· #conjugate transpose Β· +12 more
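A 2-D rotation is the standard example: Q^T Q = I and lengths are preserved. A NumPy sketch (angle and vector illustrative):

```python
import numpy as np

theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by 45 degrees

x = np.array([3.0, 4.0])
# Q is orthogonal: its transpose is its inverse, and ||Qx|| = ||x||.
```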
βˆ‘ Math Β· Intermediate

Kronecker Product & Vec Operator

The Kronecker product A βŠ— B expands a small matrix into a larger block matrix by multiplying every entry of A with the whole matrix B.

#kronecker product Β· #vec operator Β· #block matrix Β· +12 more
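The two ideas meet in the identity vec(AXB) = (B^T βŠ— A) vec(X), where vec stacks the columns of a matrix into one long vector; a NumPy sketch checking it numerically (matrices illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [2.0, 0.0]])
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def vec(M):
    return M.flatten(order='F')  # stack columns (column-major order)

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)   # the vec identity: vec(AXB) = (B^T kron A) vec(X)
```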
βš™οΈAlgorithmIntermediate

Sparse Matrices & Computation

A sparse matrix stores only its nonzero entries, saving huge amounts of memory when most entries are zero.

#sparse matrix#csr#csc+12
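The CSR (compressed sparse row) layout can be sketched by hand to show what the three arrays mean; this is an illustrative toy, not a library API (real code would use a sparse-matrix library):

```python
import numpy as np

# CSR representation of the dense matrix
#   [[5, 0, 0],
#    [0, 0, 3],
#    [0, 2, 0]]
data = np.array([5.0, 3.0, 2.0])  # nonzero values, row by row
indices = np.array([0, 2, 1])     # column index of each stored value
indptr = np.array([0, 1, 2, 3])   # row i's values are data[indptr[i]:indptr[i+1]]

def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR matrix by a dense vector, touching only nonzeros."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

y = csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0]))
```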
βˆ‘ Math Β· Intermediate

Pseudoinverse (Moore-Penrose)

The Moore-Penrose pseudoinverse generalizes matrix inversion to rectangular or singular matrices and is denoted A⁺.

#pseudoinverse Β· #moore-penrose Β· #least squares Β· +12 more
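For an overdetermined system with no exact solution, A⁺b gives the least-squares solution; a NumPy sketch that also checks two of the Penrose conditions (matrix values illustrative):

```python
import numpy as np

# Overdetermined: three equations, two unknowns, no exact solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse via SVD
x = A_pinv @ b              # minimizes ||Ax - b||_2
```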