Vectors, matrices, decompositions, and tensor operations: the language of deep learning computation.
15 concepts
A vector is an element you can add and scale, and a vector space is any collection of such elements closed under these operations.
Matrix operations like multiplication and transpose combine or reorient data tables and linear transformations in predictable ways.
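A minimal NumPy sketch of these two operations (the matrices here are arbitrary illustrative values), including the familiar identity that the transpose reverses the order of a product:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

product = A @ B      # matrix multiplication: rows of A against columns of B
transposed = A.T     # transpose swaps rows and columns

# (AB)^T = B^T A^T
same = np.allclose(product.T, B.T @ A.T)
```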
A system of linear equations asks for numbers that make several linear relationships true at the same time, which we compactly write as Ax = b.
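As a sketch, the system 3x + y = 9, x + 2y = 8 (numbers chosen for illustration) can be solved in NumPy:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # finds x with Ax = b
```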
An inner product measures how much two vectors point in the same direction; in R^n it is the dot product.
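A small sketch with illustrative vectors, showing the dot product and the cosine of the angle it induces:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

dot = np.dot(u, v)                                        # inner product in R^2
cos_angle = dot / (np.linalg.norm(u) * np.linalg.norm(v)) # cos of the 45-degree angle
```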
Eigendecomposition expresses a matrix as a change of basis times a diagonal scaling, revealing its natural stretching directions.
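A sketch with a small symmetric example matrix (values chosen for illustration): the factorization A = V diag(λ) V⁻¹ reassembles the original matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, V = np.linalg.eig(A)                         # columns of V are eigenvectors
reconstructed = V @ np.diag(eigenvalues) @ np.linalg.inv(V)
```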
Singular Value Decomposition (SVD) factors any m×n matrix A into A = UΣV^T, where U and V are orthogonal and Σ is diagonal with nonnegative entries.
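A sketch on a 2×3 example matrix (entries are illustrative); `np.linalg.svd` returns the singular values in descending order, and the three factors rebuild A:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s holds the singular values
reconstructed = U @ np.diag(s) @ Vt
```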
A real symmetric matrix A is positive definite if and only if x^T A x > 0 for every nonzero vector x, and positive semidefinite if x^T A x ≥ 0.
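Two standard checks, sketched on an illustrative matrix: all eigenvalues positive, and the Cholesky factorization succeeding (it exists exactly when A is positive definite):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(A)        # eigvalsh: for symmetric matrices
is_pd = bool(np.all(eigenvalues > 0))

L = np.linalg.cholesky(A)                  # raises LinAlgError if A is not PD
```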
Matrix norms measure the size of a matrix in different but related ways, with Frobenius treating entries like a big vector, spectral measuring the strongest stretch, and nuclear summing all singular values.
A tensor is a multi-dimensional array that generalizes scalars (0-D), vectors (1-D), and matrices (2-D) to higher dimensions.
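A quick sketch of the hierarchy in NumPy, where `ndim` counts the dimensions:

```python
import numpy as np

scalar = np.array(5.0)                 # 0-D
vector = np.array([1.0, 2.0])          # 1-D
matrix = np.eye(2)                     # 2-D
tensor = np.arange(24).reshape(2, 3, 4)  # 3-D tensor with shape (2, 3, 4)
```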
Low-rank approximation replaces a big matrix with one that has far fewer degrees of freedom while preserving most of its action.
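A sketch of truncated SVD, the standard construction (by the Eckart–Young theorem it is the best rank-k approximation, and the spectral-norm error equals the first discarded singular value); the random matrix and k = 2 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # keep only the k largest singular values
```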
Matrix calculus extends single-variable derivatives to matrices so we can differentiate functions built from matrix multiplications, traces, and norms.
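A sketch of one standard identity, ∇ₓ(x^T A x) = (A + Aᵀ)x, checked against central finite differences on an illustrative A and x:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, 2.0])

def f(v):
    return v @ A @ v          # the quadratic form x^T A x

analytic = (A + A.T) @ x      # closed-form gradient

eps = 1e-6
numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
```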
Orthogonal (real) and unitary (complex) matrices are length- and angle-preserving transformations, like perfect rotations and reflections.
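A sketch with a 45-degree rotation matrix: Q^T Q = I, and multiplying by Q leaves vector lengths unchanged (the vector here is illustrative):

```python
import numpy as np

theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

x = np.array([3.0, 4.0])
length_before = np.linalg.norm(x)
length_after = np.linalg.norm(Q @ x)
```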
The Kronecker product A ⊗ B expands a small matrix into a larger block matrix by multiplying every entry of A with the whole matrix B.
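A sketch with two illustrative 2×2 matrices: the (i, j) block of A ⊗ B is A[i, j] times B.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

K = np.kron(A, B)   # 4x4 block matrix
```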
A sparse matrix stores only its nonzero entries, saving huge amounts of memory when most entries are zero.
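A sketch assuming SciPy is available: a 1000×1000 matrix with only two nonzero entries (positions and values chosen for illustration) stores just those two entries in CSR format instead of a million floats.

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.zeros((1000, 1000))
dense[0, 1] = 5.0
dense[500, 2] = 7.0

sparse = csr_matrix(dense)   # keeps only the nonzero entries plus index arrays
```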
The Moore–Penrose pseudoinverse generalizes matrix inversion to rectangular or singular matrices and is denoted A⁺.
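A sketch on an illustrative overdetermined (tall) system: A is 3×2 and not invertible, but `np.linalg.pinv` gives the least-squares solution, and A⁺ satisfies the defining identity A A⁺ A = A.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # 3 equations, 2 unknowns
b = np.array([1.0, 1.0, 2.0])

A_pinv = np.linalg.pinv(A)
x = A_pinv @ b               # least-squares solution of Ax = b
```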