MIT 18.06SC Linear Algebra

Author

Chao Ma

Published

November 29, 2025

All Lectures

My journey through MIT’s Linear Algebra course (18.06SC), focusing on building intuition and making connections between fundamental concepts. Each lecture note emphasizes geometric interpretation and practical applications.


Reflections & Synthesis

Deep Connections: Invertibility, Null Space, Independence, Rank, and Pivots Exploring how these fundamental concepts are different perspectives on information preservation.

From Taylor Expansion to Euler’s Formula: The Mathematical Foundation of Fourier Series The beautiful mathematical journey from Taylor expansions to Euler’s formula and ultimately to Fourier series, revealing deep connections between polynomials, exponentials, and trigonometric functions.


Unit I: Ax = b and the Four Subspaces

Lecture 1: The Geometry of Linear Equations Two powerful perspectives: row picture vs column picture.

Lecture 2: Elimination with Matrices The systematic algorithm that transforms linear systems into upper triangular form.

Lecture 3: Matrix Multiplication and Inverse Five different perspectives on matrix multiplication.

Lecture 4: LU Decomposition Factoring matrices into Lower × Upper triangular form.

Lecture 5.1: Permutation Matrices Permutation matrices reorder rows and columns.

Lecture 5.2: Transpose The transpose operation switches rows to columns.

Lecture 5.3: Vector Spaces Vector spaces and subspaces: closed under addition and scalar multiplication.

Lecture 6: Column Space and Null Space Column space determines which \(b\) make \(Ax = b\) solvable.

Lecture 7: Solving Ax = 0 Systematic algorithm to find the null space using pivot/free variables and RREF.

Lecture 8: Solving Ax = b The complete solution is one particular solution plus the entire null space.

Lecture 9: Independence, Basis, and Dimension Defining linear independence, basis, and dimension, and tying them together through the rank-nullity theorem.

Lecture 10: Four Fundamental Subspaces The four fundamental subspaces completely characterize any matrix.

Lecture 11: Matrix Spaces, Rank-1, and Graphs Matrix spaces as vector spaces, rank-1 matrices, dimension formulas.

Lecture 12: Graphs, Networks, and Incidence Matrices Applying linear algebra to graph theory: incidence matrices, Kirchhoff’s laws, and Euler’s formula.

Lecture 13: Quiz 1 Review Practice problems reviewing key concepts: subspaces, rank, null spaces, and the four fundamental subspaces.
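The core pipeline of this unit (elimination to upper triangular form, then back substitution to solve \(Ax = b\)) can be sketched in a few lines of Python. This is a minimal illustration of the idea, not code from the lecture notes themselves:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    then back substitution (Lectures 2 and 8 in miniature)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: move the largest remaining pivot into row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # eliminate below the pivot, producing an upper triangular system
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back substitution on the triangular system
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```

The same forward-elimination steps, recorded as multipliers, are exactly the L of the LU decomposition from Lecture 4.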


Unit II: Least Squares, Determinants and Eigenvalues

Lecture 14: Orthogonal Vectors and Subspaces Orthogonal vectors, orthogonal subspaces, and the fundamental relationships between the four subspaces.

Lecture 15: Projection onto Subspaces Projecting onto lines and subspaces, the geometry of least squares, and deriving normal equations.

Lecture 16: Projection Matrices and Least Squares Projection matrices, least squares fitting, and the connection between linear algebra and calculus optimization.

Lecture 17: Orthogonal Matrices and Gram-Schmidt Matrices with orthonormal columns, the Gram-Schmidt process for orthogonalization, and QR factorization.

Lecture 18: Properties of Determinants Ten fundamental properties that completely define the determinant and reveal when matrices are invertible.

Lecture 19: Determinant Formulas and Cofactors Three computational methods for determinants: pivots, the big formula, and cofactor expansion.

Lecture 20: Inverse & Volume The inverse matrix formula using cofactors, Cramer’s rule for solving linear systems, and the geometric interpretation of determinants as volume.

Lecture 21: Eigenvalues and Eigenvectors The directions that matrices can only scale, not rotate: \(Ax = \lambda x\).

Lecture 22: Diagonalization and Powers of A Computing matrix powers efficiently and solving Fibonacci with eigenvalues.

Lecture 23: Differential Equations and exp(At) Connecting linear algebra to differential equations: how eigenvalues determine stability and matrix exponentials solve systems.

Lecture 24: Markov Matrices and Fourier Series Two powerful applications of eigenvalues: Markov matrices for modeling state transitions and Fourier series for decomposing functions into orthogonal components.
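The Fibonacci example from Lecture 22 shows why diagonalization matters: the powers of \(A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\) reduce to powers of its eigenvalues, which is Binet's formula. A small sketch of that idea (an illustration, not code from the notes):

```python
import math

def fib_eigen(n):
    """F_n via the eigenvalues of A = [[1, 1], [1, 0]].
    Diagonalizing A gives A^n = S Lambda^n S^{-1}, which collapses
    to Binet's formula F_n = (phi^n - psi^n) / sqrt(5)."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # dominant eigenvalue, the golden ratio
    psi = (1 - sqrt5) / 2   # second eigenvalue, |psi| < 1, so it fades
    return round((phi ** n - psi ** n) / sqrt5)

print([fib_eigen(n) for n in range(10)])  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Because \(|\psi| < 1\), its powers vanish, so the growth rate of the sequence is controlled entirely by the dominant eigenvalue, the same principle that governs stability in Lecture 23 and steady states of Markov matrices in Lecture 24.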


Unit III: Positive Definite Matrices and Applications

Lecture 25: Symmetric Matrices and Positive Definiteness The beautiful structure of symmetric matrices: real eigenvalues, orthogonal eigenvectors, spectral decomposition, and the important concept of positive definiteness.

Lecture 26: Complex Matrices and Fast Fourier Transform Extending linear algebra to complex vectors: Hermitian matrices, unitary matrices, and the Fast Fourier Transform algorithm that reduces DFT complexity from O(N²) to O(N log N).

Lecture 27: Positive Definite Matrices and Minima Connecting positive definite matrices to multivariable calculus and optimization: the Hessian matrix, second derivative tests, and the geometric interpretation of quadratic forms as ellipsoids.

Lecture 28: Similar Matrices and Jordan Form When matrices share eigenvalues but differ in structure: similar matrices represent the same transformation in different bases, and Jordan form reveals the canonical structure when diagonalization fails.

Lecture 29: Singular Value Decomposition The most important matrix factorization: SVD provides orthonormal bases for all four fundamental subspaces, revealing how any matrix maps its row space to its column space through scaling by singular values.

Lecture 30: Linear Transformations and Their Matrices The fundamental connection between abstract linear transformations and concrete matrices: every linear transformation can be represented as matrix multiplication, and the choice of basis determines the matrix representation.

Lecture 31: Change of Basis and Image Compression How choosing the right basis enables compression: JPEG transforms 512×512 images (262,144 pixels) into a Fourier basis and discards the small coefficients. Three properties of a good basis: a fast inverse (FFT in O(n log n)), sparsity (a few coefficients suffice), and orthogonality (no redundancy).

Lecture 33: Left and Right Inverse; Pseudo-inverse When matrices aren’t square: full column rank matrices (\(r = n < m\)) have left inverses \((A^T A)^{-1} A^T\), full row rank matrices (\(r = m < n\)) have right inverses \(A^T (AA^T)^{-1}\), and the pseudo-inverse \(A^+ = V \Sigma^+ U^T\) generalizes both via the SVD: invert the non-zero singular values and transpose the shape.
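The left-inverse formula from Lecture 33 is easy to check numerically on a tall matrix with full column rank. A minimal pure-Python sketch (the 3×2 matrix here is a made-up example, not one from the notes):

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate-over-determinant formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def left_inverse(A):
    """(A^T A)^{-1} A^T for a matrix with full column rank n = 2."""
    At = transpose(A)
    return matmul(inv2(matmul(At, A)), At)

A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]              # r = n = 2 < m = 3: full column rank
L = left_inverse(A)
print(matmul(L, A))           # ≈ 2x2 identity: L is a left inverse
```

Note the asymmetry: \(LA = I\) (2×2) but \(AL \ne I\); instead \(AL\) is the projection onto the column space of \(A\), which is exactly the projection-matrix story from Lecture 16.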