Demystifying Key Concepts in Linear Algebra: From Matrix Multiplication to Orthogonality


Linear algebra is the backbone of machine learning, computer graphics, robotics, and so much more. But to unlock its full potential, you need to deeply understand its key ideas — not just memorize formulas.
In this article, we’ll walk through matrix multiplication, linear independence, rank, invertibility, and orthogonality — building intuition step by step.
1. Matrix Multiplication: More Than Just Numbers
Let’s say we have two matrices:
$$A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$
The product AB is calculated as:
$$AB = \begin{bmatrix} 2 \times 5 + 3 \times 7 & 2 \times 6 + 3 \times 8 \\ 1 \times 5 + 4 \times 7 & 1 \times 6 + 4 \times 8 \end{bmatrix} = \begin{bmatrix} 31 & 36 \\ 33 & 38 \end{bmatrix}$$
But why do we do it this way?
Matrix multiplication is essentially combining linear transformations. Think of each matrix as a function that stretches, rotates, or squishes space. Composing two transformations = multiplying two matrices.
Matrix multiplication is not commutative: in general,
$$AB \neq BA$$
because the order in which the transformations are applied matters!
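If you want to sanity-check both the arithmetic and the non-commutativity claim, here is a minimal NumPy sketch using the matrices above:

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A @ B)   # [[31 36]
               #  [33 38]]
print(B @ A)   # [[16 39]   -> different result, so AB != BA
               #  [22 53]]
```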
2. Linear Independence: What Makes a Vector Special?
A set of vectors is linearly independent if no vector in the set can be written as a combination of the others.
For example, in 2D:
Vectors v1=[1,0] and v2=[0,1] are independent.
But v1=[1,2], v2=[2,4] are dependent — one is a scaled version of the other.
Intuition:
Independent vectors each point in a genuinely new direction. Dependent ones lie along the same line (or within the same plane), adding nothing new.
This matters because the number of linearly independent columns tells us how much unique information the matrix contains.
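One practical way to test this numerically is to stack the vectors as columns and compare the matrix rank with the number of vectors; here is a minimal NumPy sketch using the 2D examples above:

```python
import numpy as np

v1, v2 = np.array([1, 0]), np.array([0, 1])
w1, w2 = np.array([1, 2]), np.array([2, 4])

# Rank equal to the number of vectors -> linearly independent.
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))  # 2 -> independent
print(np.linalg.matrix_rank(np.column_stack([w1, w2])))  # 1 -> dependent
```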
3. Rank of a Matrix: Measuring Information
The rank of a matrix is the number of linearly independent rows or columns.
Full rank → all columns (or rows) are independent.
Rank-deficient → some are redundant.
$$\text{rank}(A) = \dim(\text{column space of } A)$$
Why care?
It tells us how much of the space a matrix covers.
It affects solutions to linear systems:
If rank = number of variables → a unique solution (provided the system is consistent)
If rank < number of variables → infinitely many solutions or none at all
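As a small illustration (the 3×3 matrix below is made up for demonstration, with its third column equal to the sum of the first two), NumPy's matrix_rank makes the deficiency visible:

```python
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2]])   # third column = first column + second column

print(np.linalg.matrix_rank(A))   # 2 < 3 variables -> no unique solution

b = np.array([1, 1, 2])           # b also equals col1 + col2, so the system is consistent
Ab = np.column_stack([A, b])
print(np.linalg.matrix_rank(Ab))  # still 2 -> consistent, infinitely many solutions
```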
4. Invertibility: Can We Undo the Transformation?
A square matrix A is invertible (also called nonsingular) if there exists another matrix A⁻¹ such that:
$$AA^{-1} = A^{-1}A = I$$
Where I is the identity matrix.
Key conditions for invertibility:
Matrix must be square
Full rank (columns must be linearly independent)
$$A \text{ is invertible} \iff \det(A) \neq 0 \iff \text{rank}(A) = n$$
where n is the size of the n × n matrix A.
If a matrix is not invertible, it's called singular, and you can’t "reverse" its transformation.
Why does it matter?
Invertible matrices let us solve systems, undo transformations, and stabilize models in machine learning.
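Here is a short sketch of those checks in NumPy, reusing the matrix A from the first section (the right-hand side b is a made-up example):

```python
import numpy as np

A = np.array([[2., 3.],
              [1., 4.]])
b = np.array([8., 9.])

print(np.linalg.det(A))        # 5.0, nonzero -> A is invertible

x = np.linalg.solve(A, b)      # solve A x = b (preferred over forming inv(A) explicitly)
print(x)                       # [1. 2.], since A @ [1, 2] = [8, 9]

print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))  # True: A A^{-1} = I
```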
5. Orthogonality: Perfect Perpendicularity
Two vectors are orthogonal if their dot product is 0:
$$\vec{u} \cdot \vec{v} = 0$$
Geometrically, they’re at a 90° angle.
Orthogonal vectors have magic properties:
No redundancy: they share no directional overlap
Easy projection: projecting onto an orthonormal direction reduces to a single dot product
Numerical stability in computations
Special case: Orthonormal
Vectors are orthogonal and unit-length
Forms the basis of QR decomposition, PCA, and many ML algorithms
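A minimal sketch of these properties in NumPy (the vectors and matrix are illustrative; the orthonormal basis comes from np.linalg.qr):

```python
import numpy as np

u = np.array([1., 2.])
v = np.array([-2., 1.])
print(u @ v)                              # 0.0 -> orthogonal

M = np.array([[2., 3.],
              [1., 4.]])
Q, R = np.linalg.qr(M)                    # Q has orthonormal columns
print(np.allclose(Q.T @ Q, np.eye(2)))    # True

# Projection onto a unit vector is just a dot product times that vector.
q1 = Q[:, 0]
x = np.array([3., 5.])
print((x @ q1) * q1)                      # projection of x onto the first basis direction
```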
How They All Connect
| Concept | What it tells you | Why it matters |
| --- | --- | --- |
| Matrix multiplication | Composition of transformations | Powers every ML pipeline & neural network |
| Linear independence | Unique directional info | Affects rank, invertibility, solution space |
| Rank | Number of independent dimensions | Determines solvability and data compression |
| Invertibility | Reversibility of a transformation | Core to solving equations, model control |
| Orthogonality | No shared direction, max separation | Leads to stable, efficient computation |
Real-World Example: PCA (Principal Component Analysis)
PCA finds orthogonal directions in your data that capture the most variance. The result?
A lower-rank approximation of your data
With independent, orthogonal axes
Built on matrix multiplication and eigenvectors
So all these concepts power a technique that drives dimensionality reduction in everything from face recognition to gene expression analysis.
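A bare-bones sketch of that pipeline (random placeholder data and a covariance eigendecomposition; real implementations typically use the SVD or a library such as scikit-learn):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # hypothetical data: 100 samples, 3 features

Xc = X - X.mean(axis=0)                  # center the data
cov = (Xc.T @ Xc) / (len(Xc) - 1)        # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvectors of a symmetric matrix are orthonormal

k = 2
top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k principal directions
X_reduced = Xc @ top                     # lower-rank representation of the data

print(np.allclose(top.T @ top, np.eye(k)))  # True: the new axes are orthonormal
```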
Conclusion
Linear algebra is more than abstract math — it’s how we see, compress, and reason about the world in higher dimensions.
Understanding:
How matrices transform space,
Why independence and rank matter,
What makes a system invertible,
And why orthogonality is powerful...
...transforms you from a code writer into a systems thinker.
Written by

Sudhin Karki
I am a Machine Learning enthusiast with a motivation of building ML integrated apps. I am currently exploring the groundbreaking ML / DL papers and trying to understand how this is shaping the future.