Chapter 9
Null Space
If a transformation collapses some vectors to zero, which ones? And why should you care?
Every transformation we've seen so far has done something to every vector -- stretched it, rotated it, reflected it. But some transformations are more destructive. They crush an entire dimension down to nothing. A 2D plane gets flattened to a line. A 3D space gets squashed to a plane. Information is lost, and it's lost permanently.
The null space is the set of all vectors that get sent to zero. It tells you exactly what information the transformation destroys. Understanding it is the key to knowing when a system of equations has unique solutions, when it has infinitely many, and when your matrix is quietly throwing away data you need.
A rank-1 transformation
Consider the matrix:

$$A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$$

This matrix has rank 1. Its two columns are proportional -- the second column is just twice the first. That means the entire 2D plane gets collapsed onto a single line. Every output is some multiple of $(1, 2)$.
The orange line below is the column space -- the line everything gets mapped onto. The purple line is the null space -- the direction that gets completely destroyed. Every vector along the purple line maps to the origin.
The blue and green arrows (where $(1, 0)$ and $(0, 1)$ land) point in the same direction -- the columns are proportional. The entire plane collapses onto this single orange line. The purple direction gets annihilated.
Notice that $(1, 0)$ lands at $(1, 2)$ and $(0, 1)$ lands at $(2, 4)$. The two columns point the same way. Two independent directions got merged into one. That's rank 1 -- only one linearly independent column survives.
Multiple vectors mapping to zero
Let's verify that the null space works the way we claim. The null space direction for this matrix is $(2, -1)$. Check:

$$\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \cdot 2 + 2 \cdot (-1) \\ 2 \cdot 2 + 4 \cdot (-1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

And any scalar multiple of $(2, -1)$ also maps to zero. The vector $(4, -2)$? Zero. The vector $(-2, 1)$? Zero. The vector $(6, -3)$? Zero. Every vector along that purple line gets squished to the origin.
Five different vectors, all multiples of $(2, -1)$, all along the same purple line. The dashed arrows show where they end up after the transformation: the origin. Every single one gets annihilated.
This is the null space in action. It's not just one vector that maps to zero -- it's an entire subspace. Every vector in the null space carries information that the transformation cannot see.
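This is easy to confirm numerically. A minimal NumPy check, taking the rank-1 matrix to be $\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ (second column twice the first) and the null space direction to be $(2, -1)$:

```python
import numpy as np

# Rank-1 matrix: the second column is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# A basis vector for the null space: A @ v should be the zero vector.
v = np.array([2.0, -1.0])

# Every scalar multiple of v is annihilated too.
for c in [1.0, 2.0, -1.0, 0.5, -3.0]:
    out = A @ (c * v)
    assert np.allclose(out, 0.0)

print(A @ v)  # -> [0. 0.]
```

Any vector *not* on that line (try $(1, 0)$) lands somewhere nonzero on the column-space line instead.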
Rank-nullity theorem visually
Here's the deep result that ties everything together. In our 2D example:
- The column space has dimension 1 (the orange line -- what the transformation can produce)
- The null space has dimension 1 (the purple line -- what the transformation destroys)
- Together: $1 + 1 = 2$ -- the dimension we started with
This isn't a coincidence. It's the rank-nullity theorem: the rank (dimension of the column space) plus the nullity (dimension of the null space) always equals the number of columns in the matrix.
Rank measures what survives. Nullity measures what's lost. Together they equal the dimension you started with.
Think of it as a conservation law. The transformation has to account for every dimension of the input space. Each dimension either contributes to the output (rank) or gets annihilated (nullity). Nothing is unaccounted for:
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n$$

where $n$ is the number of columns -- the dimension of the input space.
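You can verify this conservation law directly. A short sketch, assuming the rank-1 matrix $\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ from this chapter, computing the nullity independently by counting near-zero singular values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

rank = np.linalg.matrix_rank(A)

# Nullity = dimension of the null space. Count the (near-)zero
# singular values as an independent measurement.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(sum(sv < 1e-10 for sv in s))

n_columns = A.shape[1]
assert rank + nullity == n_columns  # 1 + 1 == 2
```

The same check works for any matrix: rank counts the surviving dimensions, nullity counts the destroyed ones, and the two always sum to the input dimension.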
The formal bit
The null space (also called the kernel) of a matrix $A$ is the set of all vectors $\mathbf{x}$ that $A$ maps to the zero vector:

$$\operatorname{null}(A) = \{\mathbf{x} : A\mathbf{x} = \mathbf{0}\}$$
Some key facts:
- The null space is always a subspace -- it contains the zero vector, and it's closed under addition and scalar multiplication. If $A\mathbf{u} = \mathbf{0}$ and $A\mathbf{v} = \mathbf{0}$, then $A(\mathbf{u} + \mathbf{v}) = \mathbf{0}$ too.
- Rank-nullity theorem: $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$, where $n$ is the number of columns in $A$. The rank is the dimension of the column space (the range). The nullity is the dimension of the null space.
- Full rank means the null space is just $\{\mathbf{0}\}$ -- only the zero vector maps to zero. This means the transformation is injective (one-to-one): distinct inputs always produce distinct outputs. No information is lost.
- Less than full rank means the null space is nontrivial -- there are nonzero vectors that get sent to zero. The transformation is not injective: multiple inputs produce the same output.
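The full-rank versus rank-deficient contrast can be spot-checked numerically. A sketch using a hypothetical full-rank matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ (chosen here for illustration) alongside the rank-1 matrix from this chapter:

```python
import numpy as np

# A full-rank 2x2 matrix (hypothetical example): columns are independent.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.linalg.matrix_rank(B) == 2  # full rank

# Its null space is trivial: the only solution of Bx = 0 is x = 0.
x = np.linalg.solve(B, np.zeros(2))
assert np.allclose(x, 0.0)

# Compare with the rank-deficient matrix: nontrivial null space.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(A) == 1
```

Note that `np.linalg.solve` would raise an error for the singular matrix `A` -- there is no unique solution to recover.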
The null space tells you the ambiguity in solutions to $A\mathbf{x} = \mathbf{b}$. If you find one solution $\mathbf{x}_p$, then $\mathbf{x}_p + \mathbf{v}$ is also a solution for any $\mathbf{v}$ in the null space, because $A(\mathbf{x}_p + \mathbf{v}) = A\mathbf{x}_p + A\mathbf{v} = \mathbf{b} + \mathbf{0} = \mathbf{b}$. The null space is the "wiggle room" -- all the different inputs that the transformation can't tell apart.
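That wiggle room is easy to demonstrate. A short sketch, assuming the rank-1 matrix $\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ and a particular solution chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
v = np.array([2.0, -1.0])   # spans the null space of A

x_p = np.array([1.0, 1.0])  # one particular input (illustrative)
b = A @ x_p                 # the output it produces

# Adding any null-space vector to x_p leaves the output unchanged:
# A(x_p + c*v) = A x_p + c * (A v) = b + 0 = b
for c in [1.0, -2.0, 10.0]:
    assert np.allclose(A @ (x_p + c * v), b)
```

From the output $\mathbf{b}$ alone, the transformation cannot tell which of these infinitely many inputs produced it.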
Worked example: invisible light directions
Imagine you're writing a renderer. You have a surface described by its normal vectors at various points, and you're computing how light from different directions affects pixel brightness. You might represent this with a matrix where each row describes how a particular pixel responds to each component of the light direction:
Suppose your matrix is:

$$A = \begin{pmatrix} 1 & 1 & 0 \\ 2 & 2 & 0 \end{pmatrix}$$

This is a $2 \times 3$ matrix (2 pixels, 3 light direction components). The rank is 1 -- both rows are proportional. The null space has dimension $3 - 1 = 2$. That means there's a 2D plane of light directions that produce zero brightness change.
You can find the null space by solving $A\mathbf{x} = \mathbf{0}$:

$$x + y = 0, \qquad 2x + 2y = 0$$

These are the same equation: $y = -x$, and $z$ is free. So the null space is:

$$\operatorname{null}(A) = \operatorname{span}\left\{\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\right\}$$
The first null vector says: if you increase light in the $x$-direction and decrease it equally in the $y$-direction, brightness doesn't change. The second says: light along the $z$-axis has zero effect on these pixels.
These are invisible directions -- you could change the lighting along them and the rendered image wouldn't budge. In practice, this tells you your lighting setup is degenerate: you can't reconstruct the full 3D light direction from just these two pixel measurements because two of the three degrees of freedom are invisible to the sensor.
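The invisible directions can be computed rather than derived by hand. A sketch assuming the $2 \times 3$ lighting matrix above, extracting a null space basis from the SVD (the right singular vectors beyond the rank):

```python
import numpy as np

# 2x3 lighting matrix: both pixel rows respond only to the
# combined x+y light component (rows are proportional, rank 1).
A = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 0.0]])

# Right singular vectors whose singular value is (near) zero,
# plus any beyond min(m, n), span the null space.
_, s, Vt = np.linalg.svd(A)
tol = 1e-10
null_basis = Vt[len(s[s > tol]):]  # rows spanning the null space

# Two invisible light directions, matching rank-nullity: 3 - 1 = 2.
assert null_basis.shape[0] == 2

# Changing the light along either direction changes no pixel.
for d in null_basis:
    assert np.allclose(A @ d, 0.0)
```

(SciPy users can get the same orthonormal basis directly from `scipy.linalg.null_space`.) Each basis vector is a lighting change the renderer's two pixels literally cannot see.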
This is exactly the kind of analysis that matters in computer vision, signal processing, and machine learning. When your model has a nontrivial null space, there are inputs it literally cannot distinguish. Knowing what those inputs are tells you the limits of what your system can see.
Key Takeaway: The null space is everything that gets squished to zero. A bigger null space means more information is lost in the transformation. Rank measures what survives, nullity measures what's lost, and together they always add up to the number of dimensions you started with.
What's next
We've been looking at transformations from the outside -- what they do to space. Now let's look at a fundamental operation between individual vectors: the dot product. It turns out to connect deeply to everything we've seen -- from projections to transformations to the very idea of measuring how much two vectors agree.