Chapter 13
Eigenvectors & Eigenvalues
Most vectors change direction when you apply a transformation. But some special vectors just get scaled -- they stay on their own line. Those are eigenvectors.
We've spent many chapters looking at what transformations do to space. They stretch it, rotate it, shear it, collapse it. We've measured area change with determinants, tracked what survives with rank, and identified what gets destroyed with null spaces. But we haven't asked one of the most revealing questions: are there any vectors that a transformation doesn't knock off course?
It turns out that for most transformations, there are. A few special directions go in, get scaled up or down (or flipped), and come back out pointing the same way. These are eigenvectors, and the scaling factors are eigenvalues. They reveal the skeleton of a transformation -- the directions along which the transformation acts most simply.
Most vectors change direction
Consider the matrix:

$$A = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}$$

This is a shear-stretch: it doubles the $x$-component and adds the $y$-component to it, while leaving $y$ alone. Let's apply it to several vectors and see what happens.
The faint arrows are the original vectors; the bolder arrows show where they land after applying $A$. Each one points in a different direction than it started. The transformation has knocked them off course.
The vector $(0, 1)$ pointed straight up but landed at $(1, 1)$ -- tilted to the right. The vector $(1, 1)$ was at 45 degrees but landed at $(3, 1)$ -- almost horizontal. The vector $(1, 2)$ was steep but landed at $(4, 2)$ -- much flatter. In every case, the direction shifted. That's what most transformations do to most vectors.
Eigenvectors stay on their line
Now look at the same transformation, but focus on two special directions: the vector $(1, 0)$ -- pointing along the $x$-axis -- and the vector $(1, -1)$.
Apply $A$ to each:

$$A\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}2\\0\end{pmatrix} = 2\begin{pmatrix}1\\0\end{pmatrix}, \qquad A\begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}1\\-1\end{pmatrix}$$

The vector $(1, 0)$ got doubled in length but stayed on the $x$-axis. The vector $(1, -1)$ didn't change at all -- it came back as itself. Both stayed on their own lines. These are the eigenvectors of $A$, with eigenvalues $\lambda = 2$ and $\lambda = 1$.
The purple vectors are eigenvectors. The vector $(1, 0)$ gets stretched to $(2, 0)$ -- doubled in length, same direction. The vector $(1, -1)$ maps to itself -- unchanged. The gray vector changes direction, like everything else. The faint purple lines show the eigenspaces: the directions that the transformation preserves.
This is the core idea. While the transformation twists and shears most of the plane, there are preferred directions where its behavior is simple: just scaling. Finding those directions is what eigenvalue analysis is all about.
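A quick numerical check makes this concrete. The sketch below hard-codes the shear-stretch matrix and its two eigenvectors as plain arrays (the function name `apply` is just for illustration):

```javascript
// Apply a 2x2 matrix to a 2D vector.
function apply(M, v) {
  return [
    M[0][0] * v[0] + M[0][1] * v[1],
    M[1][0] * v[0] + M[1][1] * v[1],
  ];
}

const A = [[2, 1], [0, 1]]; // the shear-stretch from this section

apply(A, [1, 0]);  // [2, 0]  -- scaled by 2, same direction
apply(A, [1, -1]); // [1, -1] -- unchanged (eigenvalue 1)
apply(A, [0, 1]);  // [1, 1]  -- knocked off its line: not an eigenvector
```

The first two vectors come back on their own lines; the third does not.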
Eigenvalue = 2: pure scaling
Let's isolate what an eigenvalue means. Take the eigenvector $v = (1, 0)$ with eigenvalue $\lambda = 2$. When you apply the transformation, you get $Av = 2v = (2, 0)$. The output is exactly twice the input -- same direction, double the length.
The faint arrow is the original eigenvector $(1, 0)$. The bold arrow is the result after applying the transformation: $(2, 0)$. It's exactly twice as long, pointing in exactly the same direction. The eigenvalue $\lambda = 2$ is just this scaling factor.
The eigenvalue is the answer to a simple question: "by how much does this vector get scaled?" If $\lambda = 2$, the vector doubles. If $\lambda = 3$, it triples. If $\lambda = \tfrac{1}{2}$, it shrinks to half. The direction never changes -- that's what makes it an eigenvector.
Eigenvalue = negative: flip and scale
But eigenvalues don't have to be positive. What if $\lambda = -\tfrac{1}{2}$? The vector gets scaled by $-\tfrac{1}{2}$: it shrinks to half its length and flips to point in the opposite direction. It's still an eigenvector -- it stays on the same line through the origin. It just ends up on the other side.
The faint arrow is the original eigenvector $v$. The bold arrow is the result: $-\tfrac{1}{2}v$. It flipped direction and shrank to half its length. The eigenvalue captures both effects: the negative sign means "flip", the magnitude $\tfrac{1}{2}$ means "shrink by half". The vector still lies on the same purple line through the origin.
Negative eigenvalues show up in reflections and certain rotational-like transformations. The key insight is the same: the vector stays on its line. Whether it gets stretched, shrunk, flipped, or any combination -- as long as it doesn't rotate off that line, it's an eigenvector.
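To see a negative eigenvalue numerically, here is a minimal sketch using a diagonal matrix chosen purely for illustration (it is not the shear-stretch matrix from earlier in the chapter); its $x$-axis has eigenvalue $-\tfrac{1}{2}$:

```javascript
// B scales the x-axis by -0.5 (flip and halve) and leaves the y-axis alone.
const B = [[-0.5, 0], [0, 1]];

function apply(M, v) {
  return [
    M[0][0] * v[0] + M[0][1] * v[1],
    M[1][0] * v[0] + M[1][1] * v[1],
  ];
}

apply(B, [2, 0]); // [-1, 0] -- flipped and halved, but still on the x-axis
```

The output lies on the same line through the origin as the input, just on the other side, which is exactly what the negative eigenvalue predicts.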
The formal bit
An eigenvector of a matrix $A$ is a nonzero vector $v$ such that:

$$Av = \lambda v$$
The scalar $\lambda$ is the corresponding eigenvalue. The equation says: applying $A$ to $v$ has the same effect as multiplying $v$ by a scalar. The transformation, which could do anything to space, acts on this particular direction as pure scaling.
To find eigenvalues, rearrange the equation:

$$Av - \lambda v = 0 \quad\Longrightarrow\quad (A - \lambda I)v = 0$$

This says $v$ is in the null space of $A - \lambda I$. For a nonzero $v$ to exist, the matrix $A - \lambda I$ must be singular -- its determinant must be zero:

$$\det(A - \lambda I) = 0$$
This is the characteristic equation. For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$:

$$\det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = (a - \lambda)(d - \lambda) - bc = 0$$

This expands to a quadratic in $\lambda$:

$$\lambda^2 - (a + d)\lambda + (ad - bc) = 0$$

The quantity $a + d$ is the trace (the sum of the diagonal), and $ad - bc$ is the determinant. So the characteristic equation is:

$$\lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A) = 0$$
Solve the quadratic, and you get the eigenvalues. Plug each eigenvalue back into $(A - \lambda I)v = 0$ to find the eigenvectors.
Let's verify with our matrix $A = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}$. The trace is $2 + 1 = 3$, and the determinant is $2 \cdot 1 - 1 \cdot 0 = 2$:

$$\lambda^2 - 3\lambda + 2 = 0 \quad\Longrightarrow\quad (\lambda - 1)(\lambda - 2) = 0$$

So $\lambda = 2$ and $\lambda = 1$ -- exactly the eigenvalues we found geometrically.
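The trace-determinant recipe translates directly into code. Here is a sketch for the real-eigenvalue case of a 2x2 matrix (the function names are ours, and the complex case is deliberately not handled); the eigenvector step uses the fact that for a singular matrix $\begin{pmatrix} p & q \\ r & s \end{pmatrix}$, the vector $(-q, p)$ lies in its null space:

```javascript
// Eigenvalues of a 2x2 matrix via lambda^2 - tr(A)*lambda + det(A) = 0.
function eigenvalues2x2(A) {
  const tr = A[0][0] + A[1][1];
  const det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
  const disc = tr * tr - 4 * det; // discriminant of the quadratic
  if (disc < 0) throw new Error("complex eigenvalues -- not handled here");
  const root = Math.sqrt(disc);
  return [(tr + root) / 2, (tr - root) / 2];
}

// For a known eigenvalue, solve (A - lambda*I) v = 0 for a nonzero v.
// Assumes A is not a multiple of the identity.
function eigenvector2x2(A, lambda) {
  const p = A[0][0] - lambda, q = A[0][1];
  const r = A[1][0], s = A[1][1] - lambda;
  if (p !== 0 || q !== 0) return [-q, p]; // null vector from the first row
  return [s, -r];                         // first row was zero; use the second
}

eigenvalues2x2([[2, 1], [0, 1]]);    // [2, 1]
eigenvector2x2([[2, 1], [0, 1]], 2); // [-1, 0] -- on the x-axis, as expected
```

Any nonzero scalar multiple of the returned eigenvector is equally valid -- eigenvectors are directions, not specific arrows.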
Worked example: repeated transformations in animation
In animation and physics simulations, you often apply a transformation repeatedly. Each frame, you multiply by the same matrix. What happens after 10 frames? 100? The answer depends entirely on the eigenvectors and eigenvalues.
Consider a transformation that stretches along one direction and shrinks along another:

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 0.5 \end{pmatrix}$$

This is a diagonal matrix, so the eigenvectors are obvious: $(1, 0)$ with $\lambda = 2$, and $(0, 1)$ with $\lambda = 0.5$.
Now start with an arbitrary vector, say $(1, 1)$. Apply $A$ repeatedly:

| Applications | Result | What's happening |
|---|---|---|
| 0 | $(1, 1)$ | Starting point |
| 1 | $(2, 0.5)$ | Stretched horizontally, shrunk vertically |
| 2 | $(4, 0.25)$ | More so |
| 5 | $(32, 0.03125)$ | The $x$-component dominates |
| 10 | $(1024, \approx 0.001)$ | Almost purely horizontal |
After many applications, the vector points almost entirely in the $(1, 0)$ direction -- the eigenvector with the largest eigenvalue. The component along the $(0, 1)$ eigenvector shrinks exponentially ($0.5^n$), while the component along $(1, 0)$ grows exponentially ($2^n$).
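This convergence can be written in closed form: decompose the starting vector along the eigenvectors, and each component simply gets its own eigenvalue raised to the $n$th power:

$$\begin{pmatrix}1\\1\end{pmatrix} = \begin{pmatrix}1\\0\end{pmatrix} + \begin{pmatrix}0\\1\end{pmatrix}
\quad\Longrightarrow\quad
A^n\begin{pmatrix}1\\1\end{pmatrix} = 2^n\begin{pmatrix}1\\0\end{pmatrix} + 0.5^n\begin{pmatrix}0\\1\end{pmatrix} = \begin{pmatrix}2^n\\0.5^n\end{pmatrix}$$

At $n = 10$ this gives $(1024, \approx 0.001)$, matching the table.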
This is why eigenvectors matter in practice. They're the stable directions of a repeated transformation. In a physics simulation, objects stretch along the dominant eigenvector over time. In a Markov chain, the system converges toward the eigenvector with eigenvalue 1. In Google's PageRank, the steady-state distribution of web traffic is the dominant eigenvector of the link matrix.
In code, computing this iteratively looks like:
```javascript
// Apply a 2x2 matrix to a 2D vector `steps` times.
function applyRepeated(matrix, vector, steps) {
  let v = [...vector];
  for (let i = 0; i < steps; i++) {
    v = [
      matrix[0][0] * v[0] + matrix[0][1] * v[1],
      matrix[1][0] * v[0] + matrix[1][1] * v[1]
    ];
  }
  return v;
}

const A = [[2, 0], [0, 0.5]];
applyRepeated(A, [1, 1], 10); // [1024, ~0.001]
```
After enough iterations, no matter what vector you started with (as long as it has some component in the dominant eigenvector direction), the result aligns with the eigenvector whose eigenvalue has the largest absolute value. The other directions fade away. That's the power of eigenvalue analysis -- it tells you the long-term behavior of a system without running the simulation.
Key Takeaway: Eigenvectors are the axes that survive a transformation -- they only get scaled, never rotated. Eigenvalues tell you how much they stretch. Finding them reveals the skeleton of any linear transformation: the directions where its behavior is simplest.
What's next
Eigenvectors point in directions that stay clean under a transformation. But what makes a set of vectors "clean" in general? When eigenvectors are perpendicular to each other, the math gets especially elegant -- decompositions become simple, and transformations factor into independent scalings along independent axes. That's orthogonality -- perpendicularity and its consequences.