Chapter 2
Linear Combinations & Span
The Question
You learned in Chapter 1 that you can scale a vector and add two vectors together. Now put both operations together: scale each vector by some amount, then add the results. That's called a linear combination.
Here's the question that should nag at you: if you have two vectors, what's the full set of points in space you can reach by scaling and adding them?
Not just one combination. All combinations. Every possible pair of scalars. What shape does that set trace out?
The answer is called the span of those vectors, and it tells you everything about what your vectors can do.
Building Combinations
Let's start with two vectors: v in blue and w in green. A linear combination looks like a·v + b·w for any scalars a and b.
Pick a few specific values of a and b and plot where you land.
Each orange dot is a linear combination a·v + b·w. The dashed lines show each vector's contribution. Scale each vector by any amount, then add.
Those six dots are just six specific combinations. But you can choose any real numbers for a and b. Infinitely many choices, infinitely many points. Where do they all land?
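Computing a combination is just two scalings and an addition. A minimal sketch in JavaScript (the specific values of v and w here are illustrative, not the ones from the figure):

```javascript
// a*v + b*w in 2D: scale each vector, then add componentwise.
const combine = (a, v, b, w) => [a * v[0] + b * w[0], a * v[1] + b * w[1]];

const v = [2, 1]; // blue vector (example values)
const w = [1, 2]; // green vector (example values)

// A few (a, b) pairs and where they land:
console.log(combine(1, v, 1, w));    // [3, 3]
console.log(combine(2, v, -1, w));   // [3, 0]
console.log(combine(-1, v, 0.5, w)); // [-1.5, 0]
```

Every choice of (a, b) is one dot; sweeping over all real pairs traces out the span.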
When Vectors Are Parallel
Before answering that, let's look at a simpler case. What if your two vectors point in the same direction?
Take a vector v and a second vector w = 2v. Since w is just a scaled copy of v, they're parallel.
When both vectors point in the same direction, every combination lands on a single line through the origin. Scaling just gives you a differently scaled version of v. The span is one-dimensional.
Any combination a·v + b·w = a·v + b·(2v) simplifies to (a + 2b)·v. That's just a single scalar times v. You're stuck on one line no matter what you do. The second vector gives you no new reach.
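You can watch the collapse happen numerically. A quick sketch, assuming an example v with w = 2v:

```javascript
// When w = 2v, the combination a*v + b*w collapses to (a + 2b)*v.
const scale = (s, u) => u.map((x) => s * x);
const add = (u, t) => u.map((x, i) => x + t[i]);

const v = [1, 2];      // example direction
const w = scale(2, v); // parallel: w = 2v = [2, 4]

const a = 3, b = -1;
const combo = add(scale(a, v), scale(b, w));
console.log(combo);               // [1, 2]
console.log(scale(a + 2 * b, v)); // [1, 2] -- the same point on the same line
```

No matter which a and b you pick, both expressions agree: you never leave the line through v.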
The Full Span of Two Non-Parallel Vectors
Now back to the interesting case. When v and w point in different directions, something powerful happens. You can reach any point in the entire 2D plane.
Think about it: pick any target point p. You need to solve a·v + b·w = p for a and b. Because the vectors aren't parallel, they form two independent "knobs" you can dial -- one pushes you along v's direction, the other along w's. Between the two, you can steer to any destination.
Two non-parallel vectors span the entire 2D plane. The light orange shading represents every reachable point. You can reach any position by dialing the right pair of scalars.
This is the key insight. Two non-parallel vectors in 2D give you full coverage. One vector gives you a line. Two non-parallel vectors give you the whole plane. Adding more vectors to the mix won't expand your reach any further -- you already have everywhere.
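The two "knobs" can be dialed explicitly: a·v + b·w = p is a 2x2 linear system, and it has exactly one solution whenever v and w aren't parallel. A sketch using Cramer's rule, with assumed example values:

```javascript
// Solve a*v + b*w = target for (a, b) via Cramer's rule.
// Works exactly when v and w are not parallel (nonzero determinant).
function solveCoefficients(v, w, target) {
  const det = v[0] * w[1] - v[1] * w[0];
  if (det === 0) throw new Error("v and w are parallel -- span is only a line");
  const a = (target[0] * w[1] - target[1] * w[0]) / det;
  const b = (v[0] * target[1] - v[1] * target[0]) / det;
  return [a, b];
}

const v = [2, 1], w = [1, 2]; // example non-parallel vectors
console.log(solveCoefficients(v, w, [4, 5])); // [1, 2]: 1*v + 2*w = [4, 5]
```

Any target you name produces a unique (a, b), which is the computational face of "two non-parallel vectors span the plane."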
The Redundant Vector
So what happens when you throw in a third vector? If the first two already span the whole plane, the third one must land somewhere in that plane. It's already reachable. It adds nothing.
Let's say u = v + w. Can we write u as a combination of v and w? Yes: u = 1·v + 1·w.
u is already reachable from v and w. The dashed lines show the parallelogram: go along v then w, or w then v -- either way you arrive at u. The third vector is redundant -- it doesn't expand the span.
This is called linear dependence. The set {v, w, u} is linearly dependent because one of them is a combination of the others. It's a passenger, not a driver.
The Formal Bit
Time to put precise language on these ideas.
Linear Combination
A linear combination of vectors v₁, v₂, …, vₙ is any expression of the form:
c₁v₁ + c₂v₂ + … + cₙvₙ
where c₁, …, cₙ are real numbers (called scalars or coefficients). You scale each vector, then add them up.
Span
The span of a set of vectors is the collection of all linear combinations you can form:
span(v₁, …, vₙ) = { c₁v₁ + … + cₙvₙ : c₁, …, cₙ real numbers }
In plain English: span is everywhere you can reach.
- One non-zero vector spans a line through the origin.
- Two non-parallel vectors in 2D span the entire plane.
- Three non-coplanar vectors in 3D span all of 3D space.
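The 3D condition in the last bullet has a clean numeric check: three vectors are non-coplanar exactly when their scalar triple product u · (v × w) is nonzero. A small sketch (the vectors are assumed examples):

```javascript
// Scalar triple product u . (v x w): nonzero iff u, v, w are non-coplanar,
// i.e. iff they span all of 3D space.
const tripleProduct = (u, v, w) =>
  u[0] * (v[1] * w[2] - v[2] * w[1]) -
  u[1] * (v[0] * w[2] - v[2] * w[0]) +
  u[2] * (v[0] * w[1] - v[1] * w[0]);

console.log(tripleProduct([1, 0, 0], [0, 1, 0], [0, 0, 1])); // 1 -> spans 3D
console.log(tripleProduct([1, 0, 0], [0, 1, 0], [1, 1, 0])); // 0 -> coplanar, spans only a plane
```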
Linear Dependence and Independence
A set of vectors is linearly dependent if at least one vector in the set can be written as a linear combination of the others. Equivalently, there exist scalars c₁, …, cₙ, not all zero, such that:
c₁v₁ + c₂v₂ + … + cₙvₙ = 0
A set is linearly independent if no vector is redundant. Each one expands the reach. Removing any one of them shrinks the span.
The visual test: if a vector already lives inside the span of the others, it's dependent. If it points "out of" the current span into a new dimension, it's independent.
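In 2D, this visual test has a one-line numeric counterpart: two vectors are independent exactly when the determinant of the 2x2 matrix with those vectors as columns is nonzero. A sketch with example values:

```javascript
// Two 2D vectors are linearly independent iff det([v w]) != 0.
const independent2D = (v, w) => v[0] * w[1] - v[1] * w[0] !== 0;

console.log(independent2D([2, 1], [1, 2])); // true  -- they span the plane
console.log(independent2D([1, 2], [2, 4])); // false -- parallel, span only a line
```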
Worked Example: RGB Color Mixing
Here's a concrete application every programmer knows. Think of colors as vectors in 3D space, where the three axes are Red, Green, and Blue intensity (each from 0 to 1).
Define three color vectors: R = (1, 0, 0), G = (0, 1, 0), and B = (0, 0, 1).
What does the span of just Red and Green look like? It's every combination a·R + b·G -- a flat plane in 3D color space. You get:
- R + G = (1, 1, 0) -- yellow
- a scaled-down mix like 0.8·R + 0.4·G -- dark orange
- 0.5·G = (0, 0.5, 0) -- dark green
Lots of colors, but no blues, purples, or anything with a blue component. You're stuck on a plane.
Add Blue. Now span(R, G, B) is all of 3D color space. Every displayable color is a combination of these three. Blue is linearly independent of Red and Green -- it points in a direction you couldn't reach before.
Now define Yellow: Y = R + G = (1, 1, 0).
Does adding Yellow as a fourth "basis color" let you mix any new colors? No. Yellow is already in the span of Red and Green. It's linearly dependent: span(R, G, B, Y) = span(R, G, B) -- the same set. Adding a redundant vector changes nothing.
This is exactly why monitors use three primary colors: three linearly independent vectors span 3D color space. A fourth primary would be redundant.
// In code: any color is a linear combination of RGB
const R = [1, 0, 0];
const G = [0, 1, 0];
const B = [0, 0, 1];
// Yellow = R + G -- already in span(R, G)
const yellow = R.map((v, i) => 1 * v + 1 * G[i] + 0 * B[i]);
// [1, 1, 0]
// Purple needs Blue — not in span(R, G) alone
const purple = R.map((v, i) => 0.5 * v + 0 * G[i] + 0.5 * B[i]);
// [0.5, 0, 0.5]
Key Takeaway: Span is the set of all places you can reach by scaling and adding. Linearly dependent vectors are redundant passengers -- they don't expand your reach.
What's Next
We know what combinations of vectors look like and how to tell whether a vector set is redundant. But what if we had a machine that takes every vector in space and moves it somewhere new -- systematically? Stretching, rotating, squishing the whole plane at once, while keeping grid lines straight and evenly spaced? That's a linear transformation, and it's the subject of Chapter 3.