Chapter 15
Orthogonal Matrices & Gram-Schmidt
Some transformations preserve distances and angles perfectly -- no stretching, no squishing. These are orthogonal matrices, and they're built from orthonormal vectors.
Most matrices distort space. They stretch some directions, compress others, skew angles. But a special class of matrices does none of that. An orthogonal matrix transforms space rigidly -- it can rotate and reflect, but every distance and every angle comes through unchanged. If you apply an orthogonal transformation to a shape, it comes out looking exactly like the original, just moved.
This isn't just elegant mathematics. Orthogonal matrices are everywhere in practice: rotation matrices in 3D graphics, the factor in QR decomposition, the basis vectors of a camera's coordinate system, and the orthonormal bases used in signal processing. Understanding them -- and knowing how to build them with Gram-Schmidt -- is essential.
Orthogonal matrices preserve the unit circle
The best way to see what orthogonal matrices do is to watch them act on the unit circle. Take every point at distance 1 from the origin and transform it. An orthogonal matrix keeps every point at distance 1 -- the circle stays a perfect circle. A non-orthogonal matrix distorts it into an ellipse.
On the left, a 45-degree rotation (orthogonal). On the right, a stretch by 2 along the x-axis and a compression by 0.5 along the y-axis (not orthogonal). The rotation preserves the circle perfectly. The scaling destroys it.
Left: a rotation matrix maps the unit circle to itself -- distances are perfectly preserved. Right: a non-orthogonal scaling matrix stretches the circle into an ellipse, distorting all distances.
This is the defining visual property of orthogonal matrices. They transform space rigidly. The unit circle is the set of all vectors with length 1, and an orthogonal matrix maps every length-1 vector to another length-1 vector. No direction gets stretched or compressed.
Q^TQ = I visually
What makes a matrix orthogonal? Its columns must be orthonormal -- each column is a unit vector, and any two distinct columns are perpendicular. When you stack these column vectors into a matrix Q, the product Q^TQ gives you the identity matrix.
Why? The (i, j) entry of Q^TQ is the dot product of column i and column j. If the columns are orthonormal, each column dotted with itself gives 1 (they're unit vectors), and any column dotted with a different column gives 0 (they're perpendicular). That's exactly the identity matrix.
Here are the two columns of a rotation matrix -- q_1 = (cos θ, sin θ) and q_2 = (-sin θ, cos θ) -- shown as vectors. They have length 1, and the right angle between them is marked explicitly.
The columns of Q are unit vectors at right angles. When you compute Q^TQ, each diagonal entry is a column dotted with itself (giving 1), and each off-diagonal entry is a column dotted with a different column (giving 0). That's the identity matrix.
This is why orthogonal matrices are so convenient: Q^-1 = Q^T. You never need to compute an inverse the hard way. Just transpose. This makes orthogonal matrices computationally cheap to invert -- a free operation in terms of data, since transposing just reinterprets the same numbers.
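A quick numeric check of both facts -- Q^TQ = I and the transpose being the inverse -- using a rotation matrix:

```python
import numpy as np

# A 45-degree rotation matrix: its columns are orthonormal.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
Q = np.array([[c, -s], [s, c]])

# Diagonal entries of Q^T Q are columns dotted with themselves (1);
# off-diagonal entries are distinct columns dotted together (0).
print(np.round(Q.T @ Q, 6))

# The transpose really is the inverse -- no elimination required.
print(np.allclose(np.linalg.inv(Q), Q.T))
```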
Gram-Schmidt step 1: the projection
Now the practical question: how do you build an orthonormal basis? You start with whatever vectors you have -- possibly skewed, non-perpendicular, different lengths -- and systematically straighten them out.
The Gram-Schmidt process does this one vector at a time. Start with two linearly independent vectors v_1 and v_2. They're not perpendicular (their dot product is nonzero) and they're not unit length.
Step 1: Keep v_1 as is (we'll call it u_1). Then compute the projection of v_2 onto u_1:

proj_{u_1}(v_2) = ((v_2 · u_1) / (u_1 · u_1)) u_1

This projection is the component of v_2 that lies along u_1 -- the part we need to remove to make v_2 perpendicular to u_1.
The orange vector is the projection of v_2 onto u_1 -- the "shadow" of v_2 along the u_1 direction. The dashed line from the tip of v_2 down to the projection tip drops at a right angle to u_1.
The projection captures how much of v_2 is "in the direction of" u_1. The rest -- the perpendicular part -- is what we keep.
Gram-Schmidt step 2: subtract and orthogonalize
Step 2: Subtract the projection from v_2 to get the perpendicular component:

u_2 = v_2 - proj_{u_1}(v_2)
Let's verify: u_1 · u_2 = u_1 · v_2 - ((v_2 · u_1) / (u_1 · u_1))(u_1 · u_1) = u_1 · v_2 - v_2 · u_1 = 0. They're perpendicular.
The purple vector u_2 is perpendicular to u_1 -- verified by the zero dot product. It was created by subtracting the projection (the component of v_2 along u_1) from v_2. The gold right-angle marker confirms orthogonality. To finish, normalize both vectors to unit length.
That's the entire Gram-Schmidt algorithm in two dimensions. You take the first vector as-is, project the second onto it, subtract the projection, and you have two perpendicular vectors. Normalize them to unit length and you have an orthonormal basis.
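The two steps above translate directly into code. A minimal sketch with illustrative starting vectors (the specific values here are chosen for the example, not taken from the chapter's figures):

```python
import numpy as np

# Two non-perpendicular, non-unit vectors (illustrative values).
v1 = np.array([3.0, 1.0])
v2 = np.array([2.0, 2.0])

u1 = v1                                    # step 1: keep the first vector
proj = (v2 @ u1) / (u1 @ u1) * u1          # projection of v2 onto u1
u2 = v2 - proj                             # step 2: subtract the projection

print(u1 @ u2)  # 0.0 -- perpendicular

# Normalize both to get an orthonormal basis.
q1 = u1 / np.linalg.norm(u1)
q2 = u2 / np.linalg.norm(u2)
```

Here proj = 0.8 · (3, 1) = (2.4, 0.8), so u_2 = (-0.4, 1.2), and the dot product with u_1 vanishes exactly.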
In higher dimensions, the same idea extends naturally: for each new vector v_k, subtract its projections onto all previously computed u_1, ..., u_{k-1}. What remains is the component perpendicular to all of them.
The formal bit
An orthogonal matrix Q is a square matrix whose columns are orthonormal vectors. This means:

Q^TQ = I

which immediately gives us Q^-1 = Q^T. The inverse is just the transpose -- no computation needed.
Key properties of orthogonal matrices:
- Preserves lengths: ||Qx|| = ||x|| for all x
- Preserves angles: the angle between Qx and Qy equals the angle between x and y
- Preserves dot products: (Qx) · (Qy) = x · y
- Determinant is ±1: +1 for rotations, -1 for reflections
The proofs are clean. For dot product preservation: (Qx) · (Qy) = (Qx)^T(Qy) = x^T Q^TQ y = x^T y = x · y. Length preservation follows because ||Qx||^2 = (Qx) · (Qx) = x · x = ||x||^2.
The Gram-Schmidt process takes a set of linearly independent vectors {v_1, ..., v_n} and produces an orthonormal set {q_1, ..., q_n}:

u_k = v_k - Σ_{j<k} proj_{u_j}(v_k),   q_k = u_k / ||u_k||
Each step subtracts the projections onto all previously computed directions, leaving only the component perpendicular to all of them.
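The general procedure fits in a few lines. This sketch normalizes each direction as it goes, so each projection simplifies to (v · q) q since q · q = 1:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors.

    For each v_k, subtract its projections onto all previously
    computed directions, then normalize what remains.
    """
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for q in basis:
            u -= (u @ q) * q          # q is unit length, so proj = (u . q) q
        basis.append(u / np.linalg.norm(u))
    return np.array(basis)

# Three skewed vectors in R^3 (illustrative values).
V = [[1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
Q = gram_schmidt(V)
print(np.round(Q @ Q.T, 6))  # identity: the rows are orthonormal
```

Subtracting against the already-normalized directions one at a time is the "modified" Gram-Schmidt variant, which behaves better numerically than forming all projections from the raw vectors.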
Worked example: building a camera basis
Here's where orthogonal matrices earn their keep in practice. In 3D graphics, every camera needs an orthonormal basis -- three perpendicular unit vectors that define its local coordinate system: forward (where it looks), right (the horizontal axis of the image), and up (the vertical axis of the image). These three vectors form the columns of a rotation matrix that transforms from world coordinates to camera coordinates.
The problem: you're given a look-at direction and a rough world up vector, and you need to produce a clean orthonormal basis.
Suppose the camera is at position p looking at a target point t. The look direction is:

forward = (t - p) / ||t - p||
Let's say the resulting forward vector points down and to the right -- the camera is looking down and to the right.
Now we need a right vector perpendicular to forward. We use the world's up direction as our starting point. But world_up isn't perpendicular to forward -- the camera is tilted. This is exactly the situation Gram-Schmidt was built for.
Step 1: Compute the right vector.
We need a direction perpendicular to forward in the horizontal plane. The cross product handles this:

right = (forward × world_up) / ||forward × world_up||
Step 2: Compute the true up vector.
The true up must be perpendicular to both forward and right:

up = right × forward
This is already unit length -- the cross product of two perpendicular unit vectors is itself a unit vector. Our orthonormal basis is {right, up, forward}.
You can verify: every pair has dot product zero, and every vector has length 1. The view matrix is the orthogonal matrix R with these three vectors as its rows (or columns, depending on convention).
Since R is orthogonal, R^-1 = R^T -- and you get the inverse view matrix for free. This is why game engines and graphics APIs use orthonormal bases everywhere: the math stays clean and the inversions stay cheap.
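The whole construction is a few lines of code. This is a sketch, not any particular engine's API: the function name look_at_basis is made up for this example, and the sign conventions (forward = target - eye, right = forward × world_up) assume a right-handed system -- real graphics APIs differ on both points:

```python
import numpy as np

def look_at_basis(eye, target, world_up=np.array([0.0, 1.0, 0.0])):
    """Build an orthonormal camera basis via the 3D cross-product
    shortcut to Gram-Schmidt. Conventions assumed: right-handed
    coordinates, forward = target - eye."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, world_up)
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)          # already unit length
    return right, up, forward

# Illustrative camera position and target.
r, u, f = look_at_basis(np.array([0.0, 2.0, 5.0]), np.array([1.0, 0.0, 0.0]))
R = np.stack([r, u, f])                    # rows are the basis vectors

print(np.allclose(R @ R.T, np.eye(3)))     # True: R is orthogonal, R^-1 = R^T
```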
This same pattern shows up whenever you need a local coordinate frame: constructing tangent-space bases for normal mapping, building coordinate frames for physics simulations, or setting up reference frames for robot joints. Gram-Schmidt (or its cross-product shortcut in 3D) is how you get there.
Key Takeaway: Orthogonal matrices are pure rotations and reflections -- they preserve shape perfectly. Gram-Schmidt builds orthonormal bases from any linearly independent set. In practice, orthogonal matrices are everywhere: camera view matrices, QR decomposition, change-of-basis for physics and graphics. Their defining superpower is that Q^-1 = Q^T -- the inverse is free.
What's next
We've built the theoretical foundation. Now let's put it all to work -- starting with the practical mechanics of 2D transformations in computer graphics.