Chapter 7

Inverse Matrices & Systems of Equations

If a transformation squishes (2, 3) to (5, 1), can you undo it? Can you figure out where (5, 1) came from?

That question -- running a transformation backwards -- is the heart of this chapter. It connects matrix inverses to something you've done a thousand times in code: solving for unknowns. Every system of linear equations is secretly a question about undoing a transformation.

A transformation applied

Let's start with a concrete transformation described by the matrix:

A = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}

This takes î to (1, -1) and ĵ to (1, 1). It's a combination of rotation and scaling -- the grid tilts and stretches. Watch what happens to the vector (2, 3):

A\begin{bmatrix} 2 \\ 3 \end{bmatrix} = 2\begin{bmatrix} 1 \\ -1 \end{bmatrix} + 3\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \end{bmatrix}

The blue arrow is the original vector (2, 3). After the transformation, it lands at the orange point (5, 1).


The transformation A warps the grid and sends (2, 3) to (5, 1). The grid lines are still straight and evenly spaced -- the hallmark of a linear transformation.
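The column-combination view above translates directly into code. A minimal sketch in pure Python (the names `i_hat`, `j_hat`, and `apply` are my own, not from the text):

```python
# Apply A = [[1, 1], [-1, 1]] to a vector by combining A's columns,
# exactly as in the equation above: 2 * (1, -1) + 3 * (1, 1).

i_hat = (1, -1)  # first column of A: where i-hat lands
j_hat = (1, 1)   # second column of A: where j-hat lands

def apply(x, y):
    """Return x * i_hat + y * j_hat -- the transformed vector."""
    return (x * i_hat[0] + y * j_hat[0],
            x * i_hat[1] + y * j_hat[1])

print(apply(2, 3))  # (5, 1)
```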

The inverse undoing it

Now here's the key question: if someone hands you the output (5, 1) and the matrix A, can you find the original input?

You need a transformation that does the exact opposite -- one that unwarps the grid back to normal. That's the inverse matrix A⁻¹. It reverses everything perfectly:

A^{-1} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 0.5 & -0.5 \\ 0.5 & 0.5 \end{bmatrix}

Apply it to the output (5, 1):

A^{-1}\begin{bmatrix} 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.5 & -0.5 \\ 0.5 & 0.5 \end{bmatrix}\begin{bmatrix} 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}

We're back to (2, 3). The grid snaps back to its original form.


The inverse transformation reverses everything perfectly. The grid is restored to its original form, and (5, 1) returns to (2, 3).

The inverse matrix is the "undo" button for a linear transformation. If A warps space, A⁻¹ unwarps it. If A rotates 45 degrees clockwise, A⁻¹ rotates 45 degrees counterclockwise. Every step of the transformation is reversed.
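The round trip is easy to check numerically. A sketch in pure Python, with matrices as nested lists and a hypothetical `mat_vec` helper (not from the text):

```python
# Round trip: apply A, then A⁻¹, and land back where we started.

A     = [[1.0, 1.0], [-1.0, 1.0]]
A_inv = [[0.5, -0.5], [0.5, 0.5]]

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

out  = mat_vec(A, (2, 3))    # (5.0, 1.0): the warped vector
back = mat_vec(A_inv, out)   # (2.0, 3.0): the "undo" button
print(out, back)
```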

When there is no inverse: det = 0

Not every transformation can be undone. Consider this matrix:

B = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}

Both columns are the same: (1, 1). That means î and ĵ both land on the same point. The entire 2D plane gets squished down to a single line.

Look at what happens: multiple different input vectors all get mapped to the same output. The vectors (1, 0), (0, 1), and (2, -1) all land at the point (1, 1). If someone shows you the output (1, 1) and asks "what was the input?" -- you can't answer. It could have been any of them. The information about which input you started with is gone.


When space collapses, information is lost. There's no way back. Three different inputs all land at the same output -- you can't tell which one you started with.

This is the geometric meaning of det(A) = 0. The transformation crushes a dimension. The plane becomes a line (or even a point). Once that happens, there's no inverse because the mapping isn't one-to-one anymore. You've lost a dimension of information, and no amount of cleverness can recover it.

Think of it like a hash function: many inputs map to the same output, so you can't reverse it. Except here, the reason is geometric -- space was physically squished flat.
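The collision is easy to demonstrate. A quick sketch in pure Python (the `mat_vec` helper is my own name), showing the three inputs from the text colliding at (1, 1) while B's determinant is zero:

```python
# B = [[1, 1], [1, 1]] squishes the plane onto a line:
# distinct inputs collide at the same output, so there is no way back.

B = [[1, 1], [1, 1]]

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]
print(det_B)  # 0 -- a dimension is crushed, no inverse exists

for v in [(1, 0), (0, 1), (2, -1)]:
    print(v, "->", mat_vec(B, v))  # all three land on (1, 1)
```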

Systems of equations as a transformation

Here's where inverse matrices become genuinely useful. Consider a system of equations:

\begin{cases} x + y = 5 \\ -x + y = 1 \end{cases}

You can rewrite this as a matrix equation:

\underbrace{\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} x \\ y \end{bmatrix}}_{\vec{x}} = \underbrace{\begin{bmatrix} 5 \\ 1 \end{bmatrix}}_{\vec{b}}

Read this geometrically: you know the transformation A, and you know the output b = (5, 1). You're looking for the input x that lands there. That's exactly the question from the beginning of this chapter.

If A is invertible, the answer is immediate: x = A⁻¹b. Apply the inverse transformation to the output, and you get the input.


Solving Ax = b means: given the transformation A and the output b (orange), find the input x (blue). If A is invertible, just apply A⁻¹ to the output.

This reframing is powerful. Every system of linear equations is a question about a transformation: "what input produces this output?" And the answer, when it exists, is always the same: undo the transformation by applying the inverse.

The formal bit

Here are the key facts about inverse matrices, stated precisely.

The inverse undoes the transformation:

A^{-1}A = AA^{-1} = I

where I is the identity matrix -- the "do nothing" transformation. Applying A then A⁻¹ (or vice versa) is the same as doing nothing at all.

For a 2x2 matrix, the inverse has an explicit formula:

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \qquad A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}

That ad - bc in the denominator is the determinant. You swap the diagonal entries, negate the off-diagonal entries, and divide by the determinant. If the determinant is zero, you're dividing by zero -- no inverse exists.
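The formula is a few lines of code. A minimal sketch in pure Python (the function name `inverse_2x2` is my own), returning `None` when the determinant is zero:

```python
# The 2x2 inverse formula: swap the diagonal, negate the
# off-diagonal, divide by the determinant.

def inverse_2x2(M):
    """Return the inverse of a 2x2 matrix, or None if det == 0."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        return None  # space was flattened; no inverse exists
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

print(inverse_2x2([[1, 1], [-1, 1]]))  # [[0.5, -0.5], [0.5, 0.5]]
print(inverse_2x2([[1, 1], [1, 1]]))   # None: det = 0
```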

Solving a linear system:

A\vec{x} = \vec{b} \qquad \Longrightarrow \qquad \vec{x} = A^{-1}\vec{b}

Multiply both sides on the left by A⁻¹. On the left side, A⁻¹A = I, so you're left with x.

When does the inverse exist?

\text{Invertible} \iff \det(A) \neq 0 \iff \text{nothing collapses}

If the determinant is zero, the transformation squishes space into a lower dimension. There's no way to unsquish it because multiple inputs map to the same output.

Worked example: solving a 2x2 system

Let's solve a concrete system from start to finish.

\begin{cases} 2x + y = 5 \\ x + 3y = 7 \end{cases}

Step 1: Write it as a matrix equation.

\begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \end{bmatrix}

The matrix A encodes the coefficients. The vector b holds the right-hand sides.

Step 2: Check the determinant.

\det(A) = 2 \cdot 3 - 1 \cdot 1 = 5

Not zero, so the inverse exists. We can solve uniquely.

Step 3: Compute the inverse.

A^{-1} = \frac{1}{5}\begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix}

Swap the diagonal (2 ↔ 3), negate the off-diagonal (1 → -1), divide by det = 5.

Step 4: Multiply to get the solution.

\vec{x} = A^{-1}\vec{b} = \frac{1}{5}\begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 5 \\ 7 \end{bmatrix} = \frac{1}{5}\begin{bmatrix} 15 - 7 \\ -5 + 14 \end{bmatrix} = \frac{1}{5}\begin{bmatrix} 8 \\ 9 \end{bmatrix} = \begin{bmatrix} 1.6 \\ 1.8 \end{bmatrix}

So x = 1.6 and y = 1.8. You can verify: 2(1.6) + 1.8 = 5 and 1.6 + 3(1.8) = 7.
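The same four steps can be run in pure Python. A sketch of the worked example (variable names are my own; in practice you'd reach for a linear-algebra library rather than hand-rolling this):

```python
# Solve the worked system 2x + y = 5, x + 3y = 7 via x = A⁻¹ b.

A = [[2, 1], [1, 3]]
b = (5, 7)

# Step 2: the determinant (2*3 - 1*1 = 5, nonzero, so solvable).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Step 3: swap diagonal, negate off-diagonal, divide by det.
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

# Step 4: multiply A⁻¹ by b to get the solution.
x = A_inv[0][0] * b[0] + A_inv[0][1] * b[1]  # ≈ 1.6
y = A_inv[1][0] * b[0] + A_inv[1][1] * b[1]  # ≈ 1.8
print(x, y)

# Plug back into the original equations to verify.
assert abs(2 * x + y - 5) < 1e-9
assert abs(x + 3 * y - 7) < 1e-9
```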

The geometric picture: each equation is a line. The solution is where they intersect.


Two lines, one intersection. The solution (1.6, 1.8) is the point where both equations are satisfied simultaneously. When det ≠ 0, the lines always cross at exactly one point.

When the determinant is zero, the two lines are parallel (no solution) or the same line (infinitely many solutions). Either way, there's no single unique answer -- which is exactly what "no inverse" means geometrically.
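All three cases can be distinguished mechanically. A sketch in pure Python (the `classify` function and the degenerate example systems are my own, assuming exact integer coefficients: x + y = 5 with x + y = 7 is parallel, x + y = 5 with 2x + 2y = 10 is the same line twice):

```python
# Classify a 2x2 system A x = rhs: one solution, none, or infinitely many.

def classify(A, rhs):
    (a, b), (c, d) = A
    e, f = rhs
    det = a * d - b * c
    if det != 0:
        return "one unique solution"        # the lines cross once
    # det == 0: rows are proportional; is rhs proportional the same way?
    if a * f == c * e and b * f == d * e:
        return "infinitely many solutions"  # the same line twice
    return "no solution"                    # parallel lines

print(classify([[2, 1], [1, 3]], (5, 7)))   # one unique solution
print(classify([[1, 1], [1, 1]], (5, 7)))   # no solution
print(classify([[1, 1], [2, 2]], (5, 10)))  # infinitely many solutions
```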

Key Takeaway: An inverse matrix undoes a transformation. It exists only when the determinant isn't zero -- you can't unsquish a collapsed space. Solving a system of equations Ax = b is the same as asking "what input does this transformation map to b?" and the answer is x = A⁻¹b.

What's next

When a transformation does collapse space, it doesn't reach all of the output space. The part it does reach is the column space, and the dimension of that part is the rank. These ideas tell you exactly how much a transformation preserves and how much it destroys. That's Chapter 8: Column Space & Rank.