As we have observed, these manipulations of equations are similar to the row operations studied in Section 2. To make this precise, we will translate systems of linear equations into augmented matrix form. These are the split arrays we have already used for inverting matrices in Section 3; they allow us to perform elementary operations on two matrices simultaneously.
The purpose of this section is to convince you of the following fundamental correspondence:
Solving a system of equations by successive elimination of variables
is the same as
Row reduction of the associated augmented matrix.
So we will be able to use the row reduction algorithm from Section 2 to solve systems of linear equations. We will make this correspondence explicit by looking again at Examples 5.1 and 5.1.
In Example 5.1 we had the system of linear equations
The above system is the same as saying the following two matrices are equal:
To translate this into augmented matrix form, we omit the variables and the equals signs, and write
Each column in the left-hand side contains the coefficients of a single variable, where the variables are listed in some fixed order, and each row corresponds to an equation.
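Since the equations of Example 5.1 are not reproduced here, the translation can be sketched on a hypothetical two-variable system, say 2x + 3y = 7 and x − y = 1 (an assumption for illustration only). Each row of the array holds the coefficients of one equation in the fixed variable order (x, y), with the right-hand side in the last column:

```python
from fractions import Fraction

# Hypothetical system (NOT the actual Example 5.1):
#   2x + 3y = 7
#    x -  y = 1
# Each row = one equation; columns = coefficients of x, of y, and
# the right-hand side, in that fixed order.
augmented = [
    [Fraction(2), Fraction(3), Fraction(7)],
    [Fraction(1), Fraction(-1), Fraction(1)],
]

for row in augmented:
    print(row)
```

Using exact `Fraction` entries rather than floats mirrors the hand computation: row reduction over the rationals introduces no rounding.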
Now let’s perform the same row operations from Example 5.1:
This is the reduced echelon form of the augmented matrix (see Definition 2.2.3). If we now translate back into the linear equations, this final augmented matrix corresponds to
Therefore we can conclude that the values read off from this matrix give the solution to the original system of equations (See Theorem 5.2.4 below).
Let us now put Example 5.1 into augmented matrix form, and row reduce it. The augmented matrix form of these equations is
Proceed with the row reduction as follows:
Using the techniques developed in Section 2, the reader can fill in the intermediate steps. Now we translate this final augmented matrix back into equation form, and find the same answer obtained in Example 5.1.
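The intermediate steps can also be carried out mechanically. The sketch below implements the row reduction algorithm of Section 2 (swap, scale, eliminate) and runs it on the same hypothetical system 2x + 3y = 7, x − y = 1 used above; the actual matrices of Example 5.1 are not reproduced here.

```python
from fractions import Fraction

def rref(matrix):
    """Row-reduce a matrix to reduced echelon form using the three
    elementary row operations: swap, scale by a non-zero scalar,
    and add a multiple of one row to another."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a non-zero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]               # swap
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]  # scale pivot to 1
        for r in range(rows):                                         # clear the column
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

# Hypothetical system 2x + 3y = 7, x - y = 1:
result = rref([[2, 3, 7], [1, -1, 1]])
# The last column of the reduced echelon form reads off x = 2, y = 1.
```

Translating `result` back into equations gives x = 2 and y = 1, exactly the read-off described in the text.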
An elementary operation on a system of linear equations is one of the following (See also Definition 2.1.3):
Multiplying one equation by a non-zero scalar. [Same as ]
Adding a multiple of one equation to another. [Same as ]
Swapping two equations. [Same as ]
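The three operations of Definition 5.2.1 can be written out directly as functions acting on the rows of an augmented matrix (each row standing for one equation); this is only an illustrative sketch, with the function names chosen here rather than taken from the text:

```python
def scale(rows, i, c):
    """Multiply equation i by a non-zero scalar c."""
    assert c != 0, "scaling by zero is not an elementary operation"
    rows = [r[:] for r in rows]          # copy, so the input is untouched
    rows[i] = [c * x for x in rows[i]]
    return rows

def replace(rows, i, j, c):
    """Add c times equation j to equation i."""
    rows = [r[:] for r in rows]
    rows[i] = [a + c * b for a, b in zip(rows[i], rows[j])]
    return rows

def swap(rows, i, j):
    """Swap equations i and j."""
    rows = [r[:] for r in rows]
    rows[i], rows[j] = rows[j], rows[i]
    return rows

# Demo on a hypothetical augmented matrix for 2x + 3y = 7, x - y = 1:
rows = [[2, 3, 7], [1, -1, 1]]
stepped = replace(rows, 0, 1, -2)   # subtract twice equation 2 from equation 1
```

Note the guard in `scale`: multiplying by zero is deliberately excluded, for exactly the reason given in Example 5.2.3 below.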
Two systems of linear equations in the same variables are equivalent if they have the same set of solutions.
Example 5.2.3.
Consider the system of the two linear equations (2) and (3) from Example 5.1. If we multiply either of these equations by a non-zero scalar, then clearly this does not change the set of solutions. But if we multiplied one of them by zero, it would become the equation 0 = 0, which is always true and therefore imposes no constraint. In other words, the set of solutions changes if we multiply one of these equations by zero.
Performing an elementary operation (see Definition 5.2.1) on a system of linear equations takes it into an equivalent system of linear equations.
If we have a solution to a system of linear equations, and an elementary operation is performed, then it is clear that the same solution satisfies the new system of equations. So the operations do not lose solutions. The theorem now follows because all of these operations can be “undone”, so they cannot gain solutions either. In particular, multiplying an equation by a non-zero scalar c can be undone by multiplying by 1/c, adding a multiple of one equation to another can be undone simply by subtracting the same multiple, and swapping two equations is undone by swapping them back.
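The “undoing” in this proof can be checked concretely. The sketch below applies each elementary operation to a hypothetical augmented matrix (not the actual system from Example 5.1) and then applies its inverse, recovering the original in every case:

```python
from fractions import Fraction

# Hypothetical augmented matrix for a 2x2 system (illustration only).
original = [[Fraction(2), Fraction(3), Fraction(7)],
            [Fraction(1), Fraction(-1), Fraction(1)]]

# Scale row 0 by a non-zero c, then undo by scaling by 1/c.
c = Fraction(5)
scaled = [[c * x for x in original[0]], original[1]]
undone = [[x / c for x in scaled[0]], scaled[1]]
assert undone == original

# Add c times row 1 to row 0, then undo by subtracting c times row 1.
added = [[a + c * b for a, b in zip(original[0], original[1])], original[1]]
undone = [[a - c * b for a, b in zip(added[0], added[1])], added[1]]
assert undone == original

# A swap is its own inverse.
swapped = [original[1], original[0]]
assert [swapped[1], swapped[0]] == original
```

This is exactly why c = 0 is excluded from the scaling operation: it is the one case with no inverse, as Example 5.2.3 shows.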