## Introduction

This is a quick post explaining (overdetermined) systems of linear equations, particularly their matrix form. It is background for an upcoming article that deals quite a lot with these systems.

**Warning:** Please note that I’m not a mathematician and as such this text may be riddled with errors and bullshit. There are far better explanations and tutorials floating about the Internet, or even better, take a university course in maths – it’s worth it! This is just a quick explanation so we can get on the same track on this topic.

## A linear equation

A simple linear equation is an equation in which the unknown appears only linearly: it can be multiplied by a constant, but cannot be involved in any other way (no squares, roots, products of unknowns and so on). As such, the general form of a linear equation is:

\LARGE ax = b

where **a** and **b** are constants and **x** is the unknown.

A solution of an equation is a value that makes the left hand side (in this case **ax**) equal to the right hand side (in this case **b**). Any single linear equation with **a** ≠ 0 has exactly one solution. So for:

\LARGE 3x = 42

The constants are **a** = 3, **b** = 42. **x** can be easily calculated – **x** = 14. And that’s that.
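As a tiny sanity check, here is the same calculation in Python (just to have something executable – dividing both sides by **a** gives **x**):

```python
# Solve the single linear equation a*x = b for x (assuming a != 0).
a = 3
b = 42
x = b / a
print(x)  # → 14.0
```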

## More linear equations – systems of linear equations

You can have sets of linear equations that together describe some problem involving multiple unknowns. An example of such a system follows:

2x_1 + 3x_2 + 4x_3 = 8

-8x_1 + 4x_3 = 6

6x_1 + 10x_2 + 9x_3= 1

2x_1 + 4x_2 - 6x_3 = 3

This system has four equations in three unknowns. As such it is overdetermined – there are more equations than there are unknowns.

It can be written down in another useful form – the matrix form. The matrix form is generally formatted as follows:

\LARGE \mathbf{Ax=b}

where **A** is the coefficient matrix, **x** is the vector of unknowns and **b** is the right hand side vector. **A** and **x** are matrix-multiplied to get **b**. The values for this particular set of equations are:

\mathbf{x}=\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}

\mathbf{A} = \begin{bmatrix} 2 & 3 & 4 \\ -8 & 0 & 4 \\ 6 & 10 & 9 \\ 2 & 4 & -6 \end{bmatrix}

\mathbf{b}=\begin{bmatrix}8\\6\\1\\3\end{bmatrix}

Note that the individual columns of the **A** matrix correspond to the coefficients of the individual unknowns, while the individual rows of both **A** and **b** correspond to the individual equations. Note that x_2 is not involved at all in the second equation, so its coefficient in the coefficient matrix is zero.

So, a matrix form of a system of linear equations is just a nice way to express the equations.
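To make the “matrix-multiplied” part concrete, here is a small sketch in pure Python (no libraries) that stores the example’s **A** and multiplies it by a candidate **x** – each entry of the result is one equation’s left hand side:

```python
# Coefficient matrix of the example system; each row is one equation.
A = [[ 2,  3,  4],
     [-8,  0,  4],
     [ 6, 10,  9],
     [ 2,  4, -6]]

def matvec(A, x):
    """Multiply matrix A by column vector x, one row (equation) at a time."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Plugging in a candidate x evaluates all four left hand sides at once:
print(matvec(A, [1, 1, 1]))  # → [9, -4, 25, 0]
```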

## Solutions?

A system of linear equations can have:

- **Exactly one solution** – exactly one set of values fits all the equations – the constraints pin down a single set of numbers
- **An infinite number of solutions** – the constraints are loose enough that one or more variables can take any value
- **No exact solution** – there is no set of numbers that fits all of the constraints at the same time

If you look at the system in the example you can calculate that no exact solution is possible. However, there is an **approximate solution**. An approximate solution exists for any system: it is the vector that, when substituted for **x**, comes closest to satisfying all the equations.
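“Closest” is usually made precise with the least-squares criterion: the approximate solution is the **x** that minimizes the sum of the squared differences between **Ax** and **b**:

\LARGE \min_{\mathbf{x}} \|\mathbf{Ax}-\mathbf{b}\|^2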

When dealing with noisy data it’s rare to get an exact solution, but there will be one that’s a close approximation. If you try to solve the example system by hand, you will find that there’s no answer that fits all equations. For computer solving it helps to think of the individual equations as constraints on the variables: the solver looks at all the constraints at once and finds the set of unknowns that fits them best. For linear systems this is typically done with least-squares methods (such as the normal equations or a QR decomposition), which compute the best fit directly rather than by trial and error.

By searching for the approximate answer we can find a value for the unknowns that, while not an exact fit, is the best we can get.

\mathbf{x}=[-0.747947 , 0.828064 , -0.027691]

Putting the now known **x** back into the system of equations gives us the difference between the ideal **b** and the **b** that we actually get from our solution for **x**.

2*-0.747947 + 3*0.828064 + 4*-0.027691 = 0.87753

-8*-0.747947 + 0*0.828064 + 4*-0.027691 = 5.8728

6*-0.747947 + 10*0.828064 + 9*-0.027691= 3.5437

2*-0.747947 + 4*0.828064 - 6*-0.027691 = 1.9825

So the **b** we actually reach with this **x** is:

\mathbf{b}=\begin{bmatrix}0.87753\\5.8728\\3.5437\\1.9825\end{bmatrix}
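For the curious, the approximate solution above can be reproduced without any maths package. This is a minimal pure-Python sketch that solves the least-squares problem via the normal equations (AᵀA)x = Aᵀb using Gaussian elimination – fine for a small example like this, though real solvers prefer more numerically robust methods such as QR:

```python
# Least-squares solution of the example system via the normal equations
# (A^T A) x = A^T b, solved with Gaussian elimination. Pure Python, no libraries.
A = [[ 2,  3,  4],
     [-8,  0,  4],
     [ 6, 10,  9],
     [ 2,  4, -6]]
b = [8, 6, 1, 3]

m, n = len(A), len(A[0])

# Build the (n x n) normal matrix A^T A and the right hand side A^T b.
AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]

# Gaussian elimination with partial pivoting on the augmented matrix [AtA | Atb].
M = [row[:] + [rhs] for row, rhs in zip(AtA, Atb)]
for col in range(n):
    pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[pivot] = M[pivot], M[col]
    for r in range(col + 1, n):
        f = M[r][col] / M[col][col]
        for c in range(col, n + 1):
            M[r][c] -= f * M[col][c]

# Back-substitution gives the least-squares x.
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]

print([round(v, 6) for v in x])  # → [-0.747947, 0.828064, -0.027691]

# The b we actually reach with this x (compare with the ideal b above):
approx_b = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
print([round(v, 4) for v in approx_b])  # → [0.8775, 5.8728, 3.5437, 1.9825]
```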

Solving this kind of problem might not seem like much, but it *can* be used for a LOT of very interesting processing – deconvolution, signal enhancement, image processing… Imagine that you could represent the pixels of a blurry image as the **b** vector, express the way you expect they were blurred in the **A** matrix, and get the original image out of **x**. You can actually do that. Systems of linear equations have a LOT of applications in engineering.

All in all, I view the ability to formulate and solve systems of linear equations as an awesome mathematical tool. If you can formulate a problem into a system of linear equations, you are half way there.

If you want to play around with matrices and maths in general, try the awesome Octave – an open source maths tool. Try this script in it; it should give you the same results as were on this page.

```
clc; clear;          # Clear the screen and the workspace

# Declare the A matrix
A = [ 2  3  4
     -8  0  4
      6 10  9
      2  4 -6];

# Declare the b vector and rotate it into a column
b = [8 6 1 3]';

# Do the actual solving and store the result into x
x = A\b;

# Determine the difference between the ideal b and the one we got
Error = (A * x) - b;

# Print the values into the console
A
b
x
Error
```