#005 Linear Algebra – Inverse matrices, Rank, Column space and Null space

Highlights: Hello and welcome back. In this post we will learn about some very important topics in linear algebra. They involve:

Tutorial Overview:

  1. Inverse matrices
  2. Rank
  3. Column space
  4. Null space

1. Inverse matrices

We will look at the following concepts in the light of linear transformations. They will help us gain intuition about how we can solve linear equations and better understand some concepts related to linear transformations in general.

Usually, when you first hear about linear algebra, the first thing that pops into your mind is equations. In particular, solving systems of linear equations.

Image: a system of linear equations in \(x \), \(y \) and \(z \)

Above, we have an example of linear equations. The reason we call them linear is that all the variables that we need to find, \(x \), \(y \) and \(z \), are only multiplied by scalars. So, there are no complex relations like \(\sin(x) \), \(x^{2} \) or \(y^{3} \).

In other words, if you observe the variables \(x, y \) and \(z \), they are only scaled by a scalar (a number) and then summed. We don’t allow any fancy mathematical operations, for instance \(x\cdot y \), when we speak about linear equations.

Image: the variables aligned on the left and the constants on the right

So, the way we usually proceed with solving these equations is that we first take the variables and align them on the left. Then, we take the lingering constants on the right-hand side and collect them into a separate vector.

Image: the coefficients and variables arranged into matrices

One way to link the system of equations with matrices is to write the system of equations shown on the left as a matrix-vector multiplication, \(A\vec{x}=\vec{b} \). It is shown here on the right, where we have both the coefficients and the variables. So, \(\vec{x} \) can be regarded as an input vector, whereas the vector \(\vec{b} \) can be regarded as an output.
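
To make this link concrete, here is a minimal NumPy sketch (the coefficients and the right-hand side are made-up example values) that writes a small \(2\times 2 \) system as \(A\vec{x}=\vec{b} \) and solves it:

```python
import numpy as np

# Coefficient matrix A and right-hand side b (made-up example values)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Solve A @ x = b for the unknown vector x
x = np.linalg.solve(A, b)
print(x)      # [1. 3.]
print(A @ x)  # [ 5. 10.]  -> reproduces b
```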

Image: the system written as a matrix-vector multiplication \(A\vec{x}=\vec{b} \)

Let’s have a look at the following example. Here, we have an example of how a 2-D input vector can be transformed into \(\vec{b} \).

Our goal is to solve our system of equations in 2-D space, where our vector \(\vec{x} \) has the coordinates \(x \) and \(y \). We would like to see how this system can be given the following interpretation: the vector \(\vec{x} \) is mapped into \(\vec{b} \) using a matrix \(A \). In other words, \(\vec{x} \) is our input vector, and it is transformed with the matrix \(A \), so it moves somewhere in the 2-D space. Hence, we obtain our resulting vector \(\vec{b} \). However, in this situation we don’t know what \(\vec{x} \) was; we only know the result. So, we somehow have to go back from \(\vec{b} \) to \(\vec{x} \) in order to reconstruct what \(\vec{x} \) was.

Here, we should recall that two things can happen. One is that the matrix \(A \) squashes the 2-D plane onto a line. Of course, that will happen when we have linearly dependent columns in \(A \).

Image: a transformation that squashes the 2-D plane onto a line

As an alternative, we have an example where our linear transformation only modifies our unit square a little bit. This transformation gives us something that we commonly expect. Hence, to see which of these two cases we are dealing with, we have to examine the determinant of the matrix \(A \).

Image: a regular transformation of the unit square

Moreover, if the determinant of \(A \) is different from zero, that means that we will have a unique solution.
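
A quick numerical way to check which of the two cases we have is to compute the determinant; here is a short sketch with made-up example matrices:

```python
import numpy as np

A_regular  = np.array([[2.0, 1.0],
                       [1.0, 3.0]])
A_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])  # second column = 2 * first column

print(np.linalg.det(A_regular))   # 5.0 -> non-zero, a unique solution exists
print(np.linalg.det(A_singular))  # 0.0 (up to rounding) -> the plane is squashed onto a line
```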

So, imagine that we start with the more intuitive case, where we actually have a regular linear transformation. In that case, our \(\vec{x} \) will be mapped to \(\vec{b} \), and we can go back from \(\vec{b} \) to \(\vec{x} \) in order to find \(\vec{x} \).

So, what does it mean to go backward? We can illustrate this with a simple transformation, for instance a counter clockwise \(90^{\circ} \) rotation. Then, our basis vectors end up rotated in this way.

What should an inverse transformation do? Obviously, it should give us back our original basis vectors \(\hat{i} \) and \(\hat{j} \). That means that we now rotate those vectors back in a clockwise direction by \(90^{\circ} \).

Now the question is: what matrix will achieve this backward mapping? Such a transformation is called an inverse transformation, and the corresponding matrix is the inverse matrix \(A^{-1} \). This is the matrix that gives us back the original basis vectors when we apply it as a transformation. This is illustrated in the following two images.

Images: a counter clockwise \(90^{\circ} \) rotation and the inverse (clockwise) rotation that undoes it

But what is most important is that if we apply \(A \) and then \(A^{-1} \) consecutively as transformations, we should get back to our original basis vectors.
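
For the rotation example, a minimal sketch of this forward-and-back behaviour could look like this (the matrices follow directly from where the basis vectors land):

```python
import numpy as np

# Counter clockwise 90-degree rotation: i-hat -> (0, 1), j-hat -> (-1, 0)
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Its inverse is the clockwise 90-degree rotation
R_inv = np.linalg.inv(R)
print(R_inv)      # [[ 0.  1.]
                  #  [-1.  0.]]

# Applying R and then R_inv returns the basis vectors: the identity matrix
print(R_inv @ R)  # [[1. 0.]
                  #  [0. 1.]]
```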

Also, here we get one interesting concept, and that’s the identity transformation. So, if we combine \(A \) and \(A^{-1} \), with the assumption that the determinant of \(A \) is not equal to zero, we obtain the so-called identity matrix. It is defined in such a way that its columns are the basis vectors \(\hat{i}=\begin{bmatrix}1\\0\end{bmatrix} \) and \(\hat{j}=\begin{bmatrix}0\\1\end{bmatrix} \).

Now, we can try to solve our equation. If we multiply both the left and the right-hand side of our equation with \(A^{-1} \), then the term \(A^{-1}A \) becomes an identity matrix. This leaves the vector \(\vec{x} \) on the left-hand side, while on the right-hand side we have a matrix-vector multiplication with the inverse matrix: \(\vec{x}=A^{-1}\vec{b} \).
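
Continuing the earlier made-up example, the same solution can be obtained through the inverse matrix (in practice np.linalg.solve is preferred numerically, but the inverse makes the idea explicit):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

A_inv = np.linalg.inv(A)
print(A_inv @ A)  # identity matrix (up to floating-point rounding)
print(A_inv @ b)  # [1. 3.] -> the solution vector x
```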

On the other hand, when the determinant of the matrix \(A \) is equal to \(0 \), every vector is squashed onto a line. This means that, for a vector mapped onto this line, we cannot determine where it came from. That is, from the mapped vector in 1-D, we cannot uniquely reconstruct the original vector in the 2-D plane.
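
In that case the inverse simply does not exist, and NumPy reports it explicitly (a short sketch with the same made-up singular matrix as above):

```python
import numpy as np

A_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])  # determinant is 0, columns are dependent

try:
    np.linalg.inv(A_singular)
except np.linalg.LinAlgError as err:
    print("No inverse exists:", err)  # "Singular matrix"
```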

It’s also interesting that in 3-D, if the determinant is zero, then a 3-D vector will be mapped into a 2-D plane (or even onto a 1-D line). In this case, we also cannot uniquely find an inverse transformation that maps vectors from the 2-D plane back into 3-D vectors.

2. Rank

Here are some new definitions that can help us when we work with linear transformations and matrices. In essence, if a linear transformation lands on a line, which is 1-D, we say that we have a rank-1 transformation. If we have a rank-2 transformation, then all of our 3-D vectors will land in a plane. This gives us the intuition that, in these cases, the transformations are not complete: some dimensions are lost.

Images: rank-1 and rank-2 transformations of 3-D space

So, one additional idea is that the rank corresponds to the number of dimensions in the output. So, if we start with a 3-D vector and we always end up on a plane, then our rank will be \(2 \), because the number of dimensions in our output is \(2 \).

In our 2-D coordinate system, if we have a transformation matrix of rank \(2 \), that’s as good as we can get. Basically, our 2-D vector will be mapped into another 2-D vector, and the transformation preserves all the dimensions.

If, on the other hand, we have a 3-D transformation, it will have rank \(3 \) when all \(3 \) dimensions are preserved. When this is the case, the determinant does not equal zero.
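
Numerically, the rank can be computed directly from a matrix; here is a short sketch with made-up example matrices:

```python
import numpy as np

A_full = np.array([[1.0, 0.0, 2.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 1.0, 0.0]])

A_plane = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 1.0, 2.0]])  # third column = first + second

print(np.linalg.matrix_rank(A_full))   # 3 -> full rank, determinant is non-zero
print(np.linalg.matrix_rank(A_plane))  # 2 -> every output lands in a plane
```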

3. Column space

Another thing that we can define is the set of all possible outputs \(A\vec{v} \) over all input vectors \(\vec{v} \); this set is called the “span” of the columns of \(A \). In other words, it tells us where our output vectors can lie. We call it the “Column space” of \(A \).

Another interpretation is that if the two columns (of size 2) of a matrix are linearly dependent, then their span, and therefore the column space, will be just a line. So, we can only get an output vector that lies on this line.
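
A short sketch (with a made-up rank-1 matrix) makes this visible: whatever input we choose, the output always lands on the same line:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second column is 2x the first -> column space is a line

# Transform a few arbitrary input vectors
for x in ([1.0, 0.0], [0.0, 1.0], [3.0, -1.0]):
    b = A @ np.array(x)
    print(b, b[1] / b[0])  # every output satisfies b_y = 2 * b_x
```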

4. Null space

In addition, we have another concept that’s called the Null space of a matrix \(A \). That is the space of all vectors \(\vec{x} \) for which \(A\cdot \vec{x}= \vec{0} \). One vector that will always belong to it is the zero vector. This is due to the fact that linear transformations preserve the origin. So, the \(\begin{bmatrix}0\\0\end{bmatrix} \) vector will always be mapped to \(\begin{bmatrix}0\\0\end{bmatrix} \). This means that it is in the Null space, and it is always a possible solution. However, depending on the rank, we can have some different interpretations about this.

Image: the null space of a full-rank transformation contains only the origin

For a full rank transformation, only \(\begin{bmatrix}0\\0\end{bmatrix} \) will land on \(\begin{bmatrix}0\\0\end{bmatrix} \). Everything else is rotated or scaled away, so only the origin remains as the single point of the null space.

One interesting case is when we have a transformation whose columns are linearly dependent; then our 2-D space will be mapped completely onto a single line.

And then we have a whole set of vectors, the ones that lie on the yellow line in the image, which will all be mapped into the \(\begin{bmatrix}0\\0\end{bmatrix} \) vector.

Image: the null space (kernel), a whole line of vectors mapped to the zero vector

In this case we have a complete set of vectors, so not just one solution: any vector that sits on this line will be mapped into the \(\begin{bmatrix}0\\0\end{bmatrix} \) vector. So, when we do not have a full rank matrix transformation, this whole line collapses into \(\begin{bmatrix}0\\0\end{bmatrix} \), and we get an infinite number of solutions. In short, the null space is the set of all vectors that are mapped into the \(\begin{bmatrix}0\\0\end{bmatrix} \) vector.
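
One common way to compute the null space numerically is through the singular value decomposition; here is a minimal sketch for the same made-up rank-deficient matrix as above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rank 1, so the null space is a whole line

# Rows of Vt whose singular values are (numerically) zero span the null space
_, s, Vt = np.linalg.svd(A)
null_space = Vt[s < 1e-10]
print(null_space)  # one direction, proportional to [2, -1] (the sign may differ)

# Every multiple of this direction is mapped to the zero vector
v = 5.0 * null_space[0]
print(A @ v)       # ~[0. 0.]
```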

Summary

So, to summarize, we showed how we can solve linear equations and how we can use the determinant. It helped us gain insight into where a linear transformation will map a corresponding vector: whether it is a transformation from a 2-D plane into another 2-D plane, or onto a 1-D line. Then, we saw the definition of the rank and how the rank can help us determine the properties of our solution: whether we will have a unique solution or, on the other hand, an infinite number of possible solutions. Also, we defined the two terms “column space” and “null space” that help us identify the spaces where our solutions can lie.

In the next post, we will learn about the dot product of two vectors.