## #006 Linear Algebra – Inner or Dot Product of two Vectors

*Highlight*: In this post we will review one of the fundamental operations in Linear Algebra, known as the dot product or inner product of two vectors. Most of you are already familiar with this operation, and it is quite easy to explain. Still, we will offer some additional insights, along with basic examples of how to use it in Python.


### Dot product :: Definition and properties

First of all, the inner product is defined only for two vectors of the same size.

For instance, take two vectors, written as ordered lists of numbers: \((2, 7, 1) \) and \((8, 2, 8) \). We apply the dot product in such a way that we first multiply these two ordered vectors element-wise: \(2\cdot 8 \), \(7\cdot 2 \), \(1\cdot 8 \).

Then, we sum those products: \(16 + 14 + 8 = 38 \). Note that the result of a dot product is a scalar, not a vector.
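These two steps (element-wise multiplication, then summation) can be sketched in plain Python, using the example vectors above:

```python
# Dot product of two equal-length vectors: multiply element-wise, then sum.
v = [2, 7, 1]
w = [8, 2, 8]

products = [a * b for a, b in zip(v, w)]  # element-wise: [16, 14, 8]
dot = sum(products)                       # 16 + 14 + 8 = 38

print(dot)  # 38 -- a single scalar, not a vector
```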

First, as usual, let’s have a look at vectors in the 2D plane. They are easy to visualize and provide a lot of intuition about what a dot product is. So, we will observe two vectors \(\vec{v} \) and \(\vec{w} \). **A dot product between these two vectors can be interpreted via a projection of vector \(\vec{w} \) onto vector \(\vec{v} \): we multiply the length of the projection of \(\vec{w} \) onto \(\vec{v} \) by the length of \(\vec{v} \).**

You may now ask: what if the projection falls on the other side of \(\vec{v} \), that is, the angle between the vectors is obtuse? This can certainly happen, and in that case the result is negative.

**So, depending on the angle between the two vectors, we can have the following scenarios.**

For instance, if the inner product is positive, then the angle between the two vectors is less than \(90^{\circ} \) (an acute angle).

**If the vectors are perpendicular, then the inner product is zero.** This is an important property! For such vectors, we say that they are orthogonal.

In case that the vectors create an obtuse angle, the inner product will be negative.
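The three scenarios above can be checked numerically; here is a small plain-Python sketch (the example vectors are our own choice):

```python
def dot(v, w):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(v, w))

# Acute angle -> positive inner product
print(dot([1, 0], [1, 1]))   # 1

# Perpendicular (orthogonal) vectors -> zero
print(dot([1, 0], [0, 1]))   # 0

# Obtuse angle -> negative inner product
print(dot([1, 0], [-1, 1]))  # -1
```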

There is one more useful fact about the inner product: it is a commutative operation. Basically, this means that we can instead project \(\vec{v} \) onto \(\vec{w} \), multiply the length of the projected \(\vec{v} \) by the length of \(\vec{w} \), and we will obtain the same result.

**Let’s further explore the commutative property of an inner product.**

If \(\vec{v} \) and \(\vec{w} \) happen to have the same length, we can exploit symmetry: the length of the projection of \(\vec{w} \) onto \(\vec{v} \) is the same as the length of the projection of \(\vec{v} \) onto \(\vec{w} \). It is then clear that the inner product is the same for both calculation approaches.

Furthermore, suppose that one of the vectors, let’s say \(\vec{v} \), is three times longer than \(\vec{w} \). Now the two projections no longer have the same length. However, we can interpret \(\vec{v} \) as a vector of the same length as \(\vec{w} \), scaled by the factor \(3 \).

Let us recall that scaling a vector by a scalar is actually scaling its length.

Therefore, we can view \(\vec{v} \) as a vector of the same length as \(\vec{w} \), multiplied by the scalar \(3 \). Taking the inner product of this scaled vector with \(\vec{w} \) is the same as multiplying \(3 \) by the inner product of the unscaled vector with \(\vec{w} \), and for the unscaled pair the symmetric argument from before applies. This illustrates that the inner product is indeed a commutative operation.
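Both the commutativity and the scaling argument can be verified directly; a minimal plain-Python sketch with vectors of our own choosing:

```python
def dot(v, w):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(v, w))

def scale(c, v):
    """Scale a vector by the scalar c."""
    return [c * a for a in v]

u = [1, 2]
w = [4, 5]

# Commutativity: u . w == w . u
print(dot(u, w), dot(w, u))                    # same value twice

# Scaling one argument scales the result: (3u) . w == 3 (u . w)
print(dot(scale(3, u), w), 3 * dot(u, w))      # same value twice
```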

### Linear functions

Now, we will talk again about linear functions. This time, however, we will observe functions whose input and output dimensions are not the same. For instance, we take a 2D input vector, and a function \(L \) gives us a 1D output.

For a linear transformation, the following property holds:

For instance, a line with evenly spaced dots on it will be mapped onto a 1D line. Note that we are actually **mapping dots with coordinates \((x, y) \) to a single coordinate (e.g. on some line \(z \)). What is important is that the dots on the mapped line remain evenly spaced.** This is the defining property of a linear transformation.

Now, let’s observe what happens if we apply the linear transformation \(\begin{bmatrix}1 & -2\end{bmatrix} \) to the vector \(\vec{w} \): it is mapped to the value \(-2 \). This transformation maps the 2D space onto a 1D space, which is a line. The matrix \(\begin{bmatrix}1 & -2\end{bmatrix} \) records where each basis vector lands: the \(1 \) tells us that \(\hat{i} \) is mapped to \(1 \), and \(\hat{j} \) is mapped to \(-2 \). So, using the idea that any vector can be decomposed into a combination of basis vectors, we obtain the following formula: \(L\left ( \begin{bmatrix}x\\y\end{bmatrix} \right )= 1\cdot x+\left ( -2 \right )\cdot y \).
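A minimal plain-Python sketch of this 2D-to-1D map (the test vectors are our own choice); note that applying a \(1\times 2 \) matrix is exactly a dot product:

```python
# The 1x2 matrix [1, -2] maps a 2-D vector (x, y) to the scalar 1*x + (-2)*y.
L = [1, -2]

def transform(matrix_row, v):
    # Matrix-vector product for a 1xN matrix: exactly a dot product.
    return sum(a * b for a, b in zip(matrix_row, v))

print(transform(L, [1, 0]))  # 1  -- where i-hat lands
print(transform(L, [0, 1]))  # -2 -- where j-hat lands
print(transform(L, [3, 2]))  # -1 -- 1*3 + (-2)*2
```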

All of this can be clarified with an example. You can think of it this way: how will 2D points be projected onto a single line?

We have a vector that goes from \(0 \) to \(\hat{u} \). Also, we have many 2D points. We are interested in where these 2D points (vectors!) will be projected on this single line.

Let’s have a look at where the \(\hat{i} \) vector will land on the line defined by the unit vector \(\hat{u} \). If we use the so-called line of symmetry, we come to the conclusion that \(\hat{i} \) will be projected onto the coordinate \(u_{x} \). This is also the *x*-coordinate of the vector \(\hat{u} \).

The same holds for the vector \(\hat{j} \): it will be projected onto the coordinate \(u_{y} \). So, what will be the projection of an arbitrary vector with two non-zero \((x, y) \) coordinates?

We see that \(u_{x} \) and \(u_{y} \) define our projection matrix: they tell us where the basis vectors \(\hat{i} \) and \(\hat{j} \) will land.

In other words, if we represent our vector using the basis vectors, we get coordinates \((x, y) \). **When we multiply these coordinates by \(u_{x} \) and \(u_{y} \) respectively, and sum the two products, we get the position where our original \((x, y) \) vector lands on the line defined by the vector \(\hat{u} \). This position is its new coordinate in the 1D coordinate system along \(\hat{u} \).**

Once we define this transform, we can treat every vector as a decomposition over the two basis vectors, and therefore obtain the result. We now see that matrix-vector products are dual to the dot product interpretation.
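The projection coordinate \(u_{x}x + u_{y}y \) can be sketched in plain Python; the direction of \(\hat{u} \) below (a \(45^{\circ} \) line) is our own choice for illustration:

```python
import math

# A unit vector u-hat defining the line (hypothetical example direction).
theta = math.pi / 4
u = (math.cos(theta), math.sin(theta))  # (u_x, u_y), length 1

def project_coordinate(u, v):
    # 1-D coordinate of v on the line: u_x * x + u_y * y -- a dot product.
    return u[0] * v[0] + u[1] * v[1]

# The basis vectors land exactly on u_x and u_y, as described above.
print(project_coordinate(u, (1, 0)) == u[0])  # True
print(project_coordinate(u, (0, 1)) == u[1])  # True
```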

### Examples and implementation

A row times a column is fundamental to all matrix multiplications. From two vectors it produces a single number. This number is called the inner product of the two vectors. In other words, the product of a \(1 \) by \(n \) matrix (a row vector) and an \(n\times 1 \) matrix (a column vector) is a scalar.

To start, here are a few simple examples:

\(\vec{v}= \begin{bmatrix}1\\2\end{bmatrix} \), \(\vec{w}= \begin{bmatrix}4\\5\end{bmatrix} \)

The dot product is

$$ \vec{v}\cdot\vec{w} = 1\cdot 4+2\cdot 5= 4+10= 14 $$

Another example shows two vectors whose inner product is \(0 \).

\(\vec{v}= \begin{bmatrix}1\\3\\2\end{bmatrix} \), \(\vec{w}= \begin{bmatrix}4\\-4\\4\end{bmatrix} \)

$$ \vec{v}\cdot\vec{w} = 0 $$

$$ 1\cdot 4+3\cdot \left ( -4 \right )+2\cdot 4= 0 $$

Now, our vectors \(\vec{v} \) and \(\vec{w} \) are of size \(3 \).
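Both worked examples can be reproduced with a few lines of plain Python (NumPy’s `np.dot`, or the `@` operator on arrays, computes the same quantity):

```python
def dot(v, w):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(v, w))

print(dot([1, 2], [4, 5]))         # 14 -- i.e. 1*4 + 2*5
print(dot([1, 3, 2], [4, -4, 4]))  # 0  -- the vectors are orthogonal
```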

On the other hand, we can calculate the length of a single vector using the dot product: we take the dot product of the vector with itself and apply the square root. This length is called the “norm” of a vector.

$$ \vec{v}= \begin{bmatrix}1\\3\\2\end{bmatrix} $$

$$ \vec{v}\cdot\vec{v}= 1+9+4= 14 $$

The length is \(\left \| \vec{v} \right \|= \sqrt{14} \)

This is very useful when we need to normalize our vectors, that is, when we want a vector of length \(1 \). If the length of a vector were \(4 \), it is understandable that we would divide every element by \(4 \). In our case the length is \(\sqrt{14} \), so we divide every element of the vector by \(\sqrt{14} \), and this gives the resulting vector:

$$ \vec{u}= \frac{\vec{v}}{\left \| \vec{v} \right \|}= \frac{1}{\sqrt{14}}\begin{bmatrix}1\\3\\2\end{bmatrix} $$

$$ \left \| \vec{u} \right \|= 1 $$

$$ \frac{1}{14}+\frac{9}{14}+\frac{4}{14} = 1 $$
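The norm and the normalization step above can be sketched in plain Python:

```python
import math

def dot(v, w):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    # Length of v: the square root of v . v
    return math.sqrt(dot(v, v))

v = [1, 3, 2]
print(norm(v))                  # sqrt(14), approximately 3.742

u = [a / norm(v) for a in v]    # normalize: divide every element by the length
print(norm(u))                  # 1.0 (up to floating-point error)
```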

We can also obtain the angle between two vectors with this approach. We take the dot product between vectors \(\vec{v} \) and \(\vec{w} \), and if we normalize this product by the lengths of the two vectors, we obtain the cosine of the angle between \(\vec{v} \) and \(\vec{w} \).

$$ \cos \theta = \frac{\vec{v}\cdot \vec{w}}{\left \| \vec{v} \right \|\left \| \vec{w} \right \|} $$

For instance, the angle between the vectors \(\begin{bmatrix}1\\0\end{bmatrix} \) and \(\begin{bmatrix}1\\1\end{bmatrix} \) is:

$$ \cos \theta = \frac{1}{\left ( 1 \right )\left ( \sqrt{2} \right )} $$

$$ \theta = 45^{\circ} $$
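This computation translates directly into plain Python:

```python
import math

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

def angle_deg(v, w):
    # cos(theta) = (v . w) / (|v| |w|), then invert the cosine
    cos_theta = dot(v, w) / (norm(v) * norm(w))
    return math.degrees(math.acos(cos_theta))

print(angle_deg([1, 0], [1, 1]))  # approximately 45.0
```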

This equation also reflects the fact that all absolute *cosine* values are at most one. So we obtain the following inequality (the Cauchy–Schwarz inequality): the absolute value of the dot product is always less than or equal to the length of \(\vec{v} \) multiplied by the length of \(\vec{w} \).

$$ \left | \cos \theta \right |\leq 1 $$

$$ \left | \vec{v}\cdot \vec{w} \right |\leq \left \| \vec{v} \right \|\left \| \vec{w} \right \| $$
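We can spot-check this inequality on randomly generated vectors; a plain-Python sketch (the sample size and value range are our own choices, with a tiny tolerance for floating-point rounding):

```python
import math
import random

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

# Check |v . w| <= |v| |w| on 1000 random 3-D vector pairs.
random.seed(0)
all_hold = all(
    abs(dot(v, w)) <= norm(v) * norm(w) + 1e-9
    for v, w in (
        ([random.uniform(-10, 10) for _ in range(3)],
         [random.uniform(-10, 10) for _ in range(3)])
        for _ in range(1000)
    )
)
print(all_hold)  # True
```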

In 2D, one way to define a dot product is to write it component-wise:

$$ \vec{v}\cdot \vec{w}= v_{1}w_{1}+ v_{2}w_{2} $$

In addition, the definition of a unit vector is very interesting. For instance, we can normalize the vector \(\vec{v} \) with coordinates \(\left (1,1 \right ) \) by dividing \(\vec{v} \) by its length \(\sqrt{2} \). This gives the unit vector \(\left ( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right ) \), which has length \(1 \).

On the other hand, we can see that vectors of length \(1 \) that start at the origin of the coordinate system trace out the unit circle, so every unit vector must lie on this circle. Besides having length one, a unit vector \(\hat{u} \) has another interesting property: its dot product with the basis vector \(\hat{i} \) is its projection onto the \(x \) axis. This is a well-known result from trigonometry: we obtain \(\cos \theta \) on the \(x \) axis and \(\sin \theta \) on the \(y \) axis. Consequently, we can write the coordinates of any unit vector as \(\cos \theta \) for the \(x \) component and \(\sin \theta \) for the \(y \) component. In this way, we can relate the dot product to the angle between two vectors.
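A short plain-Python sketch of this fact (the \(30^{\circ} \) angle below is our own choice):

```python
import math

# Any unit vector can be written as (cos(theta), sin(theta)).
theta = math.radians(30)
u = (math.cos(theta), math.sin(theta))

print(math.hypot(*u))  # approximately 1.0 -- u lies on the unit circle

# The x component (the dot product with i-hat) is cos(theta).
print(u[0] == math.cos(theta))  # True
```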

### Summary

Wow! Indeed, we covered a lot of ideas and concepts related to the inner or dot product of two vectors. We can see how important linear algebra is, and we will meet similar applications often. For instance, we will normalize our data using the ideas of a dot product, to name just one out of a vast number of dot product applications. *Sit tight, and in the next post we continue our journey and talk more about linear transformations*.