When we program Logistic Regression or Neural Networks we should avoid explicit \(for \) loops. It is not always possible, but whenever we can, we should use built-in functions or find some other way to compute the result. Vectorizing the implementation of Logistic Regression makes the code highly efficient. In this post we will see how we can use this technique to compute gradient descent without using a single \(for \) loop.
Now, we will examine the forward propagation step of logistic regression. If we have \(m\) training examples, to make a prediction on the first example we need to compute \(z \) and the activation \(a\) as follows:
\(z^{(1)}= \omega^T x^{(1)} + b \)
\(a^{(1)} = \sigma(z^{(1)}) \)
To make a prediction on the second training example we need to compute:
\(z^{(2)}= \omega^T x^{(2)} + b \)
\(a^{(2)} = \sigma(z^{(2)}) \)
The same holds for the prediction on the third training example:
\(z^{(3)}= \omega^T x^{(3)} + b \)
\(a^{(3)} = \sigma(z^{(3)}) \)
So if we have \(m\) training examples, we need to repeat these calculations \(m\) times. However, there is a way to carry out the forward propagation step, that is, to compute these predictions for all \(m\) training examples, without an explicit \(for \) loop.
We will stack all training examples horizontally in a matrix \(\textbf{X}\), so that every column in matrix \(\textbf{X} \) represents one training example:
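With \(n_{x}\) features per example, this stacking looks like:

$$ \textbf{X} = \begin{bmatrix} \vert & \vert & & \vert \\ x^{(1)} & x^{(2)} & \cdots & x^{(m)} \\ \vert & \vert & & \vert \end{bmatrix} \in \mathbb{R}^{n_{x} \times m} $$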
Notice that \(\omega \) is an \(n_{x} \times 1\) matrix (or a column vector), so its transpose \(\omega^T \) is a \(1 \times n_{x}\) matrix (or a row vector). Multiplying \( \omega^T \) with \(\textbf{X} \) we get a \(1 \times m\) matrix. Then we add the bias \(b \), expanded to a \(1 \times m\) row vector, to obtain \(\textbf{Z}\).
We will define matrix \(\textbf{Z} \) by placing all \(z^{(i)} \) values in a row vector:
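Written out, this is:

$$ \textbf{Z} = \begin{bmatrix} z^{(1)} & z^{(2)} & \cdots & z^{(m)} \end{bmatrix} = \omega^T \textbf{X} + \begin{bmatrix} b & b & \cdots & b \end{bmatrix} $$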
In Python, we can easily implement the calculation of the matrix \(\textbf{Z} \):
$$ \textbf{Z} = np.dot(\omega^T, \textbf{X}) + b $$
As we can see, \(b \) is defined as a scalar. When we add the \(1 \times m\) matrix to this real number, Python automatically takes the real number \(b \) and expands it out to a \(1 \times m\) row vector. This operation is called broadcasting, and we will see more about it at the end of this post.
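As a small illustration of broadcasting (a toy example, not tied to our data):

```python
import numpy as np

Z = np.array([[1.0, 2.0, 3.0]])   # a 1 x 3 row vector
b = 0.5                           # a scalar
print(Z + b)                      # b is broadcast to [[0.5, 0.5, 0.5]], so this prints [[1.5 2.5 3.5]]
```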
Matrix \(\textbf{A} \) is also a \(1 \times m\) matrix, which we again obtain by stacking the values \(a^{(i)}\) horizontally, as we did with matrix \(\textbf{Z} \):
In Python, we can also calculate matrix \(\textbf{A} \) with one line of code as follows (if we have defined the sigmoid function as above):
\(\textbf{A} = sigmoid(\textbf{Z}) \)
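Putting the two lines together, here is a minimal runnable sketch of the vectorized forward pass (the toy data and dimensions are hypothetical, chosen only for illustration):

```python
import numpy as np

def sigmoid(z):
    # element-wise sigmoid activation
    return 1 / (1 + np.exp(-z))

# Toy dimensions (hypothetical): n_x = 2 features, m = 3 training examples
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # shape (n_x, m), one example per column
w = np.array([[0.1], [0.2]])         # shape (n_x, 1)
b = 0.5                              # scalar, broadcast by NumPy

Z = np.dot(w.T, X) + b               # shape (1, m)
A = sigmoid(Z)                       # shape (1, m)
print(A)
```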
Vectorization of Logistic Regression
In the previous post we saw that for the gradient computation we had to compute the derivative \(dz \) for every training example:
\(dz^{(1)} = a^{(1)} - y^{(1)} \)
\(dz^{(2)} = a^{(2)} - y^{(2)} \)
\(\vdots \)
\(dz^{(m)} = a^{(m)} - y^{(m)} \)
In the same way as we defined the previous variables, we will now define the matrix \(\textbf{dZ} \) by stacking all the \(dz^{(i)} \) values horizontally. The dimension of \(\textbf{dZ} \) is \(1\times m\), or alternatively it is an \(m \)-dimensional row vector.
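If we also collect the labels \(y^{(i)}\) into a \(1 \times m\) row vector \(\textbf{Y}\) (notation we introduce here for convenience), all of these computations become a single matrix subtraction:

$$ \textbf{dZ} = \textbf{A} - \textbf{Y} = \begin{bmatrix} a^{(1)}-y^{(1)} & a^{(2)}-y^{(2)} & \cdots & a^{(m)}-y^{(m)} \end{bmatrix} $$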
After leaving the \(for \) loop in the non-vectorized code (sketched below), we divide \(J\), \(\mathrm{d} w_{1}\), \(\mathrm{d} w_{2}\) and \(\mathrm{d} b\) by \(m\), because we computed their averages over the training set.
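The non-vectorized gradient computation might look like the following sketch, written for the two-feature case with hypothetical toy data; the lines marked \((*) \) and \((**) \) are the ones we refer to next:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy data (hypothetical): m = 3 examples with two features x1, x2 and labels y
x1 = np.array([1.0, 2.0, 3.0]); x2 = np.array([4.0, 5.0, 6.0])
y  = np.array([0, 1, 1]); m = len(y)
w1, w2, b = 0.1, 0.2, 0.5            # current parameters

J = 0; dw1 = 0; dw2 = 0; db = 0
for i in range(m):                   # explicit loop over the training examples
    z_i = w1 * x1[i] + w2 * x2[i] + b
    a_i = sigmoid(z_i)
    J  += -(y[i] * np.log(a_i) + (1 - y[i]) * np.log(1 - a_i))
    dz_i = a_i - y[i]
    dw1 += x1[i] * dz_i              # (*)
    dw2 += x2[i] * dz_i              # (**)
    db  += dz_i
# averages over the training set
J /= m; dw1 /= m; dw2 /= m; db /= m
```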
This code was non-vectorized and highly inefficient, so we need to transform it. First, using vectorization, we can combine equations \((*) \) and \((**) \) into one equation:
\(dw += x^{(i)}dz^{(i)} \)
Remember that in this case we have two features, \( x_1 \) and \(x_2 \). If we had more features, say \(n\) of them, we would need another \(for \) loop to calculate \( dw_{1} \) … \(dw_{n} \).
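Stacking these accumulations over all \(m\) examples, the gradients can be computed in vectorized form as:

$$ \mathrm{d}\omega = \frac{1}{m}\,\textbf{X}\,\textbf{dZ}^{T}, \qquad \mathrm{d}b = \frac{1}{m}\sum_{i=1}^{m} dz^{(i)} $$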
The cost function is: $$ J = -\frac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)}) \right] $$
We no longer need to loop through the entire training set, but we still need to loop over the number of gradient descent iterations, and that is a \(for \) loop we cannot get rid of.
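Putting it all together, one training run can be written with only that loop over iterations; here is a minimal sketch using the same hypothetical toy data as above (variable names and the learning rate are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical toy data: n_x = 2 features, m = 3 examples
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])            # shape (n_x, m)
Y = np.array([[0, 1, 1]])                  # shape (1, m)
n_x, m = X.shape
w = np.zeros((n_x, 1)); b = 0.0
alpha = 0.1                                # learning rate
num_iterations = 1000                      # the one for loop we cannot avoid

for _ in range(num_iterations):
    Z  = np.dot(w.T, X) + b                # forward propagation, shape (1, m)
    A  = sigmoid(Z)
    J  = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m   # cost, for monitoring
    dZ = A - Y                             # shape (1, m)
    dw = np.dot(X, dZ.T) / m               # shape (n_x, 1)
    db = np.sum(dZ) / m
    w  = w - alpha * dw                    # gradient descent update
    b  = b - alpha * db
```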
This post completes our discussion of logistic regression, which can be seen as a one-neuron neural network. Next, let's see the why, what and how of neural networks!