#003C Gradient Descent in Python

We will first import the libraries we need: NumPy, Matplotlib's pyplot module, and a derivative function.
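A minimal sketch of these imports, assuming the derivative helper comes from scipy.misc (it is only available in older SciPy versions; newer ones need a custom finite-difference helper):

```python
import numpy as np
import matplotlib.pyplot as plt
# numerical derivative helper; available in SciPy < 1.12
from scipy.misc import derivative
```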

Then, with the NumPy function linspace(), we define the domain of our variable \(w \): 100 points between 1.0 and 5.0. We also define alpha, which represents the learning rate. Next, we define our \(y \) (in our case \(J(w) \)) and plot it to see a convex function; we will use \(J(w) = (w-3)^2 \).
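One way this setup could look (the value of alpha and the variable name w_range are assumptions for illustration):

```python
w_range = np.linspace(1.0, 5.0, 100)  # domain of w: 100 points between 1.0 and 5.0
alpha = 0.1                           # learning rate (assumed value)

y = (w_range - 3) ** 2                # J(w) = (w - 3)^2, a convex function

plt.plot(w_range, y)
plt.xlabel('w')
plt.ylabel('J(w)')
plt.show()
```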

As we can see, the plotted function is convex, with its minimum at \(w = 3 \).

Now we will use our Gradient Descent algorithm.

First, we define the function that we will use in the algorithm.
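A sketch of the cost function from the definitions above:

```python
def J(w):
    """Cost function J(w) = (w - 3)^2."""
    return (w - 3) ** 2
```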

Then, we will define \(dw \) as the derivative of our cost function \(J \) and use it, together with the learning rate \( \alpha \), to update \(w \): \( w := w - \alpha \, dw \).
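One possible implementation of this update, assuming the scipy.misc derivative helper imported earlier (the function name gradient_step is illustrative, not from the original):

```python
def gradient_step(w, alpha):
    # dw: numerical derivative of J at the current w
    dw = derivative(J, w, dx=1e-6)
    # step against the gradient, scaled by the learning rate
    return w - alpha * dw
```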

Finally, we will set our starting \(w \) position, \( w\_list \) and \( fw\_list \) (where we will append the new value of \( w \) and of the cost function after every iteration of the for loop), and \( N \), the number of iterations. In the for loop we update \( w \) by subtracting the derivative of our squared function, scaled by the learning rate. As the loop progresses, the derivative gets smaller and smaller. At the end we print our value of \( w \), which will be very close to the optimal value, as shown in the sketch below.
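A sketch of the loop under the assumptions above (the starting point w = 5.0 and N = 50 are illustrative choices):

```python
w = 5.0               # starting position (assumed)
w_list = [w]
fw_list = [J(w)]
N = 50                # number of iterations (assumed)

for i in range(N):
    w = gradient_step(w, alpha)
    w_list.append(w)
    fw_list.append(J(w))

print(w)  # very close to the optimal value w = 3
```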

At the end, we will just plot our \(J \) function together with the global optimum point to see the result of our Gradient Descent algorithm.
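A sketch of that final plot, reusing the names from the snippets above:

```python
plt.plot(w_range, J(w_range), label='J(w)')
plt.plot(w, J(w), 'ro', label='optimum found')  # point reached by gradient descent
plt.xlabel('w')
plt.ylabel('J(w)')
plt.legend()
plt.show()
```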

In the next post we will learn why we need a computation graph.

