#003 D TF Gradient Descent in TensorFlow

In this post we will see how to implement Gradient Descent using TensorFlow.
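Throughout the post, the snippets assume TensorFlow 1.x (where `tf.Session` and placeholders are available). A minimal sketch of the assumed setup:

```python
# Assumed setup for all of the snippets below (TensorFlow 1.x API).
import numpy as np
import tensorflow as tf
```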

Next, we will define our variable \(\omega \) and initialize it with \(-3 \). With the following piece of code we will also define our cost function \(J(\omega) = (\omega - 3)^2 \).
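A minimal sketch of how these two lines might look; the variable name `w` and the use of `tf.square()` are our assumptions, not necessarily the original code:

```python
# Define the variable w (initialized to -3) and the cost J(w) = (w - 3)^2.
w = tf.Variable(-3.0, dtype=tf.float32)
cost = tf.square(w - 3.0)
```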

With the next two lines of code, we specify the initialization of our variables (here we have just one variable, \(\omega \)) and gradient descent for minimizing our cost function with a learning rate of \(0.01 \).
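Assuming the standard TensorFlow 1.x API, these two lines could look like this (the names `init` and `train` follow the text and common convention):

```python
# Prepare the variable initializer and a gradient descent step
# that minimizes the cost with a learning rate of 0.01.
init = tf.global_variables_initializer()
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
```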

Then we will define a session as sess and run init, which initializes the variable \(\omega \). After running init = tf.global_variables_initializer() in a session, our variables will hold the values we told them to hold when we declared them.
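A sketch of this step, using the session name `sess` from the text:

```python
# Create a session and run the initializer so w receives its starting value.
sess = tf.Session()
sess.run(init)
```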

Here we will print our variable \(\omega \). To do that, we need to run the session sess once again. We can see that we got \(-3.0 \), as we specified previously.
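For example:

```python
# Evaluate w in the session; this prints -3.0.
print(sess.run(w))
```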

Then we will run a single step of gradient descent and print the result. We can see that the value of \(\omega \) moved a little bit towards the minimum of our cost function. We will also close this session here.
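One way this step might look; note that a single update computes \(\omega - 0.01 \cdot 2(\omega - 3) \), which gives \(-2.88 \) starting from \(-3 \):

```python
# Perform a single gradient descent step, inspect w, and close the session.
sess.run(train)
print(sess.run(w))  # approx. -2.88, slightly closer to the minimum at w = 3
sess.close()
```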

We can see that, to reach the minimum of the cost function, we need to repeat the previous steps multiple times. This time we will run the tf.Session() within this cell. After every iteration of gradient descent, we will save the value of \(\omega \) in a list.
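A sketch of such a loop; the list name `w_history` and the iteration count of 1000 are our assumptions:

```python
# Repeat gradient descent many times, saving w after every iteration.
w_history = []
with tf.Session() as sess:
    sess.run(init)
    for _ in range(1000):
        sess.run(train)
        w_history.append(sess.run(w))
```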

Next, we will calculate the values of the cost function for the values of \(\omega \) at which we calculated the gradients.

With the following two lines of code, we will define our cost function in TensorFlow. First, we will define a placeholder, xx_placeholder, into which we will feed values of \(\omega \), and then we will define an operation to calculate the cost function.
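These two lines might look as follows; `xx_placeholder` is the name from the text, while `cost_operation` is a hypothetical name for the second line:

```python
# A placeholder for a (1, 500) row of w values and the matching cost op.
xx_placeholder = tf.placeholder(tf.float32, shape=(1, 500))
cost_operation = tf.square(xx_placeholder - 3.0)
```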

Then we will run a session to calculate the cost function. Notice that we had to reshape lin_space because we specified the dimension of the placeholder xx_placeholder to be (1, 500). Had we not done this, we would have gotten an error.
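A sketch of this step; the interval \([-5, 10] \) for lin_space is our choice for illustration, picked so that the range surrounds the minimum at \(\omega = 3 \):

```python
# Reshape lin_space to (1, 500) to match the placeholder's shape,
# then evaluate the cost for all 500 w values in a single run.
lin_space = np.linspace(-5.0, 10.0, 500).reshape(1, 500)
with tf.Session() as sess:
    cost_values = sess.run(cost_operation,
                           feed_dict={xx_placeholder: lin_space})
```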

We can see that we get two NumPy arrays of shape (1, 500).

Of course, we could easily compute the cost function over the range of \(\omega \) values directly in NumPy, but the purpose of the code above was to show how we can do it using TensorFlow.
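For comparison, the plain NumPy version is a one-liner:

```python
# The same cost curve computed directly in NumPy.
cost_values_np = (lin_space - 3.0) ** 2
```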

Now, we will plot the cost function \(J(\omega) \) and we will also see how we got closer to the minimum of the cost function with each iteration.
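A sketch of such a plot with matplotlib; the styling and labels are our choices:

```python
import matplotlib.pyplot as plt

# Plot the cost curve and overlay the path that gradient descent took.
plt.plot(lin_space.flatten(), cost_values.flatten(), label='J(w)')
plt.plot(w_history, [(w_i - 3.0) ** 2 for w_i in w_history],
         'ro', label='gradient descent steps')
plt.xlabel('w')
plt.ylabel('J(w)')
plt.legend()
plt.show()
```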

In the next post we will see how to implement a Logistic Regression in TensorFlow.
