#003 TF 2.0 Eager Execution- A Pythonic way of using TensorFlow

TensorFlow uses eager execution, a more convenient and more “Pythonic” way to execute code. It is the default mode in the latest version, TensorFlow 2.0.

In TensorFlow 1.x, we first write a Python program that constructs a graph for our computation; the program then invokes Session.run(), which hands the graph off to the C++ runtime for execution. This style is called declarative programming: the specification of the computation is separated from its execution. Sessions are the mechanism that executes these compositions.
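As a minimal sketch of that workflow (written with the tf.compat.v1 API so it still runs under TensorFlow 2.x), the graph is built first and only produces values once a Session executes it:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # switch to TF 1.x-style graph mode

a = tf.constant(2)
b = tf.constant(3)
c = a + b                                # only adds an "add" node to the graph

print(c)                                 # a symbolic Tensor, no value yet

with tf.compat.v1.Session() as sess:
    print(sess.run(c))                   # the runtime executes the graph -> 5
```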

With eager execution, Python itself executes the compositions. We don’t need to build a graph first and then run it in a session; operations are executed immediately, without a Session object.

Lazy loading – the node objects are not created and initialized until they are needed.

Lazy loading and eager execution are not entirely mutually exclusive. Even with eager execution enabled, we can still take the lazy-loading route and build a graph before executing it in a session. We can also save a model generated with eager execution and later load it into a graph.

In eager mode, TensorFlow operations evaluate immediately and return their values to Python. This means we can inspect tensor values at each line without running them in a “Session”.
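For instance, a simple computation in eager mode returns concrete values right away (a minimal sketch assuming TensorFlow 2.x with its default eager mode):

```python
import tensorflow as tf   # TF 2.x: eager execution is enabled by default

x = tf.constant([[1., 2.],
                 [3., 4.]])
y = tf.matmul(x, x)       # evaluated immediately, no graph or Session needed

print(y)                  # tf.Tensor([[ 7. 10.] [15. 22.]], shape=(2, 2), dtype=float32)
print(y.numpy())          # the concrete values as a NumPy array
```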

So, we have seen how to inspect tensor values at each line without running a session. Now, let’s see how NumPy and TensorFlow interact.

NumPy interoperates smoothly with eager mode: inputs passed to TensorFlow operations are converted to tf.Tensor objects, and NumPy operations accept tf.Tensor arguments. Let’s apply this in code.
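A short sketch of this interoperability (assuming TensorFlow 2.x and NumPy are installed):

```python
import numpy as np
import tensorflow as tf

a = tf.constant([[1, 2],
                 [3, 4]])

# A NumPy array passed to a TensorFlow op is converted to a tf.Tensor
b = tf.add(a, np.ones((2, 2), dtype=np.int32))
print(b)                  # tf.Tensor([[2 3] [4 5]], shape=(2, 2), dtype=int32)

# A tf.Tensor passed to a NumPy op is treated as a NumPy array
c = np.multiply(a, 2)
print(c)                  # [[2 4] [6 8]]

print(a.numpy())          # explicit conversion back to a NumPy array
```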

A placeholder is a variable to which data can be assigned at a later date. The main idea of using placeholders is to create operations and build computation graphs without needing the data. It is worth mentioning that placeholders are not supported in TensorFlow 2.x eager mode. If we try to create an instance of tf.placeholder, we will get an error, as in the following example. The only way to create them is by running TensorFlow 1.x-style code. Let’s see an example.
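A sketch of both cases, using tf.compat.v1.placeholder (the TensorFlow 2.x compatibility alias, since tf.placeholder no longer exists at the top level): first the error raised under eager execution, then the classic feed_dict pattern after switching to graph mode:

```python
import tensorflow as tf

# Under eager execution this raises a RuntimeError
# ("tf.placeholder() is not compatible with eager execution.")
try:
    x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
except RuntimeError as e:
    print(e)

# Placeholders only work in TF 1.x-style graph mode
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
y = x * 2

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))   # [[2. 4. 6.]]
```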

The main advantage of a computational graph is that it can be used to compute derivatives automatically. How gradients are handled is a central component of any deep learning API. In eager mode, we use tf.GradientTape instead of tf.gradients.
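A minimal example with tf.GradientTape (assuming TensorFlow 2.x):

```python
import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2                     # operations on x are recorded on the tape

dy_dx = tape.gradient(y, x)        # dy/dx = 2x = 6.0
print(dy_dx)                       # tf.Tensor(6.0, shape=(), dtype=float32)
```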

To conclude, eager execution provides a NumPy-like way of doing numerical computation. It is a flexible platform for machine learning research and experimentation.

In the next post we will talk about TensorFlow high-level APIs.

 
