#004 TF 2.0 TensorFlow Wrappers
Highlights: In this post we are going to talk more about TensorFlow wrappers, comparing how things looked before and after TensorFlow 2.0. This post is the introduction to a series of posts in which we are going to build a wide variety of neural networks.
To use TensorFlow in our projects, we need to learn how to program using the TensorFlow API. TensorFlow has multiple APIs that can be used to interact with the library. The TF APIs or libraries are divided into two levels:
- Low-level library: The lower-level library, also known as TensorFlow Core, provides very fine-grained functionality, thereby offering complete control over how the library is used and implemented in the models.
- High-level libraries: These libraries provide high-level functionality and are comparatively easier to learn and to implement models with. Some of these libraries include TF Estimators, TFLearn, TF-Slim, Sonnet, and Keras.
Let’s first start with the older APIs. Note that TFLearn and TF-Slim are no longer supported in TensorFlow 2.0, so in order to use them, you must install an older version of TensorFlow.
TFLearn: Deep learning library featuring a higher-level API for TensorFlow
TFLearn is a modular and transparent deep learning library built on top of TensorFlow. It was designed to provide a higher-level API to TensorFlow in order to facilitate and speed up experimentation while remaining fully transparent and compatible with it.
TFLearn features include:
- Easy-to-use and understandable high-level API for implementing deep neural networks, with tutorials and examples.
- Fast prototyping through highly modular built-in neural network layers, regularizers, optimizers, metrics.
- Full transparency over TensorFlow. All functions are built over tensors and can be used independently of TFLearn.
- Powerful helper functions to train any TensorFlow graph, with the support of multiple inputs, outputs, and optimizers.
- Easy and beautiful graph visualization, with details about weights, gradients, activations and more…
- Effortless device placement for using multiple CPUs/GPUs.
TF-Slim is a library that makes defining, training and evaluating neural networks simple
Several widely used computer vision models (e.g., VGG, AlexNet) have been developed in TF-Slim and are available to users. By contrast, one big drawback of TFLearn is its lack of easily integrated pre-trained models.
TF-Slim comes as part of the TensorFlow 1.x installation, in the package tf.contrib.slim.
The simple workflow in TF Slim is as follows:
- Create the model using slim layers.
- Provide the input to the layers to instantiate the model.
- Use the logits and labels to define the loss.
- Get the total loss using the convenience function get_total_loss().
- Create an optimizer.
- Create a training operation using the convenience function slim.learning.create_train_op(), the total_loss, and the optimizer.
- Run the training using the convenience function slim.learning.train() and the training operation defined in the previous step.
The TF Estimator API has made significant enhancements over the original TF Learn package.
We can either use the pre-made Estimators or write our own Estimators. All Estimators, whether pre-made or custom, are classes based on the tf.estimator.Estimator class.
The TF Estimator interface design is inspired by the popular machine learning library scikit-learn: we first create an estimator object, which then provides four main methods on any kind of estimator: train() (called fit() in the original TF Learn), evaluate(), predict(), and a method for exporting the trained model.
The heart of every Estimator is its model function, a method that builds the graphs for training, evaluation, and prediction. If we use a pre-made Estimator, this function is already implemented for us; if we rely on a custom Estimator, we must write the model function ourselves.
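As a small illustration, here is a pre-made Estimator where the model function comes for free. This is a sketch: the feature name 'x', its shape, and the layer sizes are illustrative assumptions.

```python
import tensorflow as tf

# Feature columns describe how raw input maps to the model's input tensor.
feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]

# DNNClassifier is a pre-made Estimator; its model function is built in.
estimator = tf.estimator.DNNClassifier(
    hidden_units=[16, 8],            # two hidden layers (illustrative sizes)
    feature_columns=feature_columns,
    n_classes=3)

# Training and evaluation would then use input functions, e.g.:
# estimator.train(input_fn=train_input_fn, steps=100)
# estimator.evaluate(input_fn=eval_input_fn)
```

With a custom Estimator, we would instead pass our own model_fn to tf.estimator.Estimator directly.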
There are some other TensorFlow high-level APIs, such as TF PrettyTensor and TF Sonnet, but the most popular one is Keras. So now, we are going to completely switch to TensorFlow 2.0 and Keras in this and in the following posts.
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. From TensorFlow 2.0 onward, Keras is included in the main TensorFlow library as tf.keras. It lets us build and train models very quickly, and it also supports eager execution.
The key advantages of using tf.keras are:
- User-friendly – it provides a very intuitive interface for building and training neural networks.
- Modular and composable – Keras models are made by connecting configurable building blocks together.
- Easy to extend – we can easily create new layers, metrics, and loss functions.
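A small tf.keras sketch shows these advantages in practice: the same kind of two-layer classifier as above, assembled from configurable building blocks. The input size of 784 and the layer sizes are illustrative assumptions.

```python
import tensorflow as tf

# Building a model is just composing layers into a Sequential container.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),                  # flattened 28x28 input
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),      # 10-class output
])

# compile() attaches the optimizer, loss, and metrics in one call.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.summary()  # prints layer shapes and parameter counts
# Training would then be a single call, e.g.:
# model.fit(x_train, y_train, epochs=10, batch_size=32)
```

Compared with the TF 1.x wrappers above, there are no placeholders, sessions, or graph-building helpers: tf.keras handles those details, which is what makes it the natural API for TensorFlow 2.0.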
To sum it up, in this post we talked about a few high-level APIs that let us code and work faster. In the next post, we are going to show how to start building neural networks in TensorFlow with tf.keras.