# Implementing a VGG-19 Network in TensorFlow 2.0


Highlights: In this post we will show how to implement a fundamental convolutional neural network, $$VGG-19$$, in TensorFlow. The VGG-19 architecture was designed by the Visual Geometry Group at the Department of Engineering Science, University of Oxford. It competed in the ImageNet Large Scale Visual Recognition Challenge in 2014.

Tutorial Overview:

### 1. Theory recapitulation

With ConvNets becoming more popular in the computer vision field, a number of attempts have been made to improve upon the original AlexNet architecture. One important aspect of ConvNet architecture design is its depth.

The remarkable thing about $$VGG-19$$ is that, instead of having many hyperparameters, it is a much simpler network. Its $$conv$$ layers use only $$3\times3$$ filters with a stride of $$1$$ and "same" padding. All $$Max\enspace pooling$$ layers use $$2 \times 2$$ filters with a stride of $$2$$.
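To make these hyperparameters concrete, here is a minimal sketch (assuming TensorFlow 2.x) of one VGG-style convolution and pooling step; the filter count of $$64$$ matches the first block of the network, and the dummy input shape is a standard $$224 \times 224$$ RGB image:

```python
import tensorflow as tf

# A single VGG-style step: a 3x3 convolution with stride 1 and "same"
# padding (spatial size preserved), followed by 2x2 max pooling with
# stride 2 (spatial size halved).
conv = tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3),
                              strides=(1, 1), padding="same",
                              activation="relu")
pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2))

x = tf.random.normal((1, 224, 224, 3))  # a dummy batch of one RGB image
x = conv(x)  # "same" padding keeps the spatial size: (1, 224, 224, 64)
x = pool(x)  # pooling halves the spatial size:       (1, 112, 112, 64)
```

Because the convolutions never shrink the feature maps, only the pooling layers reduce spatial resolution, which keeps the architecture easy to reason about.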

For classification, three fully connected layers are used: two with $$4096$$ neurons each, and the last one with $$1000$$ neurons.

All layers except the last one use the $$ReLU$$ activation function, while the last one uses $$Softmax$$ to produce a probability distribution over the classes.

$$VGG-19$$ is trained on more than a million images from the ImageNet database. The network is 19 layers deep and can classify images into 1000 object categories.
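The whole stack described above can be sketched with the Keras Sequential API. This is an illustrative reconstruction from the layer counts given in the text (16 conv layers in five blocks of 2, 2, 4, 4, and 4, plus three fully connected layers, for 19 weight layers in total), not the pre-trained model itself; the helper name `build_vgg19` is our own:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg19(input_shape=(224, 224, 3), num_classes=1000):
    """Sketch of the VGG-19 layer stack described above."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Five blocks: (number of 3x3 conv layers, number of filters),
    # each block followed by 2x2 max pooling with stride 2.
    for n_convs, filters in [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, (3, 3), strides=(1, 1),
                                    padding="same", activation="relu"))
        model.add(layers.MaxPool2D((2, 2), strides=(2, 2)))
    # Classifier: two 4096-unit ReLU layers, then a 1000-way Softmax.
    model.add(layers.Flatten())
    model.add(layers.Dense(4096, activation="relu"))
    model.add(layers.Dense(4096, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_vgg19()
```

Calling `model.summary()` on this sketch shows the 16 convolutional and 3 dense weight layers that give the network its name.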

Let’s see in detail what this architecture looks like.

### 2. Implementation in TensorFlow

The interactive Colab notebook can be found at the following link.

Training a network with roughly $$140$$ million parameters would take too long, so here we will just load weights from a pre-trained model. This is done using the load_weights() function. The weights can be found here.
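As a sketch of this step: TensorFlow ships VGG-19 through `tf.keras.applications.VGG19`, which can build the architecture with or without the ImageNet weights (passing `weights="imagenet"` downloads them on first use, which is a large download). The local file path in the commented `load_weights()` call is hypothetical, a placeholder for wherever you saved the weights:

```python
import tensorflow as tf

# Build the stock VGG-19 architecture. Swap weights=None for
# weights="imagenet" to download and load the pre-trained weights.
model = tf.keras.applications.VGG19(weights=None)

# Alternatively, load weights into a matching architecture from a
# local file (the path below is a hypothetical placeholder):
# model.load_weights("vgg19_weights.h5")
```

Either way, the resulting model has the roughly $$140$$ million parameters mentioned above, so inference is cheap compared with training from scratch.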