#008 TF 2.0 An implementation of a Shallow Neural Network in tf.keras – digits dataset

In this post we will see how we can classify handwritten digits using a shallow neural network implemented with tf.keras.

Table of Contents:

  1. Load the digits dataset
  2. Implementing a Neural Network
  3. Visualization and Testing

1. Load the digits dataset

First, let us import all necessary libraries.
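A minimal set of imports might look as follows; the use of scikit-learn for the data and matplotlib for plotting is an assumption based on the steps described below.

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
```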

After the imports, we can use scikit-learn to load the digits data. The load_digits() function loads the dataset bundled with scikit-learn, and we then need to split it into train and test sets.
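A sketch of the loading and splitting step; the 80/20 split ratio and the random_state value are assumptions:

```python
# Load the digits dataset: 1797 samples of 8x8 grayscale images.
digits = load_digits()
X, y = digits.data, digits.target  # X: (1797, 64), y: (1797,)

# Split into train and test sets (the 80/20 ratio is an assumption).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```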

We can also plot some digits to see how they look.
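For example, the first few images could be displayed like this (the number of digits shown and the gray colormap are arbitrary choices):

```python
# Plot the first five digits together with their labels.
fig, axes = plt.subplots(1, 5, figsize=(10, 3))
for ax, image, label in zip(axes, digits.images, digits.target):
    ax.imshow(image, cmap='gray')
    ax.set_title(f'Label: {label}')
    ax.axis('off')
plt.show()
```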

Many machine learning algorithms cannot analyze categorical data directly. That is, neurons usually output either 0 or 1. Hence, if we have digit classes going from “0” to “9”, we will use 10 binary output neurons. This is known as one-hot encoding [1]. For example, if the output should be the digit 5, the 6th neuron should output 1 and all the remaining ones should be zeros. Note that the first neuron is active for a “zero” digit.
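One way to produce the one-hot labels is the to_categorical() utility from tf.keras:

```python
# One-hot encode the labels: e.g. digit 5 -> [0 0 0 0 0 1 0 0 0 0].
y_train_onehot = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test_onehot = tf.keras.utils.to_categorical(y_test, num_classes=10)
```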

2. Implementing a Neural Network

When all the data is loaded and prepared, it is time to create a model. We will use the simple Sequential API to do this. Our model will have two layers: 64 neurons in the input layer (one per pixel of the 8 × 8 image), 64 in the hidden layer, and 10 in the output layer.
We will use a normal initializer, which generates the initial weights from a normal distribution.
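A sketch of such a model with the Sequential API; the ReLU activation in the hidden layer, the softmax output, and the default RandomNormal settings are assumptions:

```python
# Two-layer network: 64 input features -> 64 hidden units -> 10 outputs.
initializer = tf.keras.initializers.RandomNormal()
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu',
                          kernel_initializer=initializer,
                          input_shape=(64,)),
    tf.keras.layers.Dense(10, activation='softmax',
                          kernel_initializer=initializer)
])
```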

The optimizer we’ll use is Adam. It is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update the network weights, and it is popular in deep learning because it achieves good results.
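In tf.keras, the optimizer can be instantiated like this (the learning rate shown is the library default, used here as an assumption):

```python
# Instantiate Adam; 0.001 is the default learning rate.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
```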

To make this work, we need to compile the model. An important choice to make is the loss function. We use the categorical_crossentropy loss because it measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry belongs to exactly one class). In other words, this loss function is used to solve a multi-class classification problem.
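Putting it together, a sketch of the compile and training step; the number of epochs and the batch size are assumptions, not necessarily the post's exact settings:

```python
# Compile with categorical cross-entropy and track accuracy.
model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train on the one-hot encoded labels.
history = model.fit(X_train, y_train_onehot,
                    epochs=50, batch_size=32,
                    validation_data=(X_test, y_test_onehot))
```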


3. Visualization and Testing

Let’s now visualize the outputs of our neural network.
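One way to do this is to evaluate the model on the test set and then plot a few test digits together with the predicted class, i.e. the index of the most active output neuron; the details below are a sketch:

```python
# Evaluate on the held-out test set.
test_loss, test_acc = model.evaluate(X_test, y_test_onehot, verbose=0)
print(f'Test accuracy: {test_acc:.4f}')

# Predicted class = index of the most active output neuron.
predictions = model.predict(X_test)
fig, axes = plt.subplots(1, 5, figsize=(10, 3))
for i, ax in enumerate(axes):
    ax.imshow(X_test[i].reshape(8, 8), cmap='gray')
    ax.set_title(f'Predicted: {np.argmax(predictions[i])}')
    ax.axis('off')
plt.show()
```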

Summary

Our shallow two-layer neural network achieved a high classification accuracy of 97%. In the next post we will learn how to perform classification with a convolutional neural network on the MNIST dataset using TensorFlow 2.0.

