# TF An implementation of a Shallow Neural Network in TensorFlow – Circles dataset

In this post we will see in detail how to build a shallow neural network in TensorFlow. Our first step will be to import all the libraries that we will need in the following code.
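
The imports below are a minimal sketch of what that cell could contain. Since the post relies on placeholders and sessions, we assume the TensorFlow 1.x API, loaded here through the `compat.v1` module:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

# The post uses placeholders and sessions, i.e. the TensorFlow 1.x API.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```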

Now, let’s create the same dataset used in this post.
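
A sketch using scikit-learn’s `make_circles`; the sample count, noise level, and radius factor below are illustrative assumptions:

```python
# Two concentric circles of points, one circle per class (labels 0 and 1).
X, y = make_circles(n_samples=1000, noise=0.05, factor=0.5, random_state=42)
```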

Then, we split the dataset into two parts: a train set and a test set.
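
For example, with `train_test_split` (the 80/20 ratio is an assumption):

```python
# Hold out 20% of the points for evaluating the trained network.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```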

Now let’s visualize the data we just generated.
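
One way to plot the training points, colored by class:

```python
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=plt.cm.coolwarm, s=10)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Circles dataset")
plt.show()
```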

With the following code we will make sure that \(X \) and \(y \) are of type float (this is very important when defining placeholders later on). We also want to avoid rank-one arrays, so we will use the reshape() function.
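
A possible version of that code, assuming the column-per-example convention, so that \(X \) has shape \((2, m) \) and \(y \) has shape \((1, m) \), matching the weight shapes used further down:

```python
# Cast to float32 (the dtype of the placeholders defined later) and
# reshape so that each column holds one example.
X_train = X_train.T.astype(np.float32)                # (2, m_train)
X_test = X_test.T.astype(np.float32)                  # (2, m_test)
y_train = y_train.reshape(1, -1).astype(np.float32)   # (1, m_train)
y_test = y_test.reshape(1, -1).astype(np.float32)     # (1, m_test)
```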

After reshaping, let’s check the dimensions of our divided dataset.
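
For example (the comments reflect the 80/20 split assumed above):

```python
print("X_train:", X_train.shape)  # (2, 800)
print("y_train:", y_train.shape)  # (1, 800)
print("X_test: ", X_test.shape)   # (2, 200)
print("y_test: ", y_test.shape)   # (1, 200)
```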

Before training our neural network, some hyperparameters must be defined. Here, the number of epochs is equal to \(400 \); that is, we will perform the forward and backward pass \(400 \) times. The learning rate is set to \(0.03 \).
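
In code:

```python
number_of_epochs = 400
learning_rate = 0.03
```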

In this architecture, we have 2 layers: one hidden layer with \(4 \) units and an output layer with a single unit. The final unit outputs values between 0 and 1. Since we only want to know whether each point belongs to class 0 or class 1, a threshold must be set: any value above 0.5 is treated as 1, and anything below as 0.
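
The layer sizes and the threshold can be written down as constants (the variable names are our own):

```python
n_input = 2      # two input features per point
n_hidden = 4     # units in the hidden layer
n_output = 1     # a single output unit
threshold = 0.5  # probability cut-off for assigning class 1
```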

With the following two cells we will define two placeholders: data and target. We will use these two placeholders to feed in the data for training and testing.
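
A sketch of those two cells; the second dimension is left as None so the same placeholders accept both the train and the test set:

```python
data = tf.placeholder(tf.float32, shape=(n_input, None), name="data")
target = tf.placeholder(tf.float32, shape=(n_output, None), name="target")
```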

We will define the weights as TensorFlow variables, because these are the parameters of our neural network that we need to learn. By choosing the dimensions of the weight matrices we actually define the structure of our neural network: the number of layers and the number of units per layer.
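
A possible definition; the small random initialization scale is an assumption:

```python
# W1 maps the 2 inputs to the 4 hidden units, W2 maps the 4 hidden
# units to the single output. These shapes fix the architecture.
W1 = tf.Variable(tf.random_normal((n_hidden, n_input)) * 0.01, name="W1")
W2 = tf.Variable(tf.random_normal((n_output, n_hidden)) * 0.01, name="W2")
```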

The same goes for the biases.
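
For example, initialized to zeros:

```python
b1 = tf.Variable(tf.zeros((n_hidden, 1)), name="b1")
b2 = tf.Variable(tf.zeros((n_output, 1)), name="b2")
```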

The following function computes the forward propagation of a neural network.

The function shallowLayerNetwork takes the data, weights, and biases as inputs and returns the prediction values.
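
A sketch of that function, assuming a tanh activation in the hidden layer; the output layer returns raw logits so that the numerically stable cross-entropy loss below can be applied:

```python
def shallowLayerNetwork(X, W1, b1, W2, b2):
    Z1 = tf.matmul(W1, X) + b1   # (4, m) hidden pre-activations
    A1 = tf.tanh(Z1)             # (4, m) hidden activations
    Z2 = tf.matmul(W2, A1) + b2  # (1, m) output logits
    return Z2

logits = shallowLayerNetwork(data, W1, b1, W2, b2)
```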

Here we define the loss function.
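
One common choice that fits this setup is the mean sigmoid cross-entropy over all examples, computed directly from the logits:

```python
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=target, logits=logits))
```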

As an optimizer, let’s choose Adam.
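
In TensorFlow 1.x this is a one-liner:

```python
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
```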

Here we apply the \(sigmoid \) function as the activation function of the output layer, since we want to convert the output values into probabilities and then into actual class labels.
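
A sketch of that step, using the threshold defined earlier:

```python
probabilities = tf.sigmoid(logits)                           # values in (0, 1)
prediction = tf.cast(probabilities > threshold, tf.float32)  # hard 0/1 labels
```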

The following code provides the initialization of our variables in a session.
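
For example:

```python
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
```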

Finally, we will train our network.
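
A minimal full-batch training loop; the progress printout every 50 epochs is our own addition:

```python
cost_history = []
for epoch in range(number_of_epochs):
    _, cost = sess.run([optimizer, loss],
                       feed_dict={data: X_train, target: y_train})
    cost_history.append(cost)
    if epoch % 50 == 0:
        print("epoch {:3d}  loss {:.4f}".format(epoch, cost))
```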

Now, let’s see how the cost function changes.
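
Plotting the recorded losses:

```python
plt.plot(cost_history)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Training loss over epochs")
plt.show()
```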

Here are the results on the test set.
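
One way to quantify them is the accuracy of the thresholded predictions on the held-out points:

```python
y_pred = sess.run(prediction, feed_dict={data: X_test})
accuracy = np.mean(y_pred == y_test)
print("Test accuracy: {:.2%}".format(accuracy))
```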

Here are the values of the weights and biases learned during training.
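
They can be read out of the session like this:

```python
W1_val, b1_val, W2_val, b2_val = sess.run([W1, b1, W2, b2])
print("W1:\n", W1_val)
print("b1:\n", b1_val)
print("W2:\n", W2_val)
print("b2:\n", b2_val)
```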

We will now look in more detail at what our neural network computes in each layer. To do that, we will first define the activation functions that we will use.
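
NumPy versions of the two activations used above, so the forward pass can be replayed outside of TensorFlow:

```python
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)
```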

Let’s start by calculating the outputs of each unit in our neural network.
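
Replaying the forward pass on the test points with the learned parameters:

```python
Z1 = W1_val @ X_test + b1_val  # (4, m) hidden pre-activations
A1 = tanh(Z1)                  # (4, m) hidden-unit outputs
Z2 = W2_val @ A1 + b2_val      # (1, m) output pre-activation
A2 = sigmoid(Z2)               # (1, m) output-unit probabilities
```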

Next, let’s plot what each unit outputs.
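
For instance, one scatter plot per hidden unit, coloring each test point by that unit’s activation:

```python
fig, axes = plt.subplots(1, n_hidden, figsize=(16, 4))
for i, ax in enumerate(axes):
    sc = ax.scatter(X_test[0, :], X_test[1, :], c=A1[i, :],
                    cmap=plt.cm.coolwarm, s=10)
    ax.set_title("Hidden unit {}".format(i + 1))
    fig.colorbar(sc, ax=ax)
plt.show()
```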

In the next post we will see how to perform classification on the Circles dataset using a shallow neural network in Keras.
