
# Tag: Neural Network

### TF TensorBoard: Visualizing Learning

Highlights: In this post we will learn what TensorBoard is and how to use it. For most people, neural networks can sometimes be a bit of a black box, and debugging problems is much easier when we can see what the problem is. Thankfully, TensorBoard is a tool that will help us analyze neural networks and visualize learning. Tutorial Overview: Sequential API, Model Subclassing, Intro. The idea of TensorBoard is to help to…
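As a quick illustration (not from the post itself), this is a minimal sketch of how the TensorBoard callback is typically attached to a Keras model; the model architecture, log directory, and training data names are all placeholder assumptions:

```python
import tensorflow as tf

# A small placeholder model; the post's actual models may differ.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Log metrics and weight histograms; view with: tensorboard --logdir logs
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs", histogram_freq=1)
# model.fit(x_train, y_train, epochs=5, callbacks=[tb_callback])  # x_train/y_train assumed
```

Passing the callback to `fit` is all that is needed; TensorBoard then reads the event files written under `logs`.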

### #K An implementation of a Shallow Neural Network in Keras – MNIST dataset

In this post we will see how we can classify handwritten digits using a shallow neural network implemented in Keras. Our model will have 2 layers, with 64 neurons in the input layer and 10 neurons in the output layer. We will use a normal initializer, which generates tensors with a normal distribution. The optimizer we'll use is Adam, an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure…
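The described model can be sketched as follows. The 28×28 MNIST inputs flattened to 784 features and the ReLU/softmax activations are assumptions, since the excerpt does not state them:

```python
import tensorflow as tf
from tensorflow.keras import layers, initializers

# 2-layer model: 64 neurons in the first (input) layer, 10 in the output layer.
model = tf.keras.Sequential([
    layers.Dense(64, activation="relu",
                 kernel_initializer=initializers.RandomNormal(),  # normal initializer
                 input_shape=(784,)),                             # flattened 28x28 image (assumed)
    layers.Dense(10, activation="softmax",
                 kernel_initializer=initializers.RandomNormal()),
])

# Adam optimizer, as in the post; the loss choice here is an assumption.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

With one-hot labels, training would then be a plain `model.fit(x_train, y_train)` call on the flattened MNIST images.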

### #032 CNN Triplet Loss

In the last post, we talked about the Siamese Network, but we didn't discuss how to actually define an objective function to make our neural network learn. So, in order to do that, here we will define the Triplet Loss. Triplet Loss: One way to learn the parameters of the neural network, which gives us a good encoding for our pictures of faces, is to define the Triplet loss function and apply gradient descent on it. Let's see…
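As a sketch, the triplet loss can be written directly from its standard definition: for an anchor $$A$$, a positive $$P$$ (same person) and a negative $$N$$ (different person), the loss is $$\max(\left \| f(A)-f(P) \right \|^{2} - \left \| f(A)-f(N) \right \|^{2} + \alpha,\; 0)$$. The NumPy implementation on precomputed encodings and the margin value $$\alpha = 0.2$$ are illustrative assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss on precomputed encodings; alpha is the margin (assumed 0.2)."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)  # d(A, P)
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)  # d(A, N)
    # The loss is zero once the negative is at least alpha farther away than the positive.
    return np.maximum(pos_dist - neg_dist + alpha, 0.0)

# A triplet where the negative is already far from the anchor gives zero loss:
a, p, n = np.zeros(3), np.zeros(3), np.ones(3)
print(triplet_loss(a, p, n))  # 0.0
```

Gradient descent on this quantity pushes same-person encodings together and different-person encodings apart by at least the margin.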
Siamese Network: The job of the function $$d$$, which we presented in the previous post, is to take two faces and tell us how similar or how different they are. A good way to accomplish this is to use a Siamese network. We are used to seeing pictures of convnets, like the two networks in the picture below. We have an input image, denoted by $$x^{(1)}$$, which passes through a sequence of $$Convolutional$$,…
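A minimal sketch of what $$d$$ computes on the encodings produced by the shared convnet; the squared L2 distance and the 128-dimensional encoding size are assumptions, not stated in the excerpt:

```python
import numpy as np

def d(f_x1, f_x2):
    """Squared L2 distance between two encodings f(x1), f(x2) from the shared convnet."""
    return float(np.sum((f_x1 - f_x2) ** 2))

# Identical encodings give distance 0 (likely the same person); larger means more different.
enc = np.random.rand(128)  # stand-in for a convnet encoding (assumed 128-dim)
print(d(enc, enc))  # 0.0
```

In practice a threshold on this distance decides whether the two faces belong to the same person.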