
#020 CNN Data Augmentation

Data Augmentation. Most computer vision tasks could use more data, and data augmentation is one of the techniques often used to improve the performance of computer vision systems. Computer vision is a pretty complicated task: for an input image we have to figure out what is in that picture, and we need to learn a fairly complicated function to do that. In practice, having more data will help for almost all computer…
Read more
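The excerpt above carries no code, but a minimal sketch of the idea might use tf.keras preprocessing layers; the library choice and the particular transforms and parameter values are assumptions for illustration, not from the post:

```python
import tensorflow as tf

# A small augmentation pipeline built from Keras preprocessing layers.
# The transforms and their strengths are illustrative choices.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # random mirroring
    tf.keras.layers.RandomRotation(0.1),       # rotate up to +/-10% of a full turn
    tf.keras.layers.RandomZoom(0.1),           # random zoom in/out
])

# Apply on the fly during training so every epoch sees new variants.
images = tf.random.uniform((8, 224, 224, 3))   # stand-in batch of images
augmented = augment(images, training=True)
print(augmented.shape)  # (8, 224, 224, 3)
```

Running the pipeline with training=True means each epoch sees slightly different versions of the same images, which is the "more data" effect the excerpt describes.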

#019 CNN Transfer Learning

Transfer Learning. Last time we talked about the Inception Network and some other neural network architectures. All of these architectures are really big and hard to train, which poses a problem. If we are building a computer vision application, then rather than training a neural network from scratch we often make much faster progress if we download the network’s weights. In other words, someone else has already trained the network architecture and we…
Read more
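As a rough illustration of the "download the network’s weights" workflow, here is a hedged sketch using a pretrained Keras application model; the VGG16 backbone and the 10-class head are assumptions for the example, not the post’s choices:

```python
import tensorflow as tf

# Download an ImageNet-pretrained backbone instead of training from scratch.
base = tf.keras.applications.VGG16(include_top=False,
                                   weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the downloaded weights

# Add a small task-specific head and train only that part.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical 10-class task
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Freezing the backbone keeps the borrowed features intact; with more data one could later unfreeze some layers and fine-tune them at a low learning rate.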

#018 CNN Inception Network – Inception Module

Inception Network. In the previous post we’ve already seen all the basic building blocks of the Inception network. Here we will see how to put these building blocks together and build it. An example of an Inception module: to explain how the Inception Network works, we will consider a few steps. The third step may be (look at the red rectangle) just using a \(1\times 1 \) convolution, maybe with \(64\) filters, so we get a \(28\times\)…
Read more
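To make the module concrete, here is a sketch of one Inception-style block in Keras. Apart from the \(64\)-filter \(1\times 1 \) branch on a \(28\times 28 \) volume mentioned in the excerpt, the input depth and filter counts are illustrative assumptions (they follow the classic GoogLeNet 3a block):

```python
import tensorflow as tf
from tensorflow.keras import layers

# One Inception-style module over an assumed 28x28x192 input volume.
inputs = tf.keras.Input(shape=(28, 28, 192))

b1 = layers.Conv2D(64, 1, padding="same", activation="relu")(inputs)   # 1x1 branch
b2 = layers.Conv2D(96, 1, padding="same", activation="relu")(inputs)   # 1x1 bottleneck
b2 = layers.Conv2D(128, 3, padding="same", activation="relu")(b2)      # 3x3 branch
b3 = layers.Conv2D(16, 1, padding="same", activation="relu")(inputs)   # 1x1 bottleneck
b3 = layers.Conv2D(32, 5, padding="same", activation="relu")(b3)       # 5x5 branch
b4 = layers.MaxPooling2D(3, strides=1, padding="same")(inputs)         # pooling branch
b4 = layers.Conv2D(32, 1, padding="same", activation="relu")(b4)

# Every branch keeps the 28x28 spatial size, so the outputs can be
# stacked along the channel axis: 64 + 128 + 32 + 32 = 256 channels.
outputs = layers.Concatenate()([b1, b2, b3, b4])
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 28, 28, 256)
```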

#016 CNN Network in Network – 1×1 Convolutions

Network in Network – 1×1 Convolutions. In terms of designing \(ConvNet \) architectures, one of the ideas that really helps is using a \(1\times 1 \) convolution. You might be wondering what a \(1\times 1 \) convolution does. Isn’t that just multiplying by a number? It seems like a funny thing to do. However, it turns out that it’s not quite like that. Let’s take a look! What does a \(1\times 1 \) convolution do?…
Read more
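One way to see that a \(1\times 1 \) convolution is not just multiplying by a number: on a multi-channel input it computes a dot product across channels at every spatial position, like a per-pixel fully connected layer. A small NumPy sketch (all shapes are illustrative):

```python
import numpy as np

# A 1x1 convolution mixes channels at each pixel; it leaves H and W alone.
H, W, C_in, C_out = 6, 6, 32, 8
x = np.random.randn(H, W, C_in)
w = np.random.randn(C_in, C_out)         # one weight vector per output filter

# For every (h, w) position, dot the 32 input channels with each filter.
y = np.tensordot(x, w, axes=([2], [0]))  # equivalent to a 1x1 convolution
print(y.shape)  # (6, 6, 8): channels reduced from 32 to 8
```

Only with a single input channel does this collapse to multiplying the image by one number; with many channels it is a learned channel-mixing step, often used to shrink the channel dimension cheaply.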

#015 CNN Why ResNets Work?

Why \(ResNets \) Work. Why do \(ResNets \) work so well? Let’s go through one example that illustrates why, at least in the sense of how we can make them deeper and deeper without really hurting our ability to get them to do well on the training set. Doing well on the training set is usually a prerequisite to doing well on the test set. So, being able to…
Read more
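The "deeper without hurting the training set" argument rests on the skip connection making the identity function easy to learn: if the weight layers go to zero, the block passes its input through unchanged. A hedged Keras sketch of a plain residual block (the filter count and input shape are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Two conv layers plus a skip connection: out = g(z + a[l])."""
    shortcut = x                                      # the skip connection
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                   # add the input back in
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(32, 32, 64))           # illustrative input volume
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 32, 32, 64)
```

Because the addition needs matching shapes, the convolutions here use "same" padding and keep the channel count; when the dimensions change, ResNets typically insert a projection on the shortcut.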