
#005B Logistic Regression: from scratch vs. Scikit-Learn

Let's now compare Logistic Regression implemented from scratch with Logistic Regression from scikit-learn. Our dataset consists of two classes, class 0 and class 1, which we generated randomly. The training set has 2000 examples coming from the first and second class, and the test set has 1000 examples, 500 from each class. When we plot these datasets, they look like this: Python's scikit-learn library provides the LogisticRegression class, and we will apply it to…
Read more
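To give a flavor of that comparison, here is a minimal sketch, not the post's exact code, that generates a two-class dataset of the sizes described above and trains logistic regression both ways (the Gaussian blob centers, learning rate, and iteration count are illustrative assumptions):

```python
# A rough sketch, not the post's exact code: two Gaussian blobs stand in
# for the randomly generated class 0 / class 1 dataset described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n_per_class):
    x0 = rng.normal(loc=-1.0, scale=1.0, size=(n_per_class, 2))  # class 0
    x1 = rng.normal(loc=+1.0, scale=1.0, size=(n_per_class, 2))  # class 1
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

X_train, y_train = make_split(1000)   # 2000 training examples
X_test, y_test = make_split(500)      # 1000 test examples, 500 per class

# --- from scratch: logistic regression trained with gradient descent ---
w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(1000):
    z = X_train @ w + b
    a = 1.0 / (1.0 + np.exp(-z))            # sigmoid
    dz = a - y_train                        # gradient of cross-entropy loss
    w -= lr * X_train.T @ dz / len(y_train)
    b -= lr * dz.mean()

acc_scratch = ((X_test @ w + b > 0) == y_test).mean()

# --- scikit-learn ---
clf = LogisticRegression().fit(X_train, y_train)
acc_sklearn = clf.score(X_test, y_test)

print(f"from scratch: {acc_scratch:.3f}, scikit-learn: {acc_sklearn:.3f}")
```

On a simple dataset like this, both versions should land on nearly identical decision boundaries and test accuracies.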

#006B Vectorization and Broadcasting in Python

What is vectorization? Vectorization is essentially the art of getting rid of explicit for loops whenever possible. With vectorization, operations are applied to whole arrays instead of individual elements. The rule of thumb to remember is to avoid explicit for loops in your code. Deep learning algorithms tend to shine when trained on large datasets, so it's important that your code runs quickly. Otherwise, your code might take…
Read more
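As a quick illustration of the idea (not taken from the post itself), here is the classic dot-product timing comparison between an explicit for loop and a single vectorized NumPy call:

```python
# Illustrative timing sketch: explicit for loop vs. vectorized dot product.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Explicit for loop over individual elements
start = time.time()
c = 0.0
for i in range(n):
    c += a[i] * b[i]
loop_ms = 1000 * (time.time() - start)

# Vectorized: one call operating on the whole arrays at once
start = time.time()
c_vec = np.dot(a, b)
vec_ms = 1000 * (time.time() - start)

print(f"loop: {loop_ms:.1f} ms, vectorized: {vec_ms:.1f} ms")
```

On typical hardware the vectorized version is orders of magnitude faster, since NumPy pushes the loop into optimized compiled code.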

#012A Building a Deep Neural Network

Building blocks of a Deep Neural Network. In this post we will see what the building blocks of a Deep Neural Network are. We will pick one layer, for example layer \(l \) of a deep neural network, and we will focus on the computations for that layer. For layer \(l \) we have parameters \(\textbf{W}^{[l]} \) and \(b^{[l]} \). In the forward pass for layer \(l \) we get \(a^{[l]} \) as we input the activations from the previous…
Read more
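A minimal sketch of that single-layer forward step, assuming the usual notation \(z^{[l]} = \textbf{W}^{[l]} a^{[l-1]} + b^{[l]} \) and \(a^{[l]} = g^{[l]}(z^{[l]}) \); the function name and the tanh default below are illustrative, not the post's own code:

```python
# A minimal sketch of one layer's forward pass; the cache stores the values
# that the corresponding backward step will need.
import numpy as np

def linear_activation_forward(A_prev, W, b, activation=np.tanh):
    """Forward computation for one layer l.

    A_prev : activations a^[l-1] from the previous layer, shape (n_prev, m)
    W, b   : parameters W^[l] (n_l, n_prev) and b^[l] (n_l, 1)
    """
    Z = W @ A_prev + b          # z^[l] = W^[l] a^[l-1] + b^[l]
    A = activation(Z)           # a^[l] = g^[l](z^[l])
    cache = (A_prev, W, b, Z)   # kept for the backward pass
    return A, cache

# Tiny usage example: a layer with 4 units fed by 3 inputs, batch of 5
rng = np.random.default_rng(0)
A, cache = linear_activation_forward(rng.normal(size=(3, 5)),
                                     rng.normal(size=(4, 3)) * 0.01,
                                     np.zeros((4, 1)))
print(A.shape)  # (4, 5)
```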

#010A Gradient Descent for Neural Networks

In this post we will see how to implement gradient descent for a one hidden layer Neural Network, as presented in the picture below. One hidden layer Neural Network. The parameters for a one hidden layer Neural Network are \(\textbf{W}^{[1]} \), \(b^{[1]} \), \(\textbf{W}^{[2]} \) and \(b^{[2]} \). The numbers of units in each layer are: the input of a Neural Network is a feature vector, so the length of the “zero” layer \(a^{[0]} \) is the size of an input feature…
Read more
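As a sketch of what one update step looks like, assuming a tanh hidden layer and a sigmoid output unit with cross-entropy loss (the function name and toy shapes are illustrative, not the post's code):

```python
# A sketch of one gradient descent update for a one hidden layer network,
# assuming a tanh hidden layer and a sigmoid output unit (binary labels Y).
import numpy as np

def gradient_descent_step(X, Y, W1, b1, W2, b2, lr=0.1):
    m = X.shape[1]
    # forward pass
    Z1 = W1 @ X + b1
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2
    A2 = 1.0 / (1.0 + np.exp(-Z2))            # sigmoid output
    # backward pass (gradients of the cross-entropy loss)
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)        # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m
    # parameter update
    return W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2

# Tiny usage example: 2 inputs, 4 hidden units, 1 output, batch of 8
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(2, 8)), rng.integers(0, 2, size=(1, 8))
W1, b1 = rng.normal(size=(4, 2)) * 0.01, np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)) * 0.01, np.zeros((1, 1))
W1, b1, W2, b2 = gradient_descent_step(X, Y, W1, b1, W2, b2)
```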

#009 Activation functions and their derivatives

Activation functions. When we build a neural network, one of the choices we have to make is which activation functions to use in the hidden layers, as well as at the output unit of the Neural Network. So far, we have just been using the sigmoid activation function, but sometimes other choices can work much better. Let's take a look at some of the options. The \(sigmoid \) activation function: in the forward propagation steps for a Neural Network there are two steps…
Read more
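For reference, a small sketch of the sigmoid, tanh, and ReLU activations with their derivatives; the post may cover further options (such as leaky ReLU), so this is just the standard set:

```python
# The common activation functions and their derivatives, as plain NumPy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)            # g'(z) = g(z)(1 - g(z))

def tanh_prime(z):
    return 1 - np.tanh(z) ** 2    # g'(z) = 1 - tanh(z)^2

def relu(z):
    return np.maximum(0, z)

def relu_prime(z):
    return (z > 0).astype(float)  # 0 for z < 0, 1 for z > 0

z = np.linspace(-3, 3, 7)
print(sigmoid(z), sigmoid_prime(z), sep="\n")
```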