
Category: Deep Learning

#003 Advanced Computer Vision – Multi-Task Cascaded Convolutional Networks

Highlights: Face detection and alignment are correlated problems. Changes in pose, illumination, and occlusion in unconstrained environments can make these problems even more challenging. In this tutorial, we will study how deep learning approaches can perform remarkably well on both tasks. We will study a deep cascaded multi-task framework proposed by Kaipeng Zhang et al. [1] that predicts face and landmark locations in a coarse-to-fine manner. So let’s get started! Tutorial Overview:…
Read more
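A minimal sketch of running such a cascade end to end, assuming the third-party facenet-pytorch package and a local file group_photo.jpg (neither of which comes from the post), might look like this:

```python
from PIL import Image
from facenet_pytorch import MTCNN  # assumed third-party implementation of the MTCNN cascade

# keep_all=True returns every detected face, not only the most confident one
mtcnn = MTCNN(keep_all=True)

img = Image.open("group_photo.jpg")  # hypothetical input image
boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
# boxes:     (N, 4)    bounding boxes refined coarse-to-fine by the cascade stages
# probs:     (N,)      face confidence scores
# landmarks: (N, 5, 2) eye, nose, and mouth-corner positions predicted jointly with detection
```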

#002 Advanced Computer Vision – Motion Estimation With Optical Flow

Highlights: Techniques like Object Detection have enabled today’s computers to detect object instances with ease. However, tracking the motion of objects such as vehicles across all frames of a video, estimating their velocity, and predicting their motion requires an efficient method such as Optical Flow. In our previous posts, we provided a detailed explanation of two of the most common Optical Flow methods – the Lucas-Kanade method and the Horn & Schunck method. In…
Read more
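For the sparse case, OpenCV ships a pyramidal Lucas-Kanade implementation. A minimal sketch of tracking corner features across a video and reading off per-frame displacement (the file name and parameter values are illustrative assumptions, not the post's settings):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Corners worth tracking in the first frame
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal Lucas-Kanade: estimate where each tracked corner moved
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                               winSize=(21, 21), maxLevel=3)
    good_new, good_old = p1[status == 1], p0[status == 1]

    # Per-frame displacement in pixels is a crude proxy for object velocity
    speed = np.linalg.norm(good_new - good_old, axis=1)

    prev_gray, p0 = gray, good_new.reshape(-1, 1, 2)

cap.release()
```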

#005 RNN – Tackling Vanishing Gradients with GRU and LSTM

Highlights: Recurrent Neural Networks (RNNs) are sequence models that are a modern, more advanced alternative to traditional Neural Networks. From Speech Recognition to Natural Language Processing to Music Generation, RNNs continue to play a transformative role in handling sequential datasets. In this blog post, we will carry forward our knowledge from building Language Models to addressing issues that arise in those language models, such as Long-term Dependencies. These can also be termed as…
Read more
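Both gated cells are available directly in PyTorch. A minimal sketch with made-up layer sizes (illustrative assumptions, not the post's values) showing a GRU and an LSTM processing the same long sequence:

```python
import torch
import torch.nn as nn

# Toy sizes, assumed purely for illustration
batch, seq_len, in_dim, hid_dim = 4, 100, 8, 16
x = torch.randn(batch, seq_len, in_dim)

# Gated recurrent layers: the gates let the hidden state be copied almost
# unchanged across many steps, which helps gradients survive long sequences.
gru = nn.GRU(in_dim, hid_dim, batch_first=True)
lstm = nn.LSTM(in_dim, hid_dim, batch_first=True)

out_gru, h_n = gru(x)             # out_gru: (batch, seq_len, hid_dim)
out_lstm, (h_n, c_n) = lstm(x)    # the LSTM also carries a separate cell state c_n

print(out_gru.shape, out_lstm.shape, c_n.shape)
```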

#004 RNN – Language Modelling and Sampling Novel Sequences

Highlights: Recurrent Neural Networks (RNNs) are sequence models that are a modern, more advanced alternative to traditional Neural Networks. From Speech Recognition to Natural Language Processing to Music Generation, RNNs continue to play a transformative role in handling sequential datasets. In this blog post, we will learn how RNNs are used to build and train Language Models and then sample novel sequences from our trained models. By the end of the post, you…
Read more
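Once a language model is trained, sampling a novel sequence is just repeated "predict, draw, feed back". A minimal character-level sketch (the vocabulary, sizes, and the untrained GRU below are illustrative assumptions, not the model from the post):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical character-level setup, chosen only for illustration
vocab = list("abcdefghijklmnopqrstuvwxyz ")
vocab_size, hidden_size = len(vocab), 64

embed = nn.Embedding(vocab_size, hidden_size)
rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
head = nn.Linear(hidden_size, vocab_size)  # maps hidden state to next-character logits

def sample(length=50, start_char="a"):
    """Sample a novel sequence one character at a time, feeding each draw back in."""
    idx = torch.tensor([[vocab.index(start_char)]])
    h = None
    out_chars = [start_char]
    for _ in range(length - 1):
        x = embed(idx)                                   # (1, 1, hidden_size)
        y, h = rnn(x, h)                                 # keep the hidden state between steps
        probs = F.softmax(head(y[:, -1]), dim=-1)        # distribution over the next character
        idx = torch.multinomial(probs, num_samples=1)    # draw instead of taking the argmax
        out_chars.append(vocab[idx.item()])
    return "".join(out_chars)

print(sample())  # gibberish here, since these weights are untrained
```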

#003 RNN – Architectural Types of Different Recurrent Neural Networks

Highlights: Recurrent Neural Networks (RNNs) are sequence models that are a modern, more advanced alternative to traditional Neural Networks. From Speech Recognition to Natural Language Processing to Music Generation, RNNs continue to play a transformative role in handling sequential datasets. In this blog post, we will explore the various models and architectural categories of Recurrent Neural Networks. We will learn from real-life cases how these different RNN categories solve day-to-day problems…
Read more
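In practice the categories (many-to-one, many-to-many, and so on) differ mainly in which per-step outputs are kept. A minimal PyTorch sketch with made-up dimensions (illustrative assumptions only):

```python
import torch
import torch.nn as nn

# Toy dimensions, assumed purely for illustration
batch, seq_len, in_dim, hid_dim = 2, 10, 5, 8
x = torch.randn(batch, seq_len, in_dim)

rnn = nn.RNN(in_dim, hid_dim, batch_first=True)
outputs, h_n = rnn(x)  # outputs: one hidden state per time step

# Many-to-one (e.g. sentiment classification): keep only the final step
many_to_one = nn.Linear(hid_dim, 2)(outputs[:, -1])    # (batch, 2)

# Many-to-many, same length (e.g. per-word tagging): keep every step
many_to_many = nn.Linear(hid_dim, 4)(outputs)          # (batch, seq_len, 4)

print(many_to_one.shape, many_to_many.shape)
```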