#FA 002 Face Detection with OpenCV
Face detection is the ability of computer technology to locate people's faces within digital images. Face detection applications employ algorithms that detect human faces within larger images that may also contain other objects, such as landscapes, houses, and cars.
Table of Contents:
- Import required packages
- Select the network
- Preprocess the image, standardise, mean subtraction
- Process the image with a Neural Network
- Analyze detections
The importance of face detection can be seen as:
- The first step for any automatic face recognition system
- The first step in many Human Computer Interaction systems
- Expression Recognition
- Emotional State Recognition
- The first step in the development of surveillance systems
- The basis for tracking, since the face is a highly non-rigid object
To solve this problem, scientists have developed several different approaches:
- Knowledge-based methods
- Feature invariant methods
- Template matching
- Appearance-based methods
In this post, we will use the last approach: the appearance-based method, and in particular, Neural Networks. Although simpler methods exist, Neural Networks are among the most popular approaches to face detection.
OpenCV (Open Source Computer Vision Library) is released under a BSD license and is therefore free for both academic and commercial use. OpenCV was designed for computationally efficient applications and has a strong focus on real-time processing. Moreover, if OpenCL is employed, it can take advantage of hardware acceleration.
We will learn how to apply a face detection algorithm with OpenCV to single input images. In this post, we will use a pretrained Neural Network, based on a ResNet architecture.
In the next post, we will see how to implement this network from scratch and apply it to this example.
1. Import required packages
As the first step, let's import the required packages. Before this, make sure that you have installed these packages, using either pip or conda.
Next, we should define some important parameters, such as the path to the model, the framework we will use, and a confidence value (the certainty required to accept a detection). The path will vary depending on which operating system you use and where the folder is located. We will use the simplest approach and store everything in one place.
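For example, the parameters could be defined as follows. The values here are illustrative; the file names are the ones shipped with OpenCV's dnn samples, and the confidence threshold of 0.5 is just a reasonable starting point:

```python
# Hypothetical parameter values; adjust the file paths to your own layout.
# The model file names below are the standard ones from OpenCV's dnn samples.
framework = "tensorflow"                       # framework the model was built in
model_path = "opencv_face_detector_uint8.pb"   # TensorFlow weights
config_path = "opencv_face_detector.pbtxt"     # network configuration
conf_threshold = 0.5                           # minimum confidence to accept a detection
```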
2. Select the network
Now, it is important to load a pretrained neural network model. These models are part of the OpenCV library and were developed in either Caffe or TensorFlow. They are trained to detect objects, in this case faces. In the following posts, we will also explain how to retrain such models to detect different objects of interest. For now, we will use a model created and trained in TensorFlow.
Here is a nice collection of models that we can use in OpenCV:
Here is an example of one model implemented in Keras that can be used for face detection:
3. Preprocess the image, standardise, mean subtraction
Then, we load the image, extract its dimensions, and create a blob by resizing the image to a fixed 500×500 pixels and then normalizing it.
The dnn.blobFromImage function takes care of the pre-processing, which includes setting the blob dimensions, mean subtraction, and normalization.
4. Process the image with a Neural Network
cv2::dnn::Net Class Reference: This class allows us to create and manipulate comprehensive artificial neural networks.
A Neural Network is represented as a directed acyclic graph (DAG), where vertices are Layer instances and edges specify the relationships between layer inputs and outputs.
Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.
This class supports reference counting of its instances, i.e. copies point to the same instance.
5. Analyze detections
After we have detected the faces in the image, we can loop over the detections. In this loop, when the confidence value is greater than the threshold we defined earlier, we accept that detection and draw a rectangle around the face. What our neural network outputs are the coordinates where each face was detected; these coordinates are the start and end points of the rectangle.
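This loop can be sketched on a dummy detections array in the network's output format; the image size and box values below are made up for illustration:

```python
import numpy as np

conf_threshold = 0.5

# A dummy detections array in the network's output format (1, 1, N, 7):
# [image_id, class_id, confidence, x_start, y_start, x_end, y_end],
# with box coordinates normalized to [0, 1]
detections = np.array([[[
    [0, 1, 0.99, 0.30, 0.25, 0.55, 0.60],   # a confident detection, accepted
    [0, 1, 0.10, 0.70, 0.10, 0.80, 0.30],   # a weak one, rejected
]]], dtype=np.float32)

(h, w) = (480, 640)   # dimensions of the original image
boxes = []
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > conf_threshold:
        # scale the normalized coordinates back to pixel values
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x_start, y_start, x_end, y_end) = box.astype(int)
        boxes.append((int(x_start), int(y_start), int(x_end), int(y_end)))
        # on a real image you would draw the rectangle here:
        # cv2.rectangle(image, (x_start, y_start), (x_end, y_end), (0, 255, 0), 2)

print(boxes)   # [(192, 120, 352, 288)]
```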
The final remaining task is to plot the image together with the detected faces. This can be done using either cv2.imshow() or plt.imshow(). The difference between these is in how the image is displayed: with cv2 a new window opens, whereas with plt we can see the output inline. Also, using these coordinates, we can crop just a face out of the image.
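Cropping a face is plain NumPy slicing with the rectangle coordinates. The snippet below uses a synthetic image and hypothetical coordinates so it stands alone:

```python
import numpy as np
import matplotlib.pyplot as plt  # cv2.imshow would open a separate window instead

# A synthetic image stands in for the photo; the coordinates are hypothetical
image = np.zeros((480, 640, 3), dtype=np.uint8)
(x_start, y_start, x_end, y_end) = (192, 120, 352, 288)

# Crop just the face region: rows are y, columns are x
face = image[y_start:y_end, x_start:x_end]
print(face.shape)   # (168, 160, 3)

# To display it inline, reverse the channel order (OpenCV loads BGR):
# plt.imshow(face[:, :, ::-1])
# plt.show()
```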
The code above shows how we can export a single face, but in real-world applications there will often be more faces in the photo. The Neural Network used in this example can detect multiple faces, but we need to determine their number and save their coordinates. This is done by vertically stacking the coordinates of every detected face.
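The stacking step can be sketched with np.vstack; the coordinates below are hypothetical detections:

```python
import numpy as np

# Hypothetical coordinates of several detected faces: (x_start, y_start, x_end, y_end)
all_faces = np.empty((0, 4), dtype=int)
for box in [(192, 120, 352, 288), (400, 100, 500, 230)]:
    # vertically stack each accepted detection into one (N, 4) array
    all_faces = np.vstack([all_faces, np.array(box)])

print(all_faces.shape)   # (2, 4) -> two faces, four coordinates each
```

The number of detected faces is then simply `all_faces.shape[0]`, and each row can be used to crop one face as shown above.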
Great! We have explained how you can use neural networks to detect faces in your images. In the next post, you can find out more about face detection in videos.