#010 How to align faces with OpenCV in Python
Highlight: In this post we are going to demonstrate how to apply face alignment using OpenCV and Python. Face alignment is one important step that we need to master before we start to work on some more complicated image processing tasks in Python. So, let’s see what face alignment is and why this method is necessary if we want to achieve higher accuracy in face recognition algorithms.
1. What is face alignment?
Face alignment can be seen as the process of transforming sets of points from input images (input coordinate systems) into a single coordinate system. We call this coordinate system the output coordinate system, and it serves as our stationary reference frame. Our goal is to warp and transform all input coordinates so that they align with the output coordinates. For this purpose, we will apply three basic affine transformations: rotation, translation and scaling. In this way, we can transform facial landmarks from the input coordinate systems into the output coordinate system.
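As a quick illustration of the idea (the landmark coordinates and transform parameters below are invented), a single 2x3 affine matrix can encode rotation, scaling and translation at once and be applied to a landmark point:

```python
import numpy as np

# A hypothetical facial landmark in the input coordinate system
landmark = np.array([1.0, 0.0])

# Build a 2x3 affine matrix: rotate by 90 degrees, scale by 1.0,
# then translate by (tx, ty)
theta = np.deg2rad(90)
scale = 1.0
tx, ty = 0.0, 0.0
M = np.array([
    [scale * np.cos(theta), -scale * np.sin(theta), tx],
    [scale * np.sin(theta),  scale * np.cos(theta), ty],
])

# Apply the transform to the homogeneous point [x, y, 1]
aligned = M @ np.append(landmark, 1.0)
print(aligned)  # approximately (0, 1): the point rotated by 90 degrees
```

OpenCV's cv2.warpAffine applies exactly this kind of 2x3 matrix to every pixel of an image, which is what we will rely on later in the post.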
To perform face alignment, we can use several different methods. For this post, we chose a simple method that focuses only on the area around the eyes. The process consists of the following steps:
- Detecting faces and eyes in the image
- Calculating the centers of the detected eyes
- Drawing a line between the centers of the two eyes
- Drawing a horizontal line, forming a right triangle with that line
- Calculating the lengths of the three edges of the triangle
- Calculating the rotation angle
- Rotating the image by the calculated angle
- Scaling the image
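The geometric core of these steps can be sketched in a few lines; this is a simplified sketch that assumes the eye centers are already known (the coordinates below are invented), and it uses np.arctan2, which handles all quadrants, rather than plain np.arctan:

```python
import numpy as np

def alignment_angle(left_eye_center, right_eye_center):
    """Angle (in degrees) of the line connecting two eye centers,
    measured relative to the horizontal."""
    dx = right_eye_center[0] - left_eye_center[0]
    dy = right_eye_center[1] - left_eye_center[1]
    return np.degrees(np.arctan2(dy, dx))

# Eyes on the same horizontal line -> no rotation needed
print(alignment_angle((100, 120), (200, 120)))  # 0.0
# Right eye lower than the left (larger y) -> positive angle
print(alignment_angle((100, 120), (200, 140)))
```

Rotating the image by the negative of this angle makes the eye line horizontal, which is exactly what the rest of the post implements with OpenCV.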
In order to better understand this, let’s look at the code.
2. Face alignment with OpenCV
For face and eye detection, we are going to use OpenCV's Haar cascade configurations (the frontal face detection and eye detection modules). Before you start coding, be sure to download these two files from the GitHub directory of Haar cascades and load them into your Python script.
```python
# Necessary imports
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
```
```python
# Creating face_cascade and eye_cascade objects
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")
```
As you can see, we created the face_cascade and eye_cascade objects using the cv2.CascadeClassifier() class. We will use these two objects to detect faces and eyes in our image.
```python
# Loading the image
img = cv2.imread('emily.jpg')
cv2_imshow(img)
```
First, we need to convert our image to grayscale because Haar cascades work only on grayscale images. Hence, we are going to detect faces and eyes in the grayscale image, but we will draw the rectangles on the corresponding color image. To extract the coordinates of the rectangle that we are going to draw around the face, we create the variable faces. The detectMultiScale() method returns, for each detection, a tuple of four elements, where \(x \) and \(y \) are the coordinates of the top-left corner, and \(w \) and \(h \) are the width and height of the rectangle. This method takes several arguments: the first is the grayscale image; the second is the scale factor, which specifies how much the image size is reduced at each image scale; the third and last argument is the minimum number of neighbors.
```python
# Converting the image into grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Creating variable faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)
# Defining and drawing the rectangle around the face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)
cv2_imshow(img)
```
Now that we have the rectangle, we are ready to move on to eye detection. For this purpose, we first need to create two regions of interest located inside the rectangle. Why do we need two regions? Well, we need the first region in the grayscale image, where we are going to detect the eyes. The second region is needed in the color image, where we are going to draw the rectangles.
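One detail worth knowing here: NumPy slicing returns a view, not a copy, which is why drawing into roi_color also changes img itself. A minimal NumPy-only sketch of this behaviour (the array values are invented):

```python
import numpy as np

# A dummy 6x6 single-channel "image"
image = np.zeros((6, 6), dtype=np.uint8)

# A region of interest created by slicing -- this is a view into image
roi = image[1:4, 1:4]

# Writing into the ROI...
roi[:] = 255

# ...modifies the original array as well
print(image[2, 2])  # 255
print(image[0, 0])  # 0 (outside the ROI)
```

This is exactly why, later on, we can call cv2.rectangle on roi_color and still see the result when displaying img.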
```python
# Creating two regions of interest
roi_gray = gray[y:(y + h), x:(x + w)]
roi_color = img[y:(y + h), x:(x + w)]
```
Next, we detect the eyes with the same method as above. Then, we create a for loop to separate one eye from the other, storing the coordinates of the first and second detected eye in the eye_1 and eye_2 variables, respectively.
```python
# Creating variable eyes
eyes = eye_cascade.detectMultiScale(roi_gray, 1.1, 4)
index = 0
# Creating for loop in order to divide one eye from another
for (ex, ey, ew, eh) in eyes:
    if index == 0:
        eye_1 = (ex, ey, ew, eh)
    elif index == 1:
        eye_2 = (ex, ey, ew, eh)
    # Drawing rectangles around the eyes
    cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 0, 255), 3)
    index = index + 1
cv2_imshow(img)
```
In the following lines of code, we differentiate between the left_eye and the right_eye: the eye with the smaller \(x \) coordinate is the left eye.
```python
# The eye with the smaller x-coordinate is the left eye
if eye_1[0] < eye_2[0]:
    left_eye = eye_1
    right_eye = eye_2
else:
    left_eye = eye_2
    right_eye = eye_1
```
Now, let’s draw a line between the center points of the two eyes. But before we do that, we need to calculate the coordinates of the center points of the rectangles. For better visualization, take a look at the following example.
Now, let’s implement these calculations in our code. Note that index 0 refers to \(x \) coordinate, index 1 refers to \(y \) coordinate, index 2 refers to rectangle width, and finally index 3 refers to rectangle height.
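As a quick sanity check with invented numbers: for a rectangle given as \((x, y, w, h) \), the center is \((x + w/2,\; y + h/2) \).

```python
# A hypothetical eye detection: top-left corner (20, 30), width 100, height 50
x, y, w, h = 20, 30, 100, 50

# Center = top-left corner plus half the width/height
center = (int(x + w / 2), int(y + h / 2))
print(center)  # (70, 55)
```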
```python
# Calculating coordinates of the central points of the rectangles
left_eye_center = (int(left_eye[0] + (left_eye[2] / 2)),
                   int(left_eye[1] + (left_eye[3] / 2)))
left_eye_x = left_eye_center[0]
left_eye_y = left_eye_center[1]

right_eye_center = (int(right_eye[0] + (right_eye[2] / 2)),
                    int(right_eye[1] + (right_eye[3] / 2)))
right_eye_x = right_eye_center[0]
right_eye_y = right_eye_center[1]

cv2.circle(roi_color, left_eye_center, 5, (255, 0, 0), -1)
cv2.circle(roi_color, right_eye_center, 5, (255, 0, 0), -1)
cv2.line(roi_color, right_eye_center, left_eye_center, (0, 200, 200), 3)
```
The next step will be to draw a horizontal line and calculate the angle between that line and the line that connects two central points of the eyes. Our goal is to rotate the image based on this angle. We can do that in the following way.
```python
if left_eye_y > right_eye_y:
    A = (right_eye_x, left_eye_y)
    # Integer -1 indicates that the image will rotate in the clockwise direction
    direction = -1
else:
    A = (left_eye_x, right_eye_y)
    # Integer 1 indicates that the image will rotate in the counter-clockwise direction
    direction = 1

cv2.circle(roi_color, A, 5, (255, 0, 0), -1)
cv2.line(roi_color, right_eye_center, left_eye_center, (0, 200, 200), 3)
cv2.line(roi_color, left_eye_center, A, (0, 200, 200), 3)
cv2.line(roi_color, right_eye_center, A, (0, 200, 200), 3)
cv2_imshow(img)
```
It is important to note that here, we specified in which direction our image will rotate. If \(y \) coordinate of the left eye is bigger than \(y \) coordinate of the right eye, we need to rotate our image in the clockwise direction. Otherwise, we would rotate our image in the counter-clockwise direction.
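That decision can be isolated into a tiny helper (a sketch with invented coordinates; remember that in image coordinates a larger \(y \) means lower on the screen, and -1 denotes clockwise rotation as in the snippet above):

```python
def rotation_direction(left_eye_y, right_eye_y):
    """Return -1 for clockwise rotation, 1 for counter-clockwise."""
    return -1 if left_eye_y > right_eye_y else 1

print(rotation_direction(140, 120))  # -1: left eye is lower, rotate clockwise
print(rotation_direction(120, 140))  #  1: right eye is lower, rotate counter-clockwise
```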
To calculate the angle, we first need to find the lengths of the two legs of the right triangle. Then we can find the required angle using the following formula: \(\theta = \arctan \left( \frac{\Delta y}{\Delta x} \right) \)
```python
delta_x = right_eye_x - left_eye_x
delta_y = right_eye_y - left_eye_y
angle = np.arctan(delta_y / delta_x)
angle = (angle * 180) / np.pi
```
It is important to note here that the np.arctan function returns the angle in radians. To convert the result to degrees, we need to multiply the angle \(\theta \) by 180 and then divide it by \(\pi \).
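For example (with invented leg lengths), two equal legs of 100 pixels give an angle of \(\pi/4 \) radians, i.e. 45 degrees:

```python
import numpy as np

delta_x, delta_y = 100, 100
angle_rad = np.arctan(delta_y / delta_x)
angle_deg = (angle_rad * 180) / np.pi   # equivalent to np.degrees(angle_rad)

print(angle_rad)  # pi / 4, roughly 0.7854
print(angle_deg)  # 45 degrees
```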
Now, we can finally rotate our image by the angle \(\theta \).
```python
# Width and height of the image
h, w = img.shape[:2]
# Calculating the center point of the image
# Integer division "//" ensures that we receive whole numbers
center = (w // 2, h // 2)
# Defining a rotation matrix M using the cv2.getRotationMatrix2D method
M = cv2.getRotationMatrix2D(center, angle, 1.0)
# Applying the rotation to our image using the cv2.warpAffine method
rotated = cv2.warpAffine(img, M, (w, h))
cv2_imshow(rotated)
```
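Under the hood, cv2.getRotationMatrix2D builds a 2x3 matrix of the form documented by OpenCV: with \(\alpha = s \cos\theta \) and \(\beta = s \sin\theta \), the matrix rotates around the given center instead of the origin. A NumPy-only sketch with an invented center and a 90-degree angle:

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """NumPy re-implementation of the matrix cv2.getRotationMatrix2D returns."""
    a = scale * np.cos(np.deg2rad(angle_deg))
    b = scale * np.sin(np.deg2rad(angle_deg))
    cx, cy = center
    return np.array([
        [ a, b, (1 - a) * cx - b * cy],
        [-b, a, b * cx + (1 - a) * cy],
    ])

# Rotate the point (60, 50) by 90 degrees around the center (50, 50)
M = rotation_matrix_2d((50, 50), 90)
p = M @ np.array([60, 50, 1])
print(np.round(p))  # [50. 40.]
```

cv2.warpAffine then applies this matrix to every pixel, which is why the whole image turns around its center rather than around the top-left corner.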
As you can see in the figure above, we obtained excellent results for face and eye detection. Now we need to scale our image, and we will use the distance between the eyes as a reference. First, we need to calculate this distance. We have already calculated the lengths of the two legs of the right triangle, so we can use the Pythagorean theorem to compute the distance between the eyes, which is the hypotenuse. We do the same for every other picture processed with this code. Then, we calculate the ratio of these distances and scale our images based on that ratio.
```python
# Calculating the distance between the eyes in the first image
dist_1 = np.sqrt((delta_x * delta_x) + (delta_y * delta_y))
# Calculating the distance between the eyes in the second image
dist_2 = np.sqrt((delta_x_1 * delta_x_1) + (delta_y_1 * delta_y_1))
# Calculating the ratio
ratio = dist_1 / dist_2
```
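With invented leg lengths for the two images, the ratio computation behaves like this (the deltas below are made-up values, not results from the pictures above):

```python
import numpy as np

# Invented right-triangle legs for two images
delta_x, delta_y = 60, 80        # first image
delta_x_1, delta_y_1 = 30, 40    # second image

# Hypotenuse of each triangle = distance between the eye centers
dist_1 = np.sqrt(delta_x ** 2 + delta_y ** 2)        # 100.0
dist_2 = np.sqrt(delta_x_1 ** 2 + delta_y_1 ** 2)    # 50.0

# The eyes in the second image are half as far apart,
# so it should be scaled up by a factor of 2
ratio = dist_1 / dist_2
print(ratio)  # 2.0
```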
```python
# Defining the width and height of the image
h = 476
w = 488
# Defining the dimensions of the resized image
dim = (int(w * ratio), int(h * ratio))
# We have obtained a new image that we call resized
resized = cv2.resize(rotated, dim)
cv2_imshow(resized)
```
```python
# Defining the width and height of the image
h = 740
w = 723
# Defining the dimensions of the resized image
dim = (int(w * ratio), int(h * ratio))
# We have obtained a new image that we call resized
resized = cv2.resize(rotated, dim)
cv2_imshow(resized)
```
In this post, we learned how to align faces with OpenCV in Python. Facial alignment is a crucial technique that we can use to improve the accuracy of face recognition algorithms. In the next post, we will explain how we can detect eye blinking in videos.
- Face Alignment for Face Recognition in Python within OpenCV by Sefik Ilkin Serengil
- Face Alignment with OpenCV and Python by Adrian Rosebrock