#006 OpenCV projects – How to detect contours and match shapes in an image

Highlights: In this post, we will learn how to detect contours in images. First, we will remind ourselves how to detect lines and circles using the Hough transform. Then, we will move on to detecting more complex shapes like contours. Moreover, we will learn how to analyze these shapes, which will help us better understand the content of the picture. Let's begin with our post.

Tutorial overview:

  1. What are contours in an image?
  2. Detecting lines and circles using Hough transform
  3. Detecting contours

1. What are contours in an image?

Have you ever wondered how the human brain is so good at recognizing patterns and shapes? Recognizing simple shapes like circles, rectangles, or triangles seems so easy and natural to us. To better understand this human ability let’s take a look at the following image.

OpenCV shapes

You probably recognized in a split second what is in this picture. It is a human hand showing numbers from one to five. Now, let's imagine that these objects are projected onto a wall and that we have their shadows.

OpenCV shapes

Now, we can see the silhouettes of the same objects. Although the image has been altered, we can still recognize what is in it. In the process, we lost the colors but kept the most important parts of the image – the edges and contours. Therefore, we can conclude from this example that sometimes simplified representations of objects, like contours (shapes), can help us recognize the content of an image.

So, what exactly is an edge? If we look carefully at the image, we will notice that an edge represents a sharp change in pixel intensity.

A contour can be seen as a curve joining all the continuous points along an edge. In the computer vision field, contours are a very useful tool for shape analysis and for object detection and recognition. We can use various algorithms to analyze the contours of the many shapes that we deal with in the real world.

How can we detect shapes?

So, to detect shapes we first need to analyze and understand the contours of that shape. The easiest way to do that is to use a binary image (the object that we want to detect should be white and the background should be black). Hence, before detecting contours we need to apply thresholding or Canny edge detection.
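As a quick illustration, here is a minimal sketch of both ways of obtaining a binary image. The file name shapes.png and the threshold values below are placeholders for this sketch, not part of the original example.

import cv2

# Minimal sketch (hypothetical file name and threshold values):
# turn a grayscale image into a binary image in two common ways.
gray = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)

# 1) Global thresholding – dark objects on a light background become
#    white on a black background when the inverted threshold is used
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

# 2) Canny edge detection – keeps only the edges of the objects
edges = cv2.Canny(gray, 50, 150)

Now, let's take a look at the following image.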

Here, in the image above we can see several different shapes. Let's say that we want to identify the shape of the square. How can we do that? One way is to apply a method called template matching (a brute-force search method). We take another image of that object as a reference and try to identify the same shape in our original image. The idea of this method is to find the correlation between the object in the reference image and the object in the original image. We move the reference image across the input image and at each position we calculate an inner product. When the reference image overlaps the corresponding object in the input image, we get a high matching score and we are able to detect the object, as the sketch below illustrates.
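A minimal template-matching sketch, assuming hypothetical file names scene.png and square_template.png (they are not part of the original post):

import cv2

# Hypothetical file names – a scene containing several shapes and a
# cropped template of the square we are looking for
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("square_template.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Slide the template over the scene and compute a normalized correlation score
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# The position with the highest score is the best match
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print("Best match score:", max_val, "at", top_left)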

However, this method will not work if two objects have different sizes. That’s why we need to choose a different approach. Instead of trying to find an identical match, we need to compare the corresponding properties between the original and the reference shape.
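For example, cv2.matchShapes() compares two contours through their Hu moment invariants, which makes the comparison largely independent of size. A minimal sketch, assuming two hypothetical binary images reference.png and scene.png:

import cv2

# Hypothetical binary images: one with the reference shape, one with the scene
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

ref_cnts, _ = cv2.findContours(ref, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
scene_cnts, _ = cv2.findContours(scene, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# A lower score means more similar shapes, regardless of their size
for cnt in scene_cnts:
    score = cv2.matchShapes(ref_cnts[0], cnt, cv2.CONTOURS_MATCH_I1, 0.0)
    print(score)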

There are several functions in OpenCV that we can use for this purpose. Before working with contours, though, let's start with simpler shapes – in our code, we will first use the Hough transform line detection method.

2. Detecting lines and circles using Hough transform

Let’s first remind ourselves what the Hough transform method is. It is a feature extraction technique used in image processing for detecting simple shapes such as circles and lines. If you want to learn the theory and math behind this method you can visit the following post.

Now, let’s see how we can implement the Hough transform method in our code to detect basic lines and circles.

import numpy as np
import matplotlib.pyplot as plt
import cv2
from google.colab.patches import cv2_imshow

# Draw the lines represented in the Hough accumulator on the given image
def drawhoughLinesOnImage(image, houghLines):
    for line in houghLines:
        for rho, theta in line:
            a = np.cos(theta)
            b = np.sin(theta)
            x0 = a * rho
            y0 = b * rho
            # Extend each line far beyond the image borders so it is fully drawn
            x1 = int(x0 + 1000 * (-b))
            y1 = int(y0 + 1000 * (a))
            x2 = int(x0 - 1000 * (-b))
            y2 = int(y0 - 1000 * (a))

            cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

# Draw the detected circles (center coordinates and radius must be integers)
def draw_circles(img, circles):
    for c in circles[0, :]:
        cv2.circle(img, (int(c[0]), int(c[1])), int(c[2]), (255, 0, 255), 3)

# Different weights are added to the images to give a feeling of blending
def blend_images(image, final_image, alpha=0.7, beta=1., gamma=0.):
    return cv2.addWeighted(final_image, alpha, image, beta, gamma)

image = cv2.imread("Tic_tac_toe.png")  # load the image (OpenCV loads it as BGR)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurredImage = cv2.GaussianBlur(gray_image, (5, 5), 0)
edgeImage = cv2.Canny(blurredImage, 50, 120)

# Detect points that form a line
dis_reso = 1  # distance resolution in pixels of the Hough grid
theta = np.pi / 180  # angular resolution in radians of the Hough grid
threshold = 170  # minimum number of votes

houghLines = cv2.HoughLines(edgeImage, dis_reso, theta, threshold)
circles = cv2.HoughCircles(blurredImage, method=cv2.HOUGH_GRADIENT, dp=0.7, minDist=12, param1=70, param2=80)

houghImage = np.zeros_like(image)  # create an empty image

if houghLines is not None:
    drawhoughLinesOnImage(houghImage, houghLines)  # draw the lines on the empty image
if circles is not None:
    draw_circles(houghImage, circles)  # draw the circles on the same image

originalImageWithHough = blend_images(houghImage, image)  # blend the two images together
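To inspect the result, we can simply display the blended image. This line is not in the original snippet; it uses the cv2_imshow helper imported above, so outside of Colab cv2.imshow() and cv2.waitKey() would be used instead.

cv2_imshow(originalImageWithHough)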

You can find the full code for detecting lines and circles with the Hough transform in our GitHub repository.

3. Detecting contours

Now, let’s continue and see how to detect more complex shapes like contours in our image. First, let’s import the necessary libraries and load the input image.

import numpy as np
import matplotlib.pyplot as plt
import cv2
from google.colab.patches import cv2_imshow
img = cv2.imread("Signs.jpg")
cv2_imshow(img)

An important step is to create a copy of our image, img_contour. We need to do that because drawing may modify the original image in the process, so we create a clone of the image if we plan to use the original later. We will later pass this clone to the function cv2.drawContours(), which will be used to draw the contours.

The next step is to blur the image using the cv2.GaussianBlur() function and to convert it into grayscale. Then, we will use this grayscale image to obtain a binary image.

img_contour = img.copy()

img_blur = cv2.GaussianBlur(img, (7, 7), 1)
img_gray = cv2.cvtColor(img_blur, cv2.COLOR_BGR2GRAY)

As we already mentioned, to analyze and understand the contours of a shape we use a binary image as input. To create a binary image, we are going to use the Canny edge detector. The function cv2.Canny() takes three parameters: our gray image, and the two hysteresis thresholds minVal and maxVal. You can find a more detailed explanation of the Canny edge detector at this link.

img_canny = cv2.Canny(img_gray, 200, 400)
cv2_imshow(img_canny)

Furthermore, we need to dilate the image in order to emphasize the edges. For that, we will use the function cv2.dilate(). You can find more information about how to apply dilation to an image in our post Morphological transformations with OpenCV in Python.

kernel = np.ones((3, 3), np.uint8)
img_dilated = cv2.dilate(img_canny, kernel, iterations=1)

Now, let's take a look at our binary image.

cv2_imshow(img_dilated)

We will continue by organizing our code within the function get_contours(). This function takes two input parameters: the source image and the clone image that we created before.

Then, we are ready to extract the contours. OpenCV provides several methods for that. In our code, we will use the function cv2.findContours(). This function requires three parameters. The first is the source image. The second argument is the contour retrieval mode, which determines the hierarchy between contours. To better understand this hierarchy, imagine a case where some shapes are located inside other shapes. In such a case, we call the outer shape the parent and the inner shape the child. To determine the type of hierarchy between contours we can pass four different flags.

  • RETR_LIST – retrieves all the contours, but doesn't create any parent-child relationships (they all belong to the same hierarchy level)
  • RETR_EXTERNAL – returns only the extreme outer contours (the parents). All child contours are left out
  • RETR_CCOMP – retrieves all the contours and arranges them into a 2-level hierarchy. External contours are placed in hierarchy level 1, and the contours of holes inside the objects are placed in level 2
  • RETR_TREE – retrieves all the contours and creates a full family hierarchy list

The third argument is the contour approximation method. Using this argument we can specify how many contour points we want to store. If we pass cv2.CHAIN_APPROX_NONE, all contour points are stored. On the other hand, cv2.CHAIN_APPROX_SIMPLE compresses the contours to save space; the small sketch below illustrates the difference.
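Here is a minimal sketch of that difference on a synthetic binary image (a filled white rectangle made up just for this comparison):

import cv2
import numpy as np

# Synthetic binary image: a filled white rectangle on a black background
binary = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(binary, (50, 50), (150, 150), 255, -1)

cnts_none, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts_simple, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# CHAIN_APPROX_NONE keeps every boundary point,
# CHAIN_APPROX_SIMPLE keeps only the four corner points of the rectangle
print(len(cnts_none[0]), len(cnts_simple[0]))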

def get_contours(img, img_contour):
  contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

  for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 9000:
      # Draw only the contours whose area is large enough
      cv2.drawContours(img_contour, [cnt], -1, (255, 0, 255), 1)

      # Find the perimeter (arc length) of the closed contour
      param = cv2.arcLength(cnt, True)

      # Approximate the contour with a simpler polygon
      approx = cv2.approxPolyDP(cnt, 0.01 * param, True)
      shape, x, y, w, h = find_shape(approx)
      cv2.putText(img_contour, shape, (x+78, y+200), cv2.FONT_HERSHEY_COMPLEX, .7, (255, 0, 255), 1)

  return approx, param, img_contour, contours, cnt

To better understand what a contour is, let's create a black image and draw a contour on it. First, we will extract the contours from the following map of Africa.

img = cv2.imread("map.png")
cv2_imshow(img)
img_contour = img.copy()

img_blur = cv2.GaussianBlur(img, (7, 7), 1)
img_gray = cv2.cvtColor(img_blur, cv2.COLOR_BGR2GRAY)

img_canny = cv2.Canny(img_gray, 50, 190)

kernel = np.ones((2, 2), np.uint8)
img_dilated = cv2.dilate(img_canny, kernel, iterations=1)

With the function get_contours() we will detect the contours. We can also print the coordinates of the contour points.

def get_contours(img, img_contour):
  contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

  # Take the first detected contour and print its points
  cnt = contours[0]
  print("Contour: ")
  print(cnt)
  cv2.drawContours(img_contour, [cnt], -1, (255, 0, 255), 2)
Contour: 
[[[408  20]]

 [[407  21]]

 [[404  21]]

 ...

 [[419  21]]

 [[415  21]]

 [[414  20]]]

Here, we will create a black image with the same dimensions as the original image and then we will draw the contour on it.

black = np.zeros((img.shape[0], img.shape[1]))
get_contours(img_dilated, black)
cv2_imshow(black)

Now, let's go back to our code. Once we have extracted the contours, we need to find out which ones we want to process. So, we calculate the area of every contour with the function cv2.contourArea(); if needed, we can also sort the contours from largest to smallest by this area, as in the short sketch below. We create a for loop and iterate through all detected contours. It is important to set an experimentally chosen threshold for the contour area. For example, let's say that we have the contour of a square with dimensions \(100\times100 \) pixels. The area of that square will be 10,000 pixels. Then we can set the threshold to 9000. That means that all contours with an area larger than 9000 pixels (like our square) will be drawn, and all contours with a smaller area will not. To draw the contours we use the function cv2.drawContours().
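A minimal sketch of sorting contours by area (this snippet is only illustrative and assumes the binary image img_dilated prepared earlier):

# Order the detected contours from largest to smallest area
contours, _ = cv2.findContours(img_dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours_sorted = sorted(contours, key=cv2.contourArea, reverse=True)
for cnt in contours_sorted:
    print(cv2.contourArea(cnt))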

To reduce the noise of the detected contours we approximate their curves using the cv2.arcLength() and cv2.approxPolyDP() functions. The first function calculates the length of a curve. We pass the parameter cnt, which represents the points of our contour, and the parameter True, which means that the contour is closed. The second function approximates the contour with a simpler polygonal curve. It is important to note that cv2.approxPolyDP() takes a precision parameter: the maximum allowed distance between the original contour and its approximation. Here we set it to 1% of the contour's arc length. Finally, with the function cv2.putText() we write the name of the detected shape next to the contour.

After the contours are drawn, we create another function called find_shape() that determines the shape of the contour. Let's see how we can do that. As we already explained, cv2.approxPolyDP() returns a simplified polygonal curve, so based on the number of its vertices we can determine the shape. For example, if the approximated curve has 3 vertices, we can say that the shape is a triangle. Also, we need the coordinates of the contour in order to write text next to the object. For that, we use the cv2.boundingRect() function, which returns the position and size of the bounding box around our region of interest.

def find_shape(approx):
  # Bounding box of the approximated contour
  x, y, w, h = cv2.boundingRect(approx)
  if len(approx) == 3:
    s = "Triangle"

  elif len(approx) == 4:
    # Distinguish a square from a rectangle by the aspect ratio
    aspect_ratio = w / float(h)
    if 0.95 <= aspect_ratio <= 1.05:
      s = "Square"
    else:
      s = "Rectangle"

  elif len(approx) == 5:
    s = "Pentagon"

  elif len(approx) == 8:
    s = "Octagon"

  else:
    s = "Circle"

  return s, x, y, w, h

Finally, we can use our get_contours() function to display the detected contours.

get_contours(img_dilated, img_contour)
cv2_imshow(img_contour)

Contour approximation

Now, let’s explain the function cv2.approxPolyDP() in more detail. To visualize the polygonal curve of an object we will again use the map of Africa.

!wget http://4.bp.blogspot.com/-wkV5TsNjNjc/UlwWE5j7AsI/AAAAAAAAAIg/VumvxpCNTJw/s1600/Africa-outline-map.jpg -O map.png
img = cv2.imread("map.png")
cv2_imshow(img)
OpenCV contours approximation

With the following code, we will detect the contours in this image. We create a for loop in which we iterate over different values of the parameter of the cv2.approxPolyDP() function that determines the level of approximation precision.

img = cv2.imread("map.png")
img_contour = img.copy()

img_blur = cv2.GaussianBlur(img, (7, 7), 1)
img_gray = cv2.cvtColor(img_blur, cv2.COLOR_BGR2GRAY)

img_canny = cv2.Canny(img_gray, 50, 190)

kernel = np.ones((2, 2), np.uint8)
img_dilated = cv2.dilate(img_canny, kernel, iterations=1)

def get_contours(img, img_contour):
  contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  fig = plt.figure(figsize=(12, 8))
  cnt = contours[0]

  for i in range(50):
    img_c = img_contour.copy()
    approx = cv2.approxPolyDP(cnt, i * 1, True)
    cv2.drawContours(img_c, [approx], -1, (255, 0, 255), 2)
    plt.imshow(cv2.cvtColor(img_c, cv2.COLOR_BGR2RGB))  # matplotlib expects RGB while OpenCV uses BGR
    plt.axis('off')
    plt.savefig(f"image_0{i}.png")

  return approx, img_contour, contours, cnt

get_contours(img_dilated, img_contour);

Next, we create a GIF animation that shows the different approximations of the polygonal curve.

import imageio
images = []

for i in range(50):
    data = imageio.imread(f'image_0{i}.png')
    images.append(data)

imageio.mimwrite("animation.gif", images, format='.gif', fps=3)
OpenCV contours approximation

As you can see, in the first iterations we get a large number of line segments. If we want to extract a simpler shape, we don't need all these lines, so we can use the function cv2.approxPolyDP() to create a simpler shape for further processing. Note, however, that so far we have mostly been dealing with convex objects – objects in which no line segment between two points on the boundary ever goes outside the polygon. Let's now see how we can handle an object that is not convex. For this example, we will use the same image of Africa, and we will use the convex hull of its points to locate the boundary of the object in the scene.

Detecting contours of non-convex objects

First, let's remind ourselves what convex and non-convex objects are. A convex object is a polygon in which a line drawn between any two points on the boundary never goes outside the polygon. In such an object, all interior angles are less than or equal to 180 degrees. On the other hand, in a non-convex object we can draw a line between two points that goes outside the polygon. Such an object has one or more interior angles greater than 180 degrees.
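In code, we can check whether a contour is convex with cv2.isContourConvex(). Here is a minimal sketch on a synthetic L-shaped polygon (the points are made up just for illustration); it also computes the convex hull, which is explained next:

import cv2
import numpy as np

# Synthetic non-convex (L-shaped) polygon given as a contour of 6 points
pts = np.array([[10, 10], [110, 10], [110, 60], [60, 60],
                [60, 110], [10, 110]], dtype=np.int32).reshape(-1, 1, 2)

print(cv2.isContourConvex(pts))   # False – the L-shape is not convex
hull = cv2.convexHull(pts)
print(cv2.isContourConvex(hull))  # True – its convex hull is convex

canvas = np.zeros((130, 130, 3), dtype=np.uint8)
cv2.polylines(canvas, [pts], True, (255, 0, 255), 2)   # original boundary
cv2.drawContours(canvas, [hull], -1, (0, 255, 0), 2)   # convex hull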

To detect the contours of such an object, sometimes we need to find the convex hull of that shape. The convex hull is the set of pixels included in the smallest convex polygon that surrounds the boundary of a given shape. In the case of a convex object, the convex hull is simply its boundary. To better visualize this, take a look at the following image.

So, to extract the convex hull we will first load the image and create a clone of it. We need to blur the image and convert it into grayscale. As before, we will apply Canny edge detection and image dilation. Our output image should look like this.

img = cv2.imread("map.png")
img_contour = img.copy()

img_blur = cv2.GaussianBlur(img, (7, 7), 1)
img_gray = cv2.cvtColor(img_blur, cv2.COLOR_BGR2GRAY)

img_canny = cv2.Canny(img_gray, 50, 190)

kernel = np.ones((2, 2), np.uint8)
img_dilated = cv2.dilate(img_canny, kernel, iterations=1)
cv2_imshow(img_dilated)

Again, we will use the OpenCV function cv2.findContours() to detect the contours. Then we will apply the cv2.convexHull() function to every detected contour (in our case it is just one contour). As the last step, we will draw the convex hull with the function cv2.drawContours().

Another useful technique that we need to learn is how to compute the center (centroid) of a contour. For simple convex objects like a circle, a square, or a triangle it is easy to find the center. However, it is not such an easy task to find the centroid of a complex non-convex shape. To do that, we first need to define the centroid of a shape: it is the arithmetic mean of all the points in the shape. For example, if a shape consists of \(n \) distinct points \(\mathbf{x}_{i} \), its centroid is given by the following formula:

$$ c=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} $$

We can calculate the centroid using image moments. An image moment is a particular weighted average of image pixel intensities. From these moments, we can extract useful data like the area, the centroid, etc. Furthermore, one of the simplest ways to compare two contours is to compute their contour moments. We can define the moment of a contour as:

$$ m_{p, q}=\sum_{i=1}^{n} I(x, y) x^{p} y^{q} $$

Here \(p \) is the x-order and \(q \) is the y-order, where order means the power to which the corresponding component is taken in the summation. The summation is over all of the \(n \) pixels of the contour boundary. It then follows immediately that if \(p \) and \(q \) are both equal to 0, then the \(m_{00} \) moment is just the length in pixels of the contour.
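As an illustration of comparing contours through their moments, here is a small sketch (with synthetic square images that are not part of the original post) that computes the Hu moment invariants of the same shape at two different scales; cv2.matchShapes() compares exactly these invariants:

import cv2
import numpy as np

# Two synthetic binary images with the same shape at different scales
small = np.zeros((200, 200), dtype=np.uint8)
large = np.zeros((400, 400), dtype=np.uint8)
cv2.rectangle(small, (50, 50), (150, 150), 255, -1)
cv2.rectangle(large, (50, 50), (350, 350), 255, -1)

cnt_small, _ = cv2.findContours(small, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt_large, _ = cv2.findContours(large, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Hu moments are derived from the contour moments and are (almost)
# invariant to scale, rotation and translation, so the two printed
# vectors should be nearly identical
print(cv2.HuMoments(cv2.moments(cnt_small[0])).ravel())
print(cv2.HuMoments(cv2.moments(cnt_large[0])).ravel())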

To find the centroid of the shape, we first convert the image into binary format, and then, after calculating the moments, we find its center. The centroid is given by these relations:

$$ C_{x}=\frac{M_{10}}{M_{00}} $$

$$ C_{y}=\frac{M_{01}}{M_{00}} $$

This can be implemented in the Python code using the OpenCV function cv2.moments().

def get_contours(img, img_contour):
  contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

  # Compute the convex hull of the first detected contour and draw it
  cnt = contours[0]
  hull = cv2.convexHull(cnt)

  cv2.drawContours(img_contour, [hull], -1, (255, 0, 255), 2)

  # Centroid of the contour computed from its moments
  M = cv2.moments(cnt)

  cx = int(M['m10']/M['m00'])
  cy = int(M['m01']/M['m00'])

  cv2.circle(img_contour, (cx, cy), 20, (255,0,255))

get_contours(img_canny, img_contour)
cv2_imshow(img_contour)

Summary

In this post, we have learned how to use different OpenCV functions to extract the contours in an image. We have learned how to detect shapes like lines and circles with the Hough transform, and we explained how to approximate detected contours. In the next post, we will talk about image segmentation.

[1] Find the Center of a Blob (Centroid) using OpenCV (C++/Python)

[2] Bradski, Gary, and Adrian Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., 2008.