Computing descriptors for image keypoints - SURF, BRIEF, ORB
In the previous recipes, we examined several ways of finding keypoints in an image. Basically, keypoints are just the locations of distinctive areas. But how do we tell these locations apart? This question arises in many situations, especially in video processing, when we want to track an object across a sequence of frames. This recipe covers some effective approaches to characterizing keypoint neighborhoods, in other words, to computing keypoint descriptors.
Getting ready
Before you proceed with this recipe, you need to have the OpenCV 3.0 (or greater) Python API package installed, together with the contrib modules.
How to do it...
You need to complete the following steps:
- Import the modules we need and load an image:
import cv2
import numpy as np

img = cv2.imread('../data/scenetext01.jpg', cv2.IMREAD_COLOR)
- Create a SURF feature detector and tune some of its parameters. Then, apply it to the loaded image and display the result:
surf = cv2.xfeatures2d.SURF_create()
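The original snippet is cut off at this point. The following is a minimal sketch of how this step could be completed, assuming an opencv-contrib-python build in which the non-free SURF module is enabled; the hessianThreshold value, the extended flag, and the drawing color are illustrative choices rather than values prescribed by the recipe.

import cv2

# Load the same image as in the first step (path taken from the recipe).
img = cv2.imread('../data/scenetext01.jpg', cv2.IMREAD_COLOR)

# Create a SURF detector; a high hessianThreshold keeps only strong keypoints,
# and extended=True switches from 64- to 128-element descriptors
# (both settings are illustrative, not prescribed by the recipe).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=10000, extended=True)

# Detect keypoints and compute their descriptors in one call;
# the second argument is an optional mask (None means the whole image).
keypoints, descriptors = surf.detectAndCompute(img, None)

# Draw the keypoints with their size and orientation and show the result.
show_img = cv2.drawKeypoints(img, keypoints, None, (0, 255, 0),
                             cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cv2.imshow('SURF descriptors', show_img)
cv2.waitKey()
cv2.destroyAllWindows()

Note that SURF descriptors are floating-point vectors (64 or 128 elements per keypoint), so they are normally compared with L2 distance, unlike the binary descriptors produced by BRIEF and ORB.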