Image Keypoint Descriptors and Matching

Posted on August 17, 2017




Extracting keypoints from images, typically corner points, is usually the first step for geometric methods in computer vision. A typical workflow is: keypoints are detected in the images (SIFT, SURF, ORB, etc.); descriptors are computed at these keypoints (SURF, ORB, etc.), for example a 32-byte binary vector per keypoint for ORB; a nearest-neighbour search is performed to get candidate matches; Lowe's ratio test is applied to discard ambiguous matches; finally, an essential matrix is estimated to keep only the geometrically consistent matches.
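As a rough illustration, a minimal Python sketch of this pipeline with OpenCV might look as follows; the image paths, ORB parameters, and camera intrinsics below are placeholder assumptions, not values from my repo:

```python
import numpy as np
import cv2

# Load the two images to match (paths are placeholders).
img1 = cv2.imread('img1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('img2.jpg', cv2.IMREAD_GRAYSCALE)

# 1. Detect keypoints and compute binary descriptors with ORB.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Nearest-neighbour search: for each descriptor in image 1,
#    find its two closest descriptors in image 2 (Hamming distance for ORB).
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_matches = bf.knnMatch(des1, des2, k=2)

# 3. Lowe's ratio test: keep a match only if the best neighbour is
#    clearly closer than the second best.
good = []
for pair in knn_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# 4. Estimate the essential matrix with RANSAC.
#    The focal length and principal point are assumed values;
#    substitute your own camera calibration.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
E, mask = cv2.findEssentialMat(pts1, pts2, focal=700.0, pp=(320.0, 240.0),
                               method=cv2.RANSAC, prob=0.999, threshold=1.0)

print('Matches after ratio test:', len(good))
print('Inliers after RANSAC:', int(np.count_nonzero(mask)))
```

The same steps carry over to other detector/descriptor combinations; for float descriptors such as SURF you would switch the matcher norm to L2.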

Although all of this is fairly standard, with code readily available on the internet, I have compiled code snippets for the workflow described above. My snippets use OpenCV 3.0+, and code is available for both Python and C++.

I plan to expand this repo to include samples of other everyday geometric vision tasks. Any help is appreciated!

[GitHub]
