[GitHub]

Extracting keypoints from an image (typically corner-like points) is usually the first step for geometric methods in computer vision. A typical workflow looks like this: keypoints are detected in each image (SIFT, SURF, ORB, etc.), and a descriptor is computed at every keypoint (SIFT, SURF, ORB, etc.), for example a 32-byte binary vector for ORB or a 128-dimensional float vector for SIFT. A nearest-neighbour search between the two descriptor sets then yields candidate matches, Lowe's ratio test is applied to discard ambiguous matches, and finally an essential matrix is estimated so that only geometrically consistent matches are kept.
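To make the first four steps concrete, here is a minimal Python sketch using OpenCV 3.x with ORB; the image paths, the number of features, and the 0.75 ratio threshold are placeholder assumptions, not values from the repo:

```python
import cv2

# Load the two images to match (paths are placeholders for illustration)
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# 1-2. Detect keypoints and compute binary descriptors with ORB
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 3. Nearest-neighbour search: for each descriptor in image 1, find its
#    two closest descriptors in image 2 (Hamming distance suits ORB)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_matches = bf.knnMatch(des1, des2, k=2)

# 4. Lowe's ratio test: keep a match only if the best neighbour is
#    clearly better than the second best
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]
print("kept %d of %d candidate matches" % (len(good), len(knn_matches)))
```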

Although all of this is fairly standard and code is easily available on the internet, I have compiled code snippets for the workflow described above. My snippets use OpenCV 3.0+, and code is available in both Python and C++.
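For the last step, OpenCV 3.0+ provides cv2.findEssentialMat. Here is a sketch continuing from the matches in the snippet above (so kp1, kp2, and good are already defined); the intrinsic matrix K is an assumed placeholder and would come from your own camera calibration:

```python
import numpy as np

# Assumed pinhole intrinsics (focal length and principal point are placeholders)
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Pixel coordinates of the matches that survived the ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 5. Estimate the essential matrix with RANSAC; the inlier mask marks
#    the matches that are geometrically consistent
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)

# Optionally recover the relative camera rotation and translation from E
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
```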

I plan to expand this repo to include samples of common geometric vision tasks. Any help is appreciated!

