When I first became interested in computer vision and image search engines over eight years ago, I had no idea where to start. I didn’t know which language to use, I didn’t know which libraries to install, and the libraries I found I didn’t know how to use. I WISH there had been a list like this, detailing the best libraries to use for image processing, computer vision, and image search engines.

This list is by no means complete or exhaustive. It’s just my favorite Python libraries that I use each and every day for computer vision and image search engines. If you think I’ve left an important one out, please leave me a note in the comments or send me an email.

For Starters:

1. NumPy

NumPy is a library for the Python programming language that (among other things) provides support for large, multi-dimensional arrays. Why is that important? Using NumPy, we can express images as multi-dimensional arrays. For example, let’s say we downloaded our favorite grumpy cat image from Google and we now want to represent it as an array using NumPy. This image is 452×589 pixels in the RGB color space, where each pixel consists of three components: a red value, a green value, and a blue value. Each pixel value p is in the range 0 <= p <= 255. We then have a matrix of 452×589 such pixels. Using NumPy, we can conveniently store this image in a (452, 589, 3) array, with 452 rows, 589 columns, and 3 values per pixel, one for each channel in the RGB colorspace. Representing images as NumPy arrays is not only computationally and resource efficient, but many other image processing and machine learning libraries use NumPy array representations as well. Furthermore, by using NumPy’s built-in high-level mathematical functions, we can quickly perform numerical analysis on an image.
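To make this concrete, here is a minimal sketch of the grumpy cat example. Since we can’t actually download the image here, we stand in a synthetic 452×589 RGB image of random pixel values:

```python
import numpy as np

# Simulate a 452 x 589 RGB image (rows x columns x channels)
# with pixel values in the range [0, 255].
image = np.random.randint(0, 256, size=(452, 589, 3), dtype=np.uint8)

print(image.shape)  # (452, 589, 3)
print(image.dtype)  # uint8

# NumPy's built-in math functions make quick numerical analysis easy,
# e.g. the mean intensity of each of the R, G, and B channels:
print(image.mean(axis=(0, 1)))
```

Any image loaded by the libraries below ends up in exactly this kind of array, which is why NumPy sits at the bottom of the whole stack.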

2. SciPy

Going hand-in-hand with NumPy, we also have SciPy. SciPy adds further support for scientific and technical computing. One of my favorite sub-packages of SciPy is the spatial package, which includes a vast collection of distance functions and a kd-tree implementation. Why are distance functions important? When we “describe” an image, we perform feature extraction. Normally, after feature extraction an image is represented by a vector (a list) of numbers. To compare two images, we simply compute the distance between their feature vectors, using a distance function such as the Euclidean distance. In the case of the Euclidean distance, the smaller the distance, the more “similar” the two images are.
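A quick sketch of that idea, using two made-up feature vectors in place of real extracted features:

```python
import numpy as np
from scipy.spatial import distance

# Two hypothetical feature vectors, as might be produced by
# feature extraction on two different images.
features_a = np.array([0.25, 0.50, 0.75, 1.00])
features_b = np.array([0.30, 0.45, 0.80, 0.90])

# The smaller the Euclidean distance, the more "similar" the images.
d = distance.euclidean(features_a, features_b)
print(d)
```

Swapping in a different metric is a one-line change (scipy.spatial.distance also ships cityblock, chebyshev, cosine, and many more).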

3. matplotlib

Simply put, matplotlib is a plotting library. If you’ve ever used MATLAB before, you’ll probably feel very comfortable in the matplotlib environment. When analyzing images, we’ll make use of matplotlib constantly: whether we’re plotting the overall accuracy of search systems or simply viewing an image itself, matplotlib is a great tool to have in your toolbox.
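Viewing an image takes only a couple of lines. A minimal sketch, using a synthetic image and the non-interactive Agg backend (so it runs anywhere, even without a display; the filename is just an example):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; omit this when working interactively
import matplotlib.pyplot as plt
import numpy as np

# A synthetic "image" of random pixels in place of a loaded photo.
image = np.random.randint(0, 256, size=(100, 150, 3), dtype=np.uint8)

# imshow treats a (rows, cols, 3) uint8 array as an RGB image.
fig, ax = plt.subplots()
ax.imshow(image)
ax.axis("off")
fig.savefig("image_preview.png")
```

In an interactive session you would call plt.show() instead of savefig.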

4. PIL and Pillow

I don’t have anything against PIL or Pillow, don’t get me wrong, they are very good at what they do: simple image manipulations, such as resizing, rotation, etc. Overall, I just find the syntax clunky. That being said, many non-scientific Python projects utilize PIL or Pillow. For example, the Python web framework Django uses PIL to represent an ImageField in a database. PIL and Pillow have their place if you need to do some quick and dirty image manipulations, but if you’re serious about learning about image processing, computer vision, and image search engines, I would highly recommend that you spend your time playing with OpenCV and SimpleCV instead.
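Those quick and dirty manipulations look like this. A small sketch using a solid-color image created in memory instead of one loaded from disk:

```python
from PIL import Image

# Create a 452 x 589 solid-color image in place of Image.open("cat.jpg").
image = Image.new("RGB", (452, 589), color=(128, 64, 32))

# The simple manipulations PIL/Pillow is good at:
resized = image.resize((226, 294))       # shrink to half size
rotated = image.rotate(90, expand=True)  # rotate 90 degrees, growing the canvas

print(resized.size)  # (226, 294)
print(rotated.size)  # (589, 452)
```

Note that Pillow uses (width, height) ordering, the reverse of NumPy’s (rows, columns), which is one of the small frictions you hit when mixing it with the scientific stack.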

My Go-To’s:

5. OpenCV

If NumPy’s main goal is large, efficient, multi-dimensional array representations, then, by far, the main goal of OpenCV is real-time image processing. This library has been around since 1999, but it wasn’t until the 2.0 release in 2009 that we saw the incredible NumPy support. The library itself is written in C/C++, but Python bindings are provided when running the installer. OpenCV is hands down my favorite computer vision library, but it does have a learning curve. Be prepared to spend a fair amount of time learning the intricacies of the library and browsing the docs (which have gotten substantially better now that NumPy support has been added). If you are still testing the computer vision waters, you might want to check out the SimpleCV library mentioned below, which has a substantially smaller learning curve.

6. SimpleCV

The goal of SimpleCV is to get you involved in image processing and computer vision as soon as possible. And they do a great job at it. The learning curve is substantially smaller than that of OpenCV, and as their tagline says, “it’s computer vision made easy”. That all said, because the learning curve is smaller, you don’t have access to as many of the raw, powerful techniques supplied by OpenCV. If you’re just testing the waters, definitely try this library out. However, as fair warning, over 95% of my code examples on this blog will be in OpenCV. Don’t worry though, I’m absolutely meticulous when it comes to documentation and I’ll provide you with complete, yet concise, explanations of the code.

7. Mahotas

Mahotas, like OpenCV and SimpleCV, relies on NumPy arrays. Much of the functionality implemented in Mahotas can be found in OpenCV and/or SimpleCV, but in some cases the Mahotas interface is just easier to use, especially when it comes to its features package.

8. Scikit-learn

Alright, you got me, scikit-learn isn’t an image processing or computer vision library — it’s a machine learning library. That said, you can’t have advanced computer vision techniques without some sort of machine learning, whether it be clustering, vector quantization, classification models, etc. Scikit-learn also includes a handful of image feature extraction functions.
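Here is a sketch of the kind of vector quantization we’ll lean on: k-means clustering over a set of (hypothetical, randomly generated) image feature vectors, the core of a bag-of-visual-words pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features: 100 images, each described by a 32-dim vector.
rng = np.random.default_rng(42)
features = rng.random((100, 32))

# Quantize the feature space into 5 clusters ("visual words").
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(features)

print(kmeans.labels_.shape)           # one cluster assignment per image
print(kmeans.cluster_centers_.shape)  # (5, 32)
```

Once fit, kmeans.predict() maps any new feature vector to its nearest cluster, which is exactly how codebooks get built for image search.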

9. ilastik

I’ll be honest. I’ve never used ilastik. But through my experiences at computer vision conferences, I’ve met a fair number of people who do, so I felt compelled to put it in this list. ilastik is mainly for image segmentation and classification and is especially geared towards the scientific community.

BONUS:

I couldn’t stop at just nine. Here are three more bonus libraries that I use all the time.

10. pprocess (or your favorite multiprocessing library)

Extracting features from images is an inherently parallelizable task. You can reduce the amount of time it takes to extract features from an entire dataset by using a multiprocessing library. My favorite is pprocess, since my needs for it are simple, but you can use your favorite.

11. h5py

The h5py library is the de facto standard in Python for storing large numerical datasets. The best part? It provides support for NumPy arrays. So, if you have a large dataset represented as a NumPy array that won’t fit into memory, or if you want efficient, persistent storage of NumPy arrays, then h5py is the way to go. One of my favorite techniques is to store my extracted features in an h5py dataset and then apply scikit-learn’s MiniBatchKMeans to cluster the features. The entire dataset never has to be loaded into memory at once, and the memory footprint is extremely small, even for thousands of feature vectors.
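A sketch of that technique, using randomly generated features and hypothetical names (features.h5, the "features" dataset key, and the cluster/batch sizes are all just illustrative choices):

```python
import numpy as np
import h5py
from sklearn.cluster import MiniBatchKMeans

# Hypothetical extracted features: 1000 images x 32-dim vectors.
rng = np.random.default_rng(1)
features = rng.random((1000, 32))

# Persist the features to disk as an HDF5 dataset.
with h5py.File("features.h5", "w") as f:
    f.create_dataset("features", data=features)

# Cluster straight off disk: read one slice at a time and feed it to
# MiniBatchKMeans via partial_fit, so the full dataset is never held
# in memory at once.
mbk = MiniBatchKMeans(n_clusters=8, batch_size=256, random_state=1)
with h5py.File("features.h5", "r") as f:
    dataset = f["features"]
    for start in range(0, dataset.shape[0], 256):
        mbk.partial_fit(dataset[start:start + 256])

print(mbk.cluster_centers_.shape)  # (8, 32)
```

Slicing an h5py dataset reads only that slice from disk, which is what keeps the memory footprint small no matter how many feature vectors you store.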

12. scikit-image

On the first draft of this blog post, I completely forgot about scikit-image. Silly me. Anyway, scikit-image is fantastic, but you have to know what you are doing to effectively use this library — and I don’t mean this in a “there is a steep learning curve” type of way. The learning curve is actually quite low, especially if you check out their gallery. The algorithms included in scikit-image (I would argue) track the state-of-the-art in computer vision more closely. New algorithms straight from academic papers can be found in scikit-image, but in order to (effectively) use these algorithms, you need to have developed some rigor and understanding in the computer vision field. If you already have some experience in computer vision and image processing, definitely check out scikit-image; otherwise, I would continue working with OpenCV and SimpleCV to start.
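As a taste of how ready-to-use those algorithms are, here is a sketch of classic Sobel edge detection on a synthetic image with a single vertical edge:

```python
import numpy as np
from skimage import filters

# A synthetic grayscale image: dark on the left, bright on the right,
# with a vertical edge down the middle.
image = np.zeros((64, 64))
image[:, 32:] = 1.0

# One call runs the Sobel edge detector over the whole image.
edges = filters.sobel(image)

print(edges.shape)  # same shape as the input
```

The filter responds strongly only along the columns adjacent to the step, which you can verify by comparing edges near column 32 against the flat regions.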

In Summary:

NumPy provides you with a way to represent images as multi-dimensional arrays. Many other image processing, computer vision, and machine learning libraries utilize NumPy, so it’s paramount to have it (and SciPy) installed. While PIL and Pillow are great for simple image processing tasks, if you are serious about testing the computer vision waters, your time is better spent playing with SimpleCV. Once you’ve convinced yourself that computer vision is awesome, install OpenCV and re-learn what you did in SimpleCV. Over 95% of the code examples I will show you on this blog will be in OpenCV. Finally, install scikit-learn and h5py. You won’t need them just yet. But you’ll love them once I show you what they can do.