A master wordsmith can tell a heartbreaking story in just a few words.

For sale: baby shoes, never worn.

A great artist can do so much with so little! The same holds true for great programmers and engineers. They always seem to eke out that extra ounce of performance from their machines. This is what often differentiates a great product from a mediocre one, and an exceptional programmer from a run-of-the-mill coder. Such mastery appears magical, but dig a bit deeper and you will notice that the knowledge was available to everyone. Few chose to use it.

In this post we will unlock the easiest, and probably the most important, performance trick you can use in OpenCV 3. It is called the Transparent API (T-API or TAPI).

What is the Transparent API (T-API or TAPI)?

The Transparent API is an easy way to seamlessly add hardware acceleration to your OpenCV code with minimal change to existing code. You can make your code almost an order of magnitude faster by making a laughably small change.

Using the Transparent API is super easy. You can get a significant performance boost by changing ONE keyword.

Don’t trust me? Here is an example of standard OpenCV code that does not utilize the Transparent API. It reads an image, converts it to grayscale, applies Gaussian blur, and finally does Canny edge detection.

C++

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    Mat img, gray;
    img = imread("image.jpg", IMREAD_COLOR);

    cvtColor(img, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(7, 7), 1.5);
    Canny(gray, gray, 0, 50);

    imshow("edges", gray);
    waitKey();

    return 0;
}

Python

import cv2

img = cv2.imread("image.jpg", cv2.IMREAD_COLOR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 1.5)
gray = cv2.Canny(gray, 0, 50)

cv2.imshow("edges", gray)
cv2.waitKey()

Let’s see how the same code looks with Transparent API.

OpenCV Transparent API example

I have modified the code above slightly to utilize the Transparent API. The difference between the standard OpenCV code and the one utilizing TAPI is highlighted below. Notice that all we had to do was copy the Mat image to a UMat (Unified Matrix) and use standard OpenCV functions thereafter.

C++

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    UMat img, gray;
    imread("image.jpg", IMREAD_COLOR).copyTo(img);

    cvtColor(img, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(7, 7), 1.5);
    Canny(gray, gray, 0, 50);

    imshow("edges", gray);
    waitKey();

    return 0;
}

Python

import cv2

img = cv2.imread("image.jpg", cv2.IMREAD_COLOR)
imgUMat = cv2.UMat(img)

gray = cv2.cvtColor(imgUMat, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 1.5)
gray = cv2.Canny(gray, 0, 50)

cv2.imshow("edges", gray)
cv2.waitKey()

On my Macbook Pro this small change makes the code run 5x faster.

Note: It makes sense to use the Transparent API only when you are doing a few expensive operations on the image. Otherwise the overhead of moving the image to the GPU dominates the timing.

Let us quickly summarize the steps needed to use the Transparent API.

1. Convert Mat to UMat. There are a couple of ways of doing this in C++.

C++

Mat mat = imread("image.jpg", IMREAD_COLOR);

// Copy Mat to UMat.
UMat umat;
mat.copyTo(umat);

Alternatively, you can use getUMat.

Mat mat = imread("image.jpg", IMREAD_COLOR);

// Get UMat from Mat.
UMat umat = mat.getUMat( flag );

Python

mat = cv2.imread("image.jpg", cv2.IMREAD_COLOR)
umat = cv2.UMat(mat)

Here, flag can take the values ACCESS_READ, ACCESS_WRITE, ACCESS_RW, and ACCESS_FAST. At this point it is not clear what ACCESS_FAST does, but I will update this post once I figure it out.

2. Use the standard OpenCV functions that you would use with Mat.

3. If necessary, convert UMat back to Mat. Most of the time you do not need to do this. Here is how you do it in case you need to.

C++

Mat mat = umat.getMat( flag );

Python

mat = umat.get()

where umat is a UMat image, and flag is the same as described above.

Now we know how to use the Transparent API. So what is under the hood that magically improves performance? The answer is OpenCL. In the section below I briefly explain OpenCL.

What is Open Computing Language (OpenCL)?

If you are reading this article on a laptop or a desktop computer, it has a graphics card (either integrated or discrete) in addition to the CPU, which in turn has multiple cores. On the other hand, if you are reading this on a cell phone or tablet, your device probably has a CPU, a GPU, and a Digital Signal Processor (DSP). So you have multiple processing units that you can use. The fancy industry term for such a computer or mobile device is a “heterogeneous platform”.

OpenCL is a framework for writing programs that execute on these heterogeneous platforms. The developers of an OpenCL library utilize all OpenCL-compatible devices (CPUs, GPUs, DSPs, FPGAs, etc.) they find on a computer / device and assign the right tasks to the right processor. Keep in mind that as a user of the OpenCV library you are not developing any OpenCL library. In fact, you are not even a user of the OpenCL library, because all the details are hidden behind the Transparent API.

What is the difference between the OCL module and the Transparent API?

Short answer: the OCL module is dead. Long live the Transparent API!

OpenCL was supported in OpenCV 2.4 via the OCL module: a set of functions defined under the ocl namespace that you could use to call the underlying OpenCL code. Below is an example that reads an image and uses OpenCL to convert it to grayscale.

// Example of using OpenCL in OpenCV 2.4.
// In OpenCV 3 the OCL module is gone.
// It is replaced by the much nicer Transparent API.

// Initialize OpenCL.
std::vector<ocl::Info> param;
ocl::getDevice(param, ocl::CVCL_DEVICE_TYPE_GPU);

// Read image.
Mat im = imread("image.jpg");

// Convert it to oclMat.
ocl::oclMat ocl_im(im);

// Container for OpenCL gray image.
ocl::oclMat ocl_gray;

// BGR2GRAY using OpenCL.
cv::ocl::cvtColor( ocl_im, ocl_gray, CV_BGR2GRAY );

// Container for OpenCV Mat gray image.
Mat gray;

// Convert back to OpenCV Mat.
ocl_gray.download(gray);

As you can see, it was a lot more cumbersome. With OpenCV 3, the OCL module is gone! All this complexity is hidden behind the so-called Transparent API: all you need to do is use UMat instead of Mat, and the rest of the code remains unchanged. You just need to write the code once!
