By Adam Taylor

Following on from our examination of how we can use the HLS video libraries, our next step is to understand how an image is stored and the subtle differences between OpenCV and the HLS Video Libraries.

Different types of edge detection (Original, Laplacian of Gaussian, Canny and Sobel)

The most basic of OpenCV elements is the cv::Mat class, which defines the image size in X and Y, the pixel information (e.g. the number of bits per pixel), whether the pixel data is signed or unsigned, and how many channels make up a pixel. This class forms the basis for how we store and manipulate images when we use OpenCV.

Within the HLS library there is a similar construct: the hls::Mat class. The library also provides a number of functions that convert the hls::Mat class to and from HLS streams; this is the standard interface we use when creating image-processing pipelines. One major difference between the cv::Mat and hls::Mat classes is that hls::Mat is defined as a stream of pixels, whereas cv::Mat is a block of memory. As a result, we do not have random access to pixels when using hls::Mat: pixels must be consumed in the order they arrive.
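The contrast can be sketched as follows (a hedged illustration only: it assumes the OpenCV and Vivado HLS video library headers, and that MAX_HEIGHT and MAX_WIDTH are defined elsewhere as maximum image dimensions):

```cpp
// Sketch: random access (cv::Mat) versus streaming access (hls::Mat).
#include <opencv2/core/core.hpp>
#include "hls_video.h"

void access_example(int rows, int cols) {
    // cv::Mat is a block of memory: we can read any pixel, in any order.
    cv::Mat cv_img(rows, cols, CV_8UC3);
    cv::Vec3b px = cv_img.at<cv::Vec3b>(10, 20);

    // hls::Mat is a stream of pixels: they are consumed in raster order.
    hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3> hls_img(rows, cols);
    hls::Scalar<3, unsigned char> s;
    hls_img >> s;   // pops the next pixel; there is no equivalent of .at(y, x)
}
```

This streaming behaviour is what makes hls::Mat map efficiently onto FPGA line buffers rather than external memory.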

A simple example that demonstrates how we can use these libraries is performing a Gaussian blur on an image. The filter will use AXI Streaming interfaces to input and output the image data stream.

Gaussian blurring is typically applied to an image prior to edge-detection algorithms such as Sobel or Canny, as it reduces noise within the image that would otherwise produce false edges.

The first step is to create the HLS structures we need within a header file so that both the module to be synthesised and the test bench can use them. These type definitions are:

HLS streaming interface: this makes the conversion to and from AXI streams within the test bench easier.

typedef hls::stream<ap_axiu<16,1,1,1> > AXI_STREAM;

HLS Mat types: if we are using both RGB and YUV images, we will need to define a different type for each.

typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC2> YUV_IMAGE;

typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3> RGB_IMAGE;

With the basics defined we are then in a position to generate the module we wish to synthesise and the test bench to check it is functioning.

Starting with the module we wish to synthesise: the video input and output will use the previously defined AXI_STREAM type. The size of the image in rows and columns will be supplied over an AXI-Lite interface; we can also use this interface if we want the ability to enable or disable the filter.

Implementing the function we want is very simple: we convert the input video from an AXI Stream into an hls::Mat, apply our filter, and then convert the output hls::Mat back to an AXI Stream.

HLS Function to perform the Gaussian Blur
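As a rough sketch of those three steps (the function name, fixed image sizes, interface pragmas, and the 3x3 kernel size are all assumptions; this is not the exact code from the post), the synthesised function might look like:

```cpp
// Sketch only: assumes the Vivado HLS video library (hls_video.h).
#include "hls_video.h"

#define MAX_WIDTH  1920   // assumed maximum image dimensions
#define MAX_HEIGHT 1080

typedef hls::stream<ap_axiu<16,1,1,1> > AXI_STREAM;
typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC2> YUV_IMAGE;

void gaussian_blur(AXI_STREAM& video_in, AXI_STREAM& video_out,
                   int rows, int cols)
{
#pragma HLS INTERFACE axis      port=video_in
#pragma HLS INTERFACE axis      port=video_out
#pragma HLS INTERFACE s_axilite port=rows
#pragma HLS INTERFACE s_axilite port=cols
#pragma HLS INTERFACE s_axilite port=return
#pragma HLS DATAFLOW

    YUV_IMAGE img_in(rows, cols);
    YUV_IMAGE img_out(rows, cols);

    hls::AXIvideo2Mat(video_in, img_in);        // AXI Stream -> hls::Mat
    hls::GaussianBlur<3,3>(img_in, img_out);    // apply the filter
    hls::Mat2AXIvideo(img_out, video_out);      // hls::Mat -> AXI Stream
}
```

The DATAFLOW pragma lets the three stages run concurrently as a pixel pipeline, which is why the streaming hls::Mat representation is such a natural fit here.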

Having written the code we wish to synthesise and implement in the Zynq SoC, the next thing we need to do is create a test bench so that we can check the functionality using both C and Co-Simulation before we include the core within our Vivado design.

We will look at this next week, and we’ll also see how we can combine OpenCV and the HLS Libraries in our test bench.

Code is available on GitHub, as always.

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

First Year E-Book here

First Year Hardback here

Second Year E-Book here

Second Year Hardback here

All of Adam Taylor’s MicroZed Chronicles are cataloged here.