If you are using the Zynq or Zynq MPSoC for image processing, you will be aware that we can use the xfOpenCV libraries. These libraries allow us to seamlessly accelerate OpenCV image processing functions by moving them from the Processing System (PS) into the Programmable Logic (PL).

Input image (left) and xfOpenCV processed output (right)

There are some use cases, however, where we may want to implement xfOpenCV functions using an HLS-based flow and not SDSoC or SDAccel. For example, if we are implementing the image processing pipeline in an FPGA and not a heterogeneous SoC.

When we implement xfOpenCV blocks as standalone HLS blocks, we want to ensure interfaces are created which allow easy integration with our image processing chain. This means we want the standalone HLS block to have AXI Video Streaming interfaces. To learn more in depth about AXI Video Streaming interfaces and HLS, check out my past Hackster projects here.

To enable AXI Streaming, the xfOpenCV libraries offer the following conversion functions:

AXIvideo2xfMat — Converts from an AXI Stream to an xf::Mat

xfMat2AXIvideo — Converts from an xf::Mat to an AXI Stream

These two functions are used for C Synthesis to provide a wrapper around the xfOpenCV function we wish to accelerate. For example:

Converting between AXI Stream and XF Mat formats.
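A wrapper along these lines might look like the sketch below. The top-level name (dilatation_axis), image dimensions, header paths, and the exact xf::dilate template arguments are assumptions based on the xfOpenCV dilation example and can vary between library versions, so check them against your copy of the library.

```cpp
// Sketch of an AXI-Stream wrapper around the xfOpenCV dilation function.
// NOTE: header paths, template parameters, and the top-level name are
// illustrative; verify them against your xfOpenCV version.
#include "hls_stream.h"
#include "ap_axi_sdata.h"
#include "common/xf_common.h"
#include "common/xf_infra.h"        // AXIvideo2xfMat / xfMat2AXIvideo
#include "imgproc/xf_dilation.hpp"

#define MAX_HEIGHT 1080
#define MAX_WIDTH  1920

typedef ap_axiu<8, 1, 1, 1> pixel_t;          // 8-bit grayscale video
typedef hls::stream<pixel_t> video_stream_t;

void dilatation_axis(video_stream_t &video_in, video_stream_t &video_out)
{
#pragma HLS INTERFACE axis port=video_in
#pragma HLS INTERFACE axis port=video_out

    xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_in(MAX_HEIGHT, MAX_WIDTH);
    xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_out(MAX_HEIGHT, MAX_WIDTH);
#pragma HLS DATAFLOW

    // AXI Stream in -> xf::Mat
    xf::AXIvideo2xfMat(video_in, img_in);

    // The accelerated xfOpenCV function
    xf::dilate<XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_in, img_out);

    // xf::Mat -> AXI Stream out
    xf::xfMat2AXIvideo(img_out, video_out);
}
```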

Of course, we also need to create test benches to stimulate and verify our accelerated functions. This requires the ability to convert between OpenCV cv::Mat objects and AXI Streams, so xfOpenCV provides the following functions:

cvMat2AXIvideoxf — Converts a cv::Mat into an AXI Stream

AXIvideo2cvMatxf — Converts an AXI Stream into a cv::Mat

Using these functions we can work with standard OpenCV image processing functions, letting us read in input images and save the manipulated output images.
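Putting those helpers together, a test bench might be sketched as follows. The file names, the wrapper name (dilatation_axis), and the exact template arguments of the conversion helpers are illustrative assumptions, not the example's actual code.

```cpp
// Sketch of a test bench using the cv::Mat <-> AXI Stream helpers.
// NOTE: file names, the wrapper name, and the helper template arguments
// are illustrative; check them against your xfOpenCV version.
#include "hls_stream.h"
#include "ap_axi_sdata.h"
#include "opencv2/opencv.hpp"
#include "common/xf_axi.h"   // cvMat2AXIvideoxf / AXIvideo2cvMatxf

void dilatation_axis(hls::stream<ap_axiu<8,1,1,1> > &video_in,
                     hls::stream<ap_axiu<8,1,1,1> > &video_out);

int main()
{
    // Read the stimulus image as grayscale
    cv::Mat src = cv::imread("test_image.png", 0);
    if (src.empty()) return 1;

    hls::stream<ap_axiu<8,1,1,1> > stream_in, stream_out;

    // cv::Mat -> AXI Stream to stimulate the accelerated function
    cvMat2AXIvideoxf<XF_NPPC1>(src, stream_in);

    dilatation_axis(stream_in, stream_out);

    // AXI Stream -> cv::Mat so we can check the result
    cv::Mat hw_result(src.rows, src.cols, CV_8UC1);
    AXIvideo2cvMatxf<XF_NPPC1>(stream_out, hw_result);

    // Reference model using standard OpenCV, plus a difference image
    cv::Mat sw_result, diff;
    cv::dilate(src, sw_result, cv::Mat());
    cv::absdiff(sw_result, hw_result, diff);

    cv::imwrite("hw_out.png", hw_result);
    cv::imwrite("sw_out.png", sw_result);
    cv::imwrite("diff.png", diff);
    return 0;
}
```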

Having understood the interfacing, the next step is to use one of the xfOpenCV examples to create an HLS block.

For this example, we will use the Dilate example. The first thing we need to do is create a new project. As we want AXI Streaming IP, we also need to create a wrapper file like the one shown above. To the design sources, add the wrapper file and the example dilation acceleration file.

Adding the sources for synthesis

We also need to add specific C flags for these files: -D__XFCV_HLS_MODE__ and the include path of the xfOpenCV libraries. For the top function, select the name of the wrapper function which contains the AXI Stream conversions.

Adding in the C Flags

The next stage is to add the test bench from the example, again using the same C flags. If the example includes an image file, add that to the test bench files too.

Setting the test bench configuration

The final step of the project creation is to select the component we wish to target.

Once the project is open, we should run C Simulation successfully before we move on to C Synthesis, Co-Simulation, and IP export.

If the project is created correctly, when you run C Simulation you will see no errors reported.

Results of the C Simulation

You can double-check the images by looking under the csim/build directory. There you will see the xfOpenCV output image alongside the output of the same function implemented in OpenCV. A third image shows the differences between the xfOpenCV and OpenCV results.

Output files from C Simulation

Co-Simulation will produce the same set of files.

Once we are satisfied with the C Simulation results, we can synthesise the design and verify its performance using Co-Simulation.

Utilisation Statistics following C Synthesis

The final step is to implement the xfOpenCV function in our image processing chain. To do this we need to export the HLS IP block and import it into Vivado for use.

HLS IP Block available within Vivado

Following importation into the IP Catalog, we are able to use the new xfOpenCV core within IP Integrator in Vivado.

xfOpenCV block for use in the image processing chain.

Examining the IP block in IP Integrator, you will notice several ports. The p_src and p_dst ports are the AXI Streaming image input and output, while the ap_ctrl ports enable us to control the block's operation and, most importantly, start the block.

The height and width ports allow the dimensions of the image to be processed to be defined, while the kernel ports allow the filter kernel to be defined.
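These ports arise from the wrapper's argument list: scalar and array arguments mapped to an AXI4-Lite register space by the standard Vivado HLS interface pragmas, alongside the block-level start/done control. The signature below is an illustrative sketch of that mapping, not the exact example code; the port and parameter names are assumptions.

```cpp
// Illustrative sketch: height, width, and kernel arguments mapped to
// AXI4-Lite registers, with the block-level control (ap_ctrl) on the
// same interface. Names and sizes are assumptions.
#include "hls_stream.h"
#include "ap_axi_sdata.h"

#define K_ROWS 3
#define K_COLS 3

typedef hls::stream<ap_axiu<8,1,1,1> > video_stream_t;

void dilatation_axis(video_stream_t &p_src, video_stream_t &p_dst,
                     int height, int width,
                     unsigned char kernel[K_ROWS * K_COLS])
{
#pragma HLS INTERFACE axis      port=p_src
#pragma HLS INTERFACE axis      port=p_dst
#pragma HLS INTERFACE s_axilite port=height
#pragma HLS INTERFACE s_axilite port=width
#pragma HLS INTERFACE s_axilite port=kernel
#pragma HLS INTERFACE s_axilite port=return   // block start/done control

    // ... AXI Stream conversions and the xf::dilate call as before ...
}
```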

Now we know how to use the range of very useful xfOpenCV functions in an FPGA-based image processing design.

See My FPGA / SoC Projects: Adam Taylor on Hackster.io

Get the Code: ATaylorCEngFIET (Adam Taylor)

Access the MicroZed Chronicles Archives with over 250 articles on the Zynq / Zynq MPSoC updated weekly at MicroZed Chronicles.