Introduction

PYNQ gives us the ability to quickly and easily create complex image processing solutions that leverage open source frameworks available in the Python ecosystem.

In this project I am going to demonstrate how we can create our own PYNQ overlay for the Snickerdoodle to capture video from an HDMI camera.

Once these images have been captured, we can use a range of open source frameworks to post-process the received image. In this application we are going to extract and decode barcodes present within the image.

Getting Started

For this application we are going to be using the Snickerdoodle board combined with the Pi Smasher.

To get started with the development we will need the following

These files will enable us to create our own PYNQ overlay which can connect to and receive images from an external HDMI camera.

Download the board files and install them in the Vivado boards directory; this enables us to create Vivado solutions configured for the Snickerdoodle.

Snickerdoodle GitHub

To be able to capture images from the HDMI camera we need to ensure the colorspace and memory packing are correct for the PYNQ drivers.

PYNQ provides the necessary IP blocks; these are available under the boards/ip/hls directory of the PYNQ repository.

The IP blocks we want to use are provided as HLS IP blocks, which means we first need to run Vivado HLS to generate the output RTL.

Within the IP/HLS directory you will see batch files which can be run on either Windows or Linux. Before running either of these, make sure the necessary Vivado environment variables are set.

Running the appropriate batch file will generate the RTL output we can use in our Vivado design.

To make use of these designs within our Vivado design we can add them as an IP repository.

With the IP repository installed we are able to create our overlay design. The first thing we need to do is add in the Zynq Processing System and run the block automation to configure the PS for the Snickerdoodle.

To create the block diagram we need to add in the following

Video In to AXI Stream - Converts parallel video to AXI Stream

Color Convert - Converts color space, no customization required in Vivado

Pixel Pack - Packs 24-bit pixels into 32-bit pixels, no customization required in Vivado

VDMA - Reads and writes the video AXI Stream to and from the processor memory map
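The Pixel Pack stage above widens each 24-bit RGB pixel to a 32-bit word so that frames align with the 32-bit memory interface the PYNQ drivers expect. A minimal sketch of the idea (the byte ordering within the word is an assumption for illustration; the actual ordering depends on the Pixel Pack IP configuration):

```python
def pack_pixel(r, g, b):
    """Pack three 8-bit channels into one 32-bit word (upper byte unused).

    Placing R in the low byte is illustrative only.
    """
    return (b << 16) | (g << 8) | r

def unpack_pixel(word):
    """Recover the three 8-bit channels from a packed 32-bit word."""
    return word & 0xFF, (word >> 8) & 0xFF, (word >> 16) & 0xFF

word = pack_pixel(0x11, 0x22, 0x33)
print(hex(word))  # 0x332211
```

The unused upper byte is the cost of aligning each pixel to a word boundary, which keeps DMA transfers simple.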

VDMA Configurations

ILA - Configured to monitor AXI Streams

The clocking architecture of the design is very simple, with two clock domains.

Clock domain 1 = AXI Lite and Stream running at 100 MHz

Clock domain 2 = Pixel clock
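For reference, the pixel clock for a 1280x720p60 input can be derived from the standard timing parameters (1650 total pixels per line, 750 total lines per frame, including blanking):

```python
# 720p60 timing: total pixels per line x total lines per frame x frame rate
h_total = 1650   # 1280 active pixels + horizontal blanking
v_total = 750    # 720 active lines + vertical blanking
frame_rate = 60  # Hz

pixel_clock_hz = h_total * v_total * frame_rate
print(pixel_clock_hz / 1e6)  # 74.25 MHz
```

This is the rate the pixel-clock domain must support for a 720p60 source.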

With the design completed, we can build the bitstream and create the overlay.

Overlay Creation

To create the overlay we need the following files

Bitstream

TCL block diagram description

Hardware hand off file

__init__.py

overlay.py - contains the overlay class definition

The TCL block diagram, hardware handoff file and bitstream are created by Vivado.

The remaining Python files we need to create by hand.

__init__.py

from .sd_tpg import sd_tpgOverlay

overlay.py

import pynq
from pynq import GPIO

__author__ = "Adam Taylor"
__copyright__ = "Copyright 2020, Adiuvo"
__email__ = "Adam@adiuvoengineering.com"


class sd_tpgOverlay(pynq.Overlay):
    """Overlay driver for the Snickerdoodle HDMI capture design."""

    def __init__(self, bitfile, **kwargs):
        super().__init__(bitfile, **kwargs)
        if self.is_loaded():
            pass

With these created we are able to upload the overlay to the Snickerdoodle.

HDMI Configuration

However, before we can run the overlay we first need to configure the HDMI receiver. It is controlled and configured over an I2C interface connected to the Zynq PS; by default the HDMI RX is powered down.

To power up the HDMI RX we need to drive PS GPIO pin 53 high; this can be easily achieved using PYNQ.

Once powered up, the HDMI RX should be visible on the I2C bus; the HDMI RX has dual I2C addresses at 0x34 and 0x48.

To be able to configure the HDMI device over the I2C bus, the next step is to clone the Pi Smasher software repository.

Within this repository are the source files required to configure the HDMI RX and TX.

Pi Smasher Repository

Once the repository has been cloned, create a new SDK Linux application and add in the HDMI config, I2C, TDA1997 and TDA998 source files.

Creating the SDK project

This allows us to create an application which is capable of configuring both the RX and TX HDMI devices. As we are only going to be using the HDMI RX in this project, comment out the configuration of the HDMI TX.

When we run this on the Snickerdoodle with the HDMI Rx enabled we will see the HDMI Rx device is correctly configured.

We will embed this application within our custom overlay.

Capturing an Image

With the HDMI Rx configured and the overlay loaded onto the Snickerdoodle we are now in a position to create the application.

Within a new notebook we can create the following

import time
from pynq.overlays.sd_tpg import sd_tpgOverlay
import numpy as np
from pynq import pl
from pynq import overlay
from pynq.lib.video import *
import cv2
import matplotlib.pyplot as plt
from pynq import GPIO
from pyzbar import pyzbar

# Load the custom overlay bitstream onto the programmable logic
overlay = sd_tpgOverlay('sd_tpg.bit')



# Power up the HDMI RX by driving PS GPIO pin 53 high
gpio = GPIO.get_gpio_base()
gpio = gpio + 53
output = GPIO(gpio, 'out')
output.write(1)



%%bash
# Configure the HDMI RX for a 1280x720 input
./hdmi_config.elf -m 1280x720



# Configure the pixel pack block for 24-bit input pixels
pixel_in = overlay.pixel_pack_0
pixel_in.bits_per_pixel = 24

# Load the colorspace conversion coefficients
colourspace_in = overlay.color_convert_0
rgb2bgr = [0.0, 1.0, 0.0,
           1.0, 0.0, 0.0,
           0.0, 0.0, 1.0,
           0.0, 0.0, 0.0]
colourspace_in.colorspace = rgb2bgr
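The 12-entry coefficient list loaded above can be read as a 3x3 matrix in row-major order followed by three offsets (this layout is my reading of the PYNQ color convert driver; check the driver source if in doubt). A small numpy sketch shows its effect on a single pixel, each output channel being a weighted sum of the input channels:

```python
import numpy as np

# Assumed layout: 9 matrix coefficients (row-major) then 3 offsets
coeffs = [0.0, 1.0, 0.0,
          1.0, 0.0, 0.0,
          0.0, 0.0, 1.0,
          0.0, 0.0, 0.0]

matrix = np.array(coeffs[:9]).reshape(3, 3)
offsets = np.array(coeffs[9:])

pixel = np.array([10.0, 20.0, 30.0])  # an example input pixel
out = matrix @ pixel + offsets        # matrix multiply plus per-channel offset
print(out)                            # the first two channels are exchanged
```

With these particular coefficients the first two channels swap while the third passes through unchanged.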



# Configure and start the VDMA read channel
cam_vdma = overlay.axi_vdma_0
framemode = VideoMode(1280, 512, 24)
cam_vdma.readchannel.mode = framemode
cam_vdma.readchannel.start()
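Each frame the VDMA moves through the processor memory map occupies width x height x bytes-per-pixel, which is worth sanity-checking when sizing frame buffers. Using the VideoMode parameters from the source:

```python
# VideoMode parameters as used in this design
width, height, bits_per_pixel = 1280, 512, 24

bytes_per_pixel = bits_per_pixel // 8
line_bytes = width * bytes_per_pixel   # bytes per video line
frame_bytes = line_bytes * height      # bytes per complete frame

print(frame_bytes)  # 1966080 bytes, roughly 1.9 MiB per frame
```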



# Capture a frame, convert BGR to RGB and display it
frame_camera = cam_vdma.readchannel.readframe()
frame_color = cv2.cvtColor(frame_camera, cv2.COLOR_BGR2RGB)
pixels = np.array(frame_color)
plt.imshow(pixels)
plt.show()

Running the overlay in a notebook enables us to capture images from the HDMI camera.

Once we have this image we are then able to post-process it and extract information from it.

For this application we are going to extract information contained in barcodes within the image.

To do this we are going to use the ZBAR library, which allows us to read information from barcodes within the image.

We can install ZBAR, along with the pyzbar Python bindings used in the code, as follows:

sudo apt-get install libzbar0
sudo pip3 install pyzbar

Once this is installed we can extract information from bar-codes contained within the image.

The algorithm I used is defined below. Once the image is captured I convert it to grayscale before thresholding the image and running the barcode detection algorithm on the thresholded image.
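The thresholding step has simple semantics: with cv2.THRESH_BINARY, pixels above the threshold become the maximum value and everything else becomes zero. A numpy-only sketch of those semantics (standing in for OpenCV so it runs anywhere):

```python
import numpy as np

def threshold_binary(img, thresh=100, maxval=255):
    """numpy stand-in for cv2.threshold(img, thresh, maxval, cv2.THRESH_BINARY)."""
    return np.where(img > thresh, maxval, 0).astype(np.uint8)

gray = np.array([[10, 120],
                 [99, 250]], dtype=np.uint8)
print(threshold_binary(gray))
# [[  0 255]
#  [  0 255]]
```

Pushing the image to pure black and white like this gives the barcode detector clean, high-contrast bars to work with.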

import cv2
from pyzbar.pyzbar import decode

image = cv2.imread("img.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imwrite('gray.jpg', image)
ret, thresh1 = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY)
blur = cv2.GaussianBlur(thresh1, (5, 5), 0)
cv2.imwrite('thres.jpg', blur)
barcodes = decode(blur)

# loop over the detected barcodes
for barcode in barcodes:
    # extract the bounding box location of the barcode and draw the
    # bounding box surrounding the barcode on the image
    (x, y, w, h) = barcode.rect
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    # the barcode data is a bytes object so if we want to draw it on
    # our output image we need to convert it to a string first
    barcodeData = barcode.data.decode("utf-8")
    barcodeType = barcode.type
    # draw the barcode data and barcode type on the image
    text = "{} ({})".format(barcodeData, barcodeType)
    cv2.putText(image, text, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 0, 255), 2)
    # print the barcode type and data to the terminal
    print("[INFO] Found {} barcode: {}".format(barcodeType, barcodeData))

# save the output image
cv2.imwrite('final.jpg', image)

Thresholded Image

Output Image

The result from the barcode decoding is presented as a text list with the decoded information, together with the points where the barcode was located.
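Each entry pyzbar returns is a Decoded namedtuple carrying the raw bytes, the symbology type, a bounding rect and the corner polygon. The sketch below mimics that structure with stand-in namedtuples so the parsing logic can be exercised without camera hardware (the field values are fabricated):

```python
from collections import namedtuple

# Stand-ins mirroring the shape of pyzbar's Decoded / Rect result types
Rect = namedtuple('Rect', ['left', 'top', 'width', 'height'])
Decoded = namedtuple('Decoded', ['data', 'type', 'rect', 'polygon'])

# A fabricated result, as pyzbar.decode() might return for one barcode
barcode = Decoded(data=b'123456789012', type='CODE128',
                  rect=Rect(left=40, top=60, width=200, height=80),
                  polygon=[])

# data is bytes, so decode it before formatting for display
text = "{} ({})".format(barcode.data.decode('utf-8'), barcode.type)
print(text)  # 123456789012 (CODE128)
```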


Installing the Overlay

I have wrapped up the design into an overlay which can be downloaded and installed from my GitHub account, provided you have the necessary Python libraries (e.g. ZBAR) installed.

You can install the overlay from the command line:

sudo pip3 install --upgrade git+https://github.com/ATaylorCEngFIET/pynq_sd_image_processing

You can also find the hardware design and the software design for the HDMI RX here: https://github.com/ATaylorCEngFIET/vivado_sd_image_processing-

Wrap Up

This project has shown how easily we can work with PYNQ for image processing applications. This approach offers several advantages:

Ability to verify image input path

Ability to work with high-level open source frameworks

Ability to update the overlay to accelerate the image processing application if required

Capability to use programmable logic to implement bespoke camera interfaces if required.

See previous projects here.

Additional information on Xilinx FPGA / SoC development can be found weekly on MicroZed Chronicles.