By Adam Taylor

As I mentioned last week, we are going to spend some time looking at how we can use the Zynq SoC for embedded vision applications. The Zynq SoC combines processors with programmable logic, which allows us to create very powerful image-processing systems. We can use the PL (programmable logic) side of the device to implement the image chain and accelerate video-processing algorithms, and the PS (processing system) side for control and communication with the outside world.

I am going to be using the Avnet Embedded Vision Kit to explore this topic. The kit we will be using includes:

- The MicroZed SoM with a Xilinx Zynq XC7Z020 device
- Embedded Vision Carrier Card (EVCC) – This is the heart of the embedded vision system. It provides both HDMI input and output, Ethernet (including PoE), and line in and out. It will enable us to create very impressive image-processing systems.
- On Semi Python 1300-C Camera Module
- Flexible Camera Stand to hold the embedded vision kit

After unpacking, you need to assemble the system. The first thing to do is to mate the camera module with the lens while taking care to ensure there is no ESD damage to the sensor itself.

On Semi Python 1300-C Camera Module

Camera Module assembled

Once the camera module is assembled, the next step is to integrate this module with the embedded vision carrier card. The camera module slots into the connector on the front of the EVCC and is then secured by two bolts and nuts at the top, as shown below:

Camera Module Integrated with the EVCC

Once the camera module is installed, the next step is to download a reference design from ZedBoard.org and copy the included boot image to an SD card. The reference design boot image allows us to prove that everything is working correctly. After we confirm that everything's working, we can focus on enhancing the kit's abilities with our own code.

Before installing the reference boot file on the EVCC via an SD card, it is good practice to ensure that the MicroZed is configured to boot from the SD card rather than the onboard QSPI flash. Once it is configured correctly, mount the MicroZed on the EVCC and connect it to a power supply and a video monitor.
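The SD card step above can be sketched as a couple of shell commands. This is a minimal sketch, assuming the boot image extracted from the reference design download is named BOOT.bin and the card's FAT partition is mounted at /media/sd; both names are assumptions, so adjust them to match your download and your host system:

```shell
#!/bin/sh
# Copy the reference design boot image onto the SD card.
# Assumed names (not from the kit documentation):
#   reference_design/BOOT.bin -- the extracted boot image
#   /media/sd                 -- the mounted FAT partition of the SD card
BOOT_IMAGE=reference_design/BOOT.bin
SD_MOUNT=/media/sd

cp "$BOOT_IMAGE" "$SD_MOUNT/BOOT.bin"
sync    # flush the write before removing the card
```

Note that the Zynq boot ROM looks for the boot image in the root of the first FAT partition, so BOOT.bin must not sit in a subdirectory on the card.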

MicroZed Mounted on the EVCC

When I turned this assembly on for the first time and focused the lens within my study, I got the image below, which shows that everything is functional (sort of a video version of "hello world"):

With this all up and running, we can start to look at what we need to do to enhance the image processing. The first step is to download the reference design files from the Avnet GitHub. These will give us a reference to examine and build upon over the next few weeks and months.
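Fetching the reference design sources typically comes down to a single clone. The repository name below is an assumption on my part; browse the Avnet GitHub organisation to find the project that matches your kit:

```shell
#!/bin/sh
# Clone the Avnet reference design sources.
# "Avnet/hdl" is an assumed repository name -- check github.com/Avnet
# for the project that matches the Embedded Vision Kit.
git clone https://github.com/Avnet/hdl.git avnet-hdl
cd avnet-hdl
git log --oneline -1    # record which revision we are building from
```

Recording the revision you cloned makes it much easier to reproduce a working build later, once we start modifying the design.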

We will start by explaining how the reference design works, particularly how we interface with the CMOS image sensor and generate the HDMI output. Going forward, we will explore how we can use Vivado HLx to generate image-processing filters and how we can create image-processing platforms using the Xilinx SDSoC design environment.

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

First Year E-Book here

First Year Hardback here

Second Year E-Book here

Second Year Hardback here

You also can find links to all the previous MicroZed Chronicles blogs on my own Web site, here.