Many of the projects I work on for clients are based on embedded vision and image processing applications. One of the things I like about these projects is that I get to see the results very visibly and, hopefully, watch them improve as the project matures.

When it comes to displaying the image, there are several display standards we can use. However, if I am using a Zynq MPSoC device, I often try to use the DisplayPort Controller in the PS, as it is both very powerful and convenient.

Completed block diagram

If you have not come across the DisplayPort Controller before, it is, as we will see, very capable, supporting resolutions up to ultra high-definition at up to 30 frames per second. In this example, however, we are going to be limited by the resolutions supported by my DisplayPort monitor.

When it comes to working with the DisplayPort Controller, the output video can be generated either from the PS DDR memory or from the programmable logic (PL). In MPSoC DisplayPort terminology, these are called non-live and live image feeds respectively. A simple example of a non-live video would be Linux rendering a desktop, while a live video feed could come from an image processing chain in the PL.

The DisplayPort Controller is able to support up to six non-live sources from the PS DDR memory. It is also capable of supporting live video and graphics from the PL and, of course, we can mix both the non-live and live video feeds.

To support a wide range of applications, the DisplayPort Controller handles 6, 8, 10 or 12 bits per component, along with a range of color spaces including RGB, YCrCb 4:2:2, YCrCb 4:4:4 and YCrCb 4:2:0.
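To get a feel for what working at 12 bits per component means when the source material is 8-bit, the short Python sketch below shows the usual bit-replication approach to bit-depth expansion. The function name and the approach are my own illustration, not taken from any DisplayPort driver code:

```python
def expand_8_to_12(c8):
    """Expand an 8-bit color component to 12 bits by replicating the
    top four bits into the low bits, so full scale maps to full scale:
    0x00 -> 0x000 and 0xFF -> 0xFFF."""
    assert 0 <= c8 <= 0xFF, "input must be an 8-bit component"
    return (c8 << 4) | (c8 >> 4)

print(hex(expand_8_to_12(0xFF)))  # 0xfff
```

Simple zero-padding (a plain left shift) would also work but slightly darkens the image, since 0xFF would map to 0xFF0 rather than full-scale 0xFFF.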

What is really exciting is the additional capabilities provided by the DisplayPort Controller:

Chroma sampling and sub sampling

Color space conversion

Alpha blending

Audio mixer

Both the final blended video and audio are optionally available to the programmable logic if desired.

DisplayPort controller architecture (📷: UG1085)

Physically within the Zynq MPSoC, the DisplayPort controller is located in the processing system (PS).

Flexibility is key in image processing solutions and the DisplayPort Controller enables us to move data in four possible directions — memory to data path, PL to data path, PL to PL, and memory to PL.

How we configure the DisplayPort Controller for these modes will be examined when we look at the software.

For this example, we are going to generate a test pattern in the PL and output it using the DisplayPort Controller.

This means we need to make the DisplayPort Controller live feed input available to the design in the PL.

We do this by re-customizing the MPSoC block: under PS-PL Configuration -> General -> Others, enable the Live Video input.

Enabling the live video input on the MPSoC

For this example, I am using the Ultra96-V2; as such, the DisplayPort output should be configured as shown below.

Ultra96 DisplayPort output configuration

To generate the video output, we will be using the following IP blocks from the Vivado IP catalog.

Test Pattern Generator — This will generate the actual test pattern to be output. We will control it from the software running in the processing system over its AXI Lite interface.

Video Timing Controller — This will generate the output video timing; it is also configurable from software over its AXI Lite interface.

AXI Stream to Video Out — This will convert the test pattern, received as an AXI Stream, into a video signal with the appropriate timing, thanks to the Video Timing Controller. To ensure the pixel width aligns with the DisplayPort Controller's live video input, set this to 12 bits per component.

ILA — Two Integrated Logic Analyzers are included to verify that the Test Pattern Generator correctly generates the test image and that the AXI Stream to Video Out block locks correctly.

Clock Wizard — This generates the pixel clock of 74.250 MHz, which is the pixel clock frequency for a 1280 by 720 display at 60 Hz.
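The 74.250 MHz figure is not arbitrary; it falls straight out of the standard CEA-861 blanking totals for 720p60. A quick Python check, using the CEA-861 timing values (which are also the values the Video Timing Controller needs to produce):

```python
# CEA-861 horizontal timing for 1280x720 at 60 Hz (in pixels)
h_active, h_front_porch, h_sync, h_back_porch = 1280, 110, 40, 220
# CEA-861 vertical timing (in lines)
v_active, v_front_porch, v_sync, v_back_porch = 720, 5, 5, 20

h_total = h_active + h_front_porch + h_sync + h_back_porch  # 1650 pixels/line
v_total = v_active + v_front_porch + v_sync + v_back_porch  # 750 lines/frame

# Pixel clock = total pixels per frame x frames per second
pixel_clock_hz = h_total * v_total * 60
print(pixel_clock_hz)  # 74250000
```

The same arithmetic lets you work out the required pixel clock for any other resolution you might want the Clock Wizard to generate.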

As mentioned for the AXI Stream to Video Out block above, we need to ensure the video format correctly aligns with the DisplayPort Controller's live video input.

The table below shows the pixel format for each supported color space.

Pixel format vs. color space
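Since the table is presented as an image, here is a small sketch of the underlying idea: three 12-bit components concatenated into a single 36-bit live video word. The MSB-first component ordering below is an illustrative assumption of mine; the actual per-color-space component mapping is the one given in the UG1085 pixel format table.

```python
def pack_36bit_pixel(c2, c1, c0):
    """Concatenate three 12-bit components into one 36-bit pixel word.
    NOTE: placing c2 in the top 12 bits is illustrative only; the real
    per-color-space component ordering is defined in the UG1085 table."""
    for c in (c2, c1, c0):
        assert 0 <= c <= 0xFFF, "components must fit in 12 bits"
    return (c2 << 24) | (c1 << 12) | c0

print(hex(pack_36bit_pixel(0xFFF, 0x000, 0xABC)))  # 0xfff000abc
```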

Pulling all this together produced the block diagram below, ready for implementation. With the exception of the pixel clock settings and connections, it was all wired up using the connection automation and block automation wizards.

Completed block diagram

Implementing this design shows it uses only a very small logic footprint, which is ideal as it leaves the remainder of the device free for the image processing pipeline.

Now that we have the hardware design, next time we will look at the software framework needed to get this up and running and show an image on the display.

See My FPGA / SoC Projects: Adam Taylor on Hackster.io

Get the Code: ATaylorCEngFIET (Adam Taylor)

Access the MicroZed Chronicles Archives with over 300 articles on the FPGA / Zynq / Zynq MPSoC updated weekly at MicroZed Chronicles.