Heterogeneous SoCs like the Zynq and Zynq MPSoC are ideal for image processing, as they allow the implementation of the image processing pipeline in the Programmable Logic (PL), while the Processing System (PS) implements the higher levels of the algorithm and decision making.

Typical image processing pipeline

The key to implementing a high performance image processing system is receiving the data directly into the PL.

To interface our chosen image sensor or camera with our SoC / FPGA, there is a range of interface standards, from HDMI to LVDS and parallel.

One very popular interface is the Mobile Industry Processor Interface (MIPI) Camera Serial Interface issue 2, or CSI-2 as it is more commonly called.

CSI-2 is a high-speed serial protocol which is unidirectional, from the source to the sink.

With CSI-2 implementations, each link consists of a clock lane and at least one data lane. Data communicated down the data lanes is double data rate, i.e. its value changes on both edges of the clock.

When using a DPhy implementation, this enables data rates of up to 2.5 Gbps per lane, or 10 Gbps if four lanes are used (although the data rates supported by FPGAs are a little lower than this).
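To get a feel for these numbers, a quick back-of-the-envelope calculation helps show the headroom available. The 1080p60 RAW10 sensor used here is purely illustrative, and the figures ignore packet overhead and blanking:

```python
# Back-of-the-envelope MIPI CSI-2 bandwidth check.
# The 1920x1080, 60 fps, RAW10 sensor below is a hypothetical example.

def link_bandwidth_bps(lanes, line_rate_gbps):
    """Aggregate link bandwidth in bits per second."""
    return lanes * line_rate_gbps * 1e9

def sensor_bandwidth_bps(width, height, fps, bits_per_pixel):
    """Raw pixel bandwidth, excluding packet overhead and blanking."""
    return width * height * fps * bits_per_pixel

link = link_bandwidth_bps(lanes=4, line_rate_gbps=2.5)    # 10 Gbps aggregate
sensor = sensor_bandwidth_bps(1920, 1080, 60, 10)         # ~1.24 Gbps needed
print(f"Link: {link / 1e9:.1f} Gbps, sensor needs {sensor / 1e9:.2f} Gbps")
```

Even allowing for protocol overhead, a single lane at a modest line rate would comfortably carry this example sensor; the extra lanes matter for higher resolutions and frame rates.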

Of course, for many applications we need to be able to control the camera / sensor. Using MIPI, this is achieved via the Camera Command Interface (CCI), a bi-directional link based on I2C. The use of I2C makes it very easy to interface with the Zynq and Zynq MPSoC using either a PS I2C controller or an AXI-based I2C controller.
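Because the CCI is I2C-based, a sensor register write is just a short byte sequence on the bus. The sketch below assumes a sensor with 16-bit register addresses and 8-bit data, which is common but must be checked against your sensor's datasheet; the register address and value are purely illustrative:

```python
# Sketch of a CCI register write as raw I2C bytes, assuming a sensor with
# 16-bit register addresses and 8-bit data (common, but check the datasheet).

def cci_write_bytes(reg_addr, value):
    """Bytes sent after the I2C address/write phase: addr high, addr low, data."""
    return bytes([(reg_addr >> 8) & 0xFF, reg_addr & 0xFF, value & 0xFF])

# e.g. write 0x01 to a hypothetical 'streaming enable' register at 0x0100
payload = cci_write_bytes(0x0100, 0x01)
print(payload.hex())  # -> '010001'
```

On a Zynq running Linux, the same byte sequence would typically be sent through the i2c-dev interface or a V4L2 sensor driver.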

Due to the complexity and licensing of the MIPI CSI-2 standard, most MIPI implementations use an Intellectual Property (IP) core such as the one from Xilinx or Northwest Logic. To integrate easily within image processing pipelines, these IP cores should accept or output image data using AXI Stream.
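The AXI Stream video convention used by these cores marks frame and line boundaries with sideband signals: TUSER flags the first pixel of a frame (start of frame) and TLAST flags the last pixel of each line (end of line). A simple behavioural model, with counter values standing in for real pixel data, shows the framing:

```python
# Behavioural model of AXI4-Stream video framing: TUSER marks start-of-frame,
# TLAST marks end-of-line. Pixel values are just a counter for illustration.

def axis_video_frame(width, height):
    """Yield (pixel, tuser, tlast) tuples for one video frame."""
    for y in range(height):
        for x in range(width):
            tuser = (x == 0 and y == 0)   # asserted on the first pixel only
            tlast = (x == width - 1)      # asserted on the last pixel of a line
            yield (y * width + x, tuser, tlast)

frame = list(axis_video_frame(4, 3))
print(frame[0])   # (0, True, False)  -> start of frame
print(frame[3])   # (3, False, True)  -> end of the first line
```

Downstream pipeline blocks use these markers to resynchronise, which is why keeping everything on AXI Stream makes integration straightforward.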

Configuring the MIPI CSI-2 IP core

When we implement a MIPI CSI-2 solution in our FPGA, we will most often be using a DPhy-based solution. Even if an IP core is used for the higher levels of the protocol, the DPhy is normally configured by the developer, as that is where the line rate, clocking and pin out are defined. This DPhy block is then connected to the MIPI IP core.

Depending upon whether we are implementing the MIPI solution in a Zynq (or seven series FPGA) or a Zynq MPSoC (UltraScale+ FPGA), the physical elements of the DPhy will be different.

The MPSoC includes DPhy support directly within its IO resources. However, the Zynq does not; as such, we have two options for the Zynq, depending upon the data rate.

DPhy-compatible receiver for seven series device

We can implement an external resistor network which provides a DPhy-compatible receiver. Alternatively, we can use an external PHY such as the MC2002. Both solutions are compatible with the generated DPhy.

UltraScale+ DPhy IBUF

We can, of course, have multiple MIPI interfaces within one FPGA bank. We must therefore ensure we configure each DPhy correctly as either a master or a slave core.

The difference is whether the DPhy includes the PLLs necessary to generate the line rate clocks (master) or whether these clocks are supplied by another MIPI IP core / DPhy (slave).

In the UltraScale architecture, each IO bank has an associated Clock Management Tile (CMT). A CMT contains one Mixed Mode Clock Manager (MMCM) and two Phase Locked Loops (PLLs).

Sharing the line rate clock between a master and a slave means both interfaces must operate at the same line rate.

It is these PLLs which are used to create the line rate clocks; we therefore need to plan carefully which MIPI interfaces are placed in which bank.
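When planning these clocks, it helps to work out the frequencies the PLLs must produce. As data is double data rate, the serial clock runs at half the line rate, and the deserialised data is typically presented on a byte clock at one eighth of the line rate (the 1:8 gearing assumed here should be checked against your particular IP core's documentation):

```python
# Rough clock planning for a DPhy link. A 1:8 deserialisation ratio is
# assumed; verify the gearing against the IP core you are actually using.

def dphy_clocks_mhz(line_rate_mbps):
    """Return (serial DDR clock, byte clock) in MHz for a line rate in Mbps."""
    ddr_clock = line_rate_mbps / 2.0   # data changes on both clock edges
    byte_clock = line_rate_mbps / 8.0  # 8 bits per lane per byte-clock period
    return ddr_clock, byte_clock

print(dphy_clocks_mhz(1000))  # -> (500.0, 125.0)
```

A 1000 Mbps lane therefore needs a 500 MHz serial clock from the PLL, while the parallel side of the design runs at a much more comfortable 125 MHz.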

Configuring the DPhy is a pretty straightforward task; the configuration dialog consists of three tabs:

Configuring the overall DPHY

Configuring the clocking

Allocating the IO Pins

When it comes to MPSoC boards, the Ultra96 includes two MIPI CSI-2 interfaces on its High Speed Header. The SDSoC license, which comes with the Ultra96, also provides the licenses to implement the MIPI CSI-2 IP cores within Vivado.

Ultra96 high speed header CSI-2 pin out

These pins are connected to Bank 65 of the PL, enabling us to implement a MIPI interface if our application requires one. A DSI interface is also provided for MIPI-based displays.

If, after configuring the IP cores and DPhys, you still have issues, there are a few elements you can check. These include:

Check the line rate and pin out are correct.

Check clocks are at the right frequency for the data rate.

Check any ECC and CRC errors the IP Cores report.

Check any timing warnings which may be reported by the cores.

Check the MIPI clock configuration (continuous or non-continuous).

Check you are working with the correct virtual channel.

In the worst case, break out a high-speed oscilloscope and verify the timings against the MIPI specification.
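When checking the virtual channel in particular, it can help to decode a captured packet header's data identifier (DI) byte by hand: bits [7:6] carry the virtual channel and bits [5:0] the data type (RAW10, for example, is data type 0x2B):

```python
# Decode a CSI-2 packet header data identifier (DI) byte:
# bits [7:6] = virtual channel, bits [5:0] = data type (e.g. RAW10 = 0x2B).

def decode_data_identifier(di):
    """Return (virtual_channel, data_type) from a DI byte."""
    return (di >> 6) & 0x3, di & 0x3F

vc, dt = decode_data_identifier(0x6B)
print(vc, hex(dt))  # -> 1 0x2b  (virtual channel 1, RAW10)
```

If the IP core is filtering on a different virtual channel to the one the sensor is actually transmitting on, no image data will appear downstream even though the link itself is healthy.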

In upcoming blogs and Hackster projects, we will look at how we can implement a MIPI-based image processing pipeline in greater depth, but I wanted to first give an introduction to MIPI and the different ways we interface with it in our FPGAs.

Update: you can find the first of the MIPI Hackster projects here.

See My FPGA / SoC Projects: Adam Taylor on Hackster.io

Get the Code: ATaylorCEngFIET (Adam Taylor)

Access the MicroZed Chronicles archives, with over 250 articles on the Zynq / Zynq MPSoC, updated weekly at MicroZed Chronicles.