Zynq UltraScale+ MPSoC ZCU106 VCU HDMI Single-Stream ROI TRD 2021.1

This page provides all the information related to VCU HDMI Single-Stream ROI TRD design for ZCU106.

Table of Contents

1 Overview

The primary goal of this VCU HDMI Single-Stream ROI design is to demonstrate the use of the Deep Learning Processor Unit (DPU) block for extracting a Region of Interest (ROI) from input video frames, and to use this information to perform ROI-based encoding with the Video Codec Unit (VCU) encoder hard block present in Zynq UltraScale+ EV devices.

The design serves as a platform to accelerate deep neural network inference algorithms using the DPU and to demonstrate the ROI feature of the VCU encoder. It runs a deep convolutional neural network (CNN) named Densebox on the DPU to extract ROI information ('face' in this case). The design also uses the Vitis Video Analytics SDK (VVAS) framework to leverage its rich set of highly optimized, ready-to-use kernels and GStreamer plugins.

The design uses the Vivado IPI flow for building the hardware platform and the Xilinx Yocto/PetaLinux flow for the software design. It uses Xilinx IP and software drivers to demonstrate the capabilities of the different components.

The Vitis platform is created from the Vivado/PetaLinux build artifacts; the Vitis acceleration flow is then used to insert the DPU into the platform and create the final bitstream.

The following figure shows the serial pipeline use case with the enhanced ROI + face detection model on ZCU106. For a detailed view of the VVAS block, please refer to Section 1.3.3 - GStreamer Pipeline Flow.

Serial: Face detection with enhanced ROI on ZCU106.

The following figure shows streaming pipeline use-case with enhanced ROI + face detection model on ZCU106. For a detailed view of VVAS block, please refer to Section 1.3.3 - GStreamer Pipeline Flow.

Streaming: Face detection with enhanced ROI on ZCU106.

1.1 System Architecture

The following figure shows the block diagram of the VCU HDMI single-stream ROI design.

1.2 Hardware Architecture

This section gives a detailed description of the blocks used in the hardware design. The functional block diagram of the design is shown in the figure below.

There are seven primary blocks in the design.

  • HDMI Capture Pipeline:

    • Captures video frame buffers from the capture source at 4K resolution in NV12 format

    • Writes the buffers into DDR Memory using the Frame Buffer Write IP

  • Multi-Scaler Block:

    • Reads the Video Buffers from DDR Memory

    • Scales down the buffer to the 640x360 size (suitable for DPU)

    • Converts the format from NV12 to BGR

    • Writes the downscaled buffer to DDR Memory

  • DPU Block: 

    • Reads the downscaled buffers from DDR Memory

    • Runs the Densebox algorithm to generate the ROI information for each frame buffer

    • Passes the ROI information to VCU Encoder

  • VCU Encoder: 

    • Reads the 4K NV12 Buffer from DDR Memory

    • Receives the ROI metadata from DPU IP

    • Encodes the video buffers based on the ROI Information

    • Finally writes the encoded stream to DDR Memory

  • PS GEM:

    • Reads the encoded stream from DDR Memory

    • Streams out the encoded stream via Ethernet

  • VCU Decoder:

    • Decodes the received encoded frame and writes to memory

  • HDMI-Tx:

    • Displays the decoded frames on HDMI Display
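
As a back-of-the-envelope check on the DDR traffic implied by the blocks above, the buffer sizes follow directly from the formats: NV12 stores 12 bits per pixel (1.5 bytes), while packed BGR stores 3 bytes per pixel. A small sketch of that arithmetic (illustrative only, not part of the TRD):

```shell
# NV12: 1 byte luma + 0.5 byte chroma per pixel = 3/2 bytes/pixel
nv12_4k=$(( 3840 * 2160 * 3 / 2 ))   # capture buffer written by the Frame Buffer Write IP
# BGR: 3 bytes per pixel
bgr_360p=$(( 640 * 360 * 3 ))        # downscaled buffer consumed by the DPU
echo "NV12 4K frame:     ${nv12_4k} bytes"
echo "BGR 640x360 frame: ${bgr_360p} bytes"
```

At 30 fps the 4K NV12 stream alone amounts to roughly 373 MB/s of write traffic, which is why every inter-block handoff in this design goes through DDR memory.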

This design supports the following video interfaces:

Sources

  • HDMI-Rx capture pipeline implemented in the PL

  • File source (SD card, USB storage, SATA hard disk)

  • Stream-In from network or internet

Sinks

  • HDMI-Tx display pipeline implemented in the PL

  • Stream-out on network or internet

VCU Codec

  • Video Encoder/Decoder capability using VCU hard block in PL 

  • H.264/H.265 encoding

  • Encoder/decoder parameter configuration using OMX interface

DPU

Zynq DPU IP

Streaming Interfaces

1G Ethernet PS GEM

Video Format

NV12

Supported Resolution

4Kp30
1080p30

1.3 VCU ROI Software

1.3.1 Vitis Video Analytics SDK (VVAS)

VVAS provides an easy-to-use, scalable framework with which users can build their solutions on Xilinx FPGAs. It provides infrastructure covering a wide variety of applications in the Embedded, Vision, Datacenter, Machine Learning, Automotive, and many other domains.

VVAS provides a set of generic framework plugins that abstract the complexities of writing a GStreamer plugin. These framework plugins interact with the kernel libraries through a simple VVAS kernel interface, which lets users easily integrate and test their kernels in the GStreamer framework.

VVAS also provides a rich set of highly optimized, ready-to-use kernels and GStreamer plugins (video encoder, video decoder, multi-scaler, ML, bounding box cropping, etc.) so that users can create their applications in a very short span of time. VVAS also provides the infrastructure needed to bridge the gap between Edge and Cloud solutions.

In the VCU HDMI Single-Stream ROI design, the following VVAS plugins are used:

  • ivas_xfilter: Works with hard-kernel, soft-kernel, and software (user-space) acceleration library types, and can operate in passthrough, in-place, or transform mode. In the single-stream ROI design it is used in in-place mode so that the acceleration software library can alter the input buffer.

  • ivas_xmetaaffixer: Scales the incoming metadata information for different resolutions; metadata received on the master sink pad is scaled in relation to the resolution of the output slave pads.

  • ivas_xroigen: Generates the ROI metadata information expected by the GStreamer OMX encoder plug-ins to encode raw frames with the desired quantization parameter (QP) values/levels for the specified ROIs.
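
To illustrate what ivas_xmetaaffixer does, the sketch below scales a hypothetical face rectangle detected on the 640x360 DPU frame back to the 3840x2160 capture frame. The box coordinates and the simple ratio scaling are illustrative assumptions, not VVAS code:

```shell
det_w=640; det_h=360     # resolution the DPU ran at (master sink pad)
out_w=3840; out_h=2160   # resolution of the output slave pad (encoder input)
# hypothetical detected face box on the 640x360 frame
x=100; y=50; w=64; h=64
# scale each coordinate by the ratio of the two resolutions
sx=$(( x * out_w / det_w )); sy=$(( y * out_h / det_h ))
sw=$(( w * out_w / det_w )); sh=$(( h * out_h / det_h ))
echo "scaled ROI: ${sx},${sy} ${sw}x${sh}"
```

Both dimensions scale by 6x here (3840/640 = 2160/360 = 6), so a 64x64 detection box becomes a 384x384 ROI at the encoder's resolution.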

Refer to the VVAS documentation for more details on VVAS.

VVAS Top-level Block diagram

1.3.2 Deep Learning Processor Unit (DPU)

DPU is a programmable engine optimized for deep neural networks. It is a group of parameterizable IP cores pre-implemented on the hardware with no place and route required. The DPU is released with the Vitis AI specialized instruction set, allowing efficient implementation of many deep learning networks.

Refer to DPU IP PG338 and UG1354 for more details on the DPU.
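
In PG338, DPU configuration names such as B4096 (the configuration used later in this TRD's DPU build) denote the peak number of operations per clock cycle, so a rough peak-throughput estimate is just ops-per-cycle times clock frequency. The sketch below assumes a 300 MHz DPU clock, a typical value from PG338 rather than a number specified by this TRD:

```shell
ops_per_cycle=4096    # DPU B4096 configuration: 4096 ops per clock
clk_hz=300000000      # assumed 300 MHz DPU clock (typical, per PG338)
peak_ops=$(( ops_per_cycle * clk_hz ))
echo "estimated peak: ${peak_ops} ops/s (~1.2 TOPS)"
```

Actual throughput on a given network is lower and depends on memory bandwidth and layer mix; this is only the theoretical ceiling.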

The following figure shows the DPU Top-Level Block Diagram.

DPU Top-level Block Diagram

PE - Processing Engine, DPU - Deep Learning Processor Unit, APU - Application Processing Unit

The DPU IP can be implemented in the programmable logic (PL) of the selected Zynq® UltraScale+™ MPSoC device with direct connections to the processing system (PS). The DPU requires instructions to implement a neural network and accessible memory locations for input images as well as temporary and output data. A program running on the Application Processing Unit (APU) is also required to service interrupts and coordinate data transfers.

The following figure shows the sequence of operations performed on the DPU device.

The following sequence of steps is performed to access and run face detection using the DPU device:

  1. Initialize the DPU device

  2. Instantiate a DPU task from the DPU kernel and allocate the corresponding DPU memory buffer

  3. Set the input image on the created DPU task

  4. Run the DPU task to find faces in the input image

  5. Uninitialize the DPU device

1.3.3 GStreamer Pipelines Flow

The GStreamer plugin demonstrates the DPU capabilities together with the Xilinx VCU encoder's ROI (Region of Interest) feature. The plugin detects the ROI (i.e., face co-ordinates) in input frames using the DPU IP and passes the detected ROI information to the Xilinx VCU encoder. The following figure shows the data flow for the GStreamer pipeline of the stream-out use case.

Block Diagram of Stream-out Pipeline

fd = v4l2 frame data, fd' = DPU compatible frame data

As shown in the figure above, the stream-out GStreamer pipeline performs the following operations:

  1. v4l2src captures data from HDMI-Rx in NV12 format and passes it to the tee element, which splits the input stream to the metaaffixer and preprocessor elements.

  2. The preprocessor (v4l2convert GStreamer plugin) scales the input frame down to 640x360 resolution and converts the data to BGR format, per the input requirements of the DPU.

  3. The 360p BGR frame is provided to the DPU IP (via the xfilter plugin) as input to find the ROI (i.e., face co-ordinates).

  4. The extracted ROI information is passed to the metaaffixer along with the original capture stream (via tee), which embeds the ROI metadata into the original stream.

  5. The ROI generator inserts ROI SEI events into the stream based on the ROI metadata and passes them to the VCU encoder, which uses the received ROI information to encode ROI regions at higher quality than non-ROI regions.

  6. The encoded data is streamed out using the RTP protocol.
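
On the packetization in the last step: the stream-out commands later on this page run mpegtsmux with alignment=7, i.e. seven 188-byte MPEG-TS packets per RTP payload. A quick sketch of the resulting packet sizes (the header sizes assume plain RTP over UDP over IPv4, which is an assumption about the network stack, not something the TRD states):

```shell
ts_pkt=188     # MPEG-TS packet size in bytes
align=7        # mpegtsmux alignment used in the stream-out pipelines
payload=$(( ts_pkt * align ))       # TS bytes carried per RTP packet
rtp=12; udp=8; ipv4=20              # assumed header sizes
echo "RTP payload: ${payload} bytes; IP packet: $(( payload + rtp + udp + ipv4 )) bytes"
```

The 1356-byte IP packet fits comfortably under a standard 1500-byte Ethernet MTU, which is why alignment=7 is the conventional choice for TS-over-RTP streaming.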

The following figure shows the data flow for the GStreamer pipeline of stream-in use case.

Block Diagram of Stream-in Pipeline

fd = VCU decoded frame data

As shown in the figure above, the stream-in GStreamer pipeline performs the following operations:

  1. The encoded data is streamed in using the RTP protocol

  2. The Xilinx VCU decoder decodes the data

  3. The decoded data is displayed on the HDMI-Tx display

1.4 Software Tools and System Requirements

Hardware

Required:

  • ZCU106 evaluation board rev 1.0 with power cable

  • Monitor with HDMI input supporting 3840x2160 resolution or 1920x1080 resolution (e.g. LG 27UD88, Samsung LU28ES90DS/XL)

  • HDMI 2.0 certified cable

  • Class-10 SD card

  • HDMI Receiver

  • Ethernet cable

Optional:

  • USB pen drive formatted with the FAT32 file system and hub

  • SATA drive formatted with the FAT32 file system, external power supply, and data cable

Software Tools

Required:

Download, Installation, and Licensing of Vivado Design Suite 2021.1

The Vivado Design Suite User Guide explains how to download and install the Vivado® Design Suite tools, which include the Vivado Integrated Design Environment (IDE), High-Level Synthesis tool, and System Generator for DSP. This guide also provides information about licensing and administering evaluation and full copies of Xilinx design tools and intellectual property (IP) products. The Vivado Design Suite can be downloaded from here.

LogiCORE IP Licensing

The following IP cores require a license to build the design.

  • Video Mixer- Included with Vivado - PG243

  • Video PHY Controller - Included with Vivado - PG230

  • HDMI-Rx/Tx Subsystem - Purchase license (Hardware evaluation available) - PG235 & PG236

  • Video Processing Subsystem (VPSS) - Included with Vivado - PG231

To obtain the LogiCORE IP license, please visit the respective IP product page and get the license.

The following table provides the performance information:

Resolution    FPS Achieved
4Kp30         30
1080p30       30

1.5 Board Setup

This section describes the ZCU106 board setup for running the ROI design.

  1. Connect the Micro USB cable into the ZCU106 Board Micro USB port J83, and the other end into an open USB port on the host PC. This cable is used for UART over USB communication.

  2. Insert the SD card with the images copied into the SD card slot J100. Please find here how to prepare the SD card for a specific design.

  3. Set the SW6 switches as shown in the below Figure. This configures the boot settings to boot from SD.

  4. Connect 12V Power to the ZCU106 6-Pin Molex connector

  5. Connect one end of HDMI cable to the board’s P7 stacked HDMI connector (lower port) and another end to HDMI source.

  6. Connect one end of HDMI cable to the board’s P7 stacked HDMI connector (upper port) and another end to the HDMI monitor

  7. For a USB storage device, connect the USB hub along with the mouse. (Optional)

  8. For SATA storage device, connect SATA data cable to SATA 3.0 port. (Optional)

  9. Set up a terminal session between a PC COM port and the serial port on the evaluation board (See the Determine which COM to use to access the USB serial port on the ZCU106 board for more details).

  10. Copy the VCU HDMI single stream ROI images into the SD card and insert the SD card on the board

  11. The images below show how to connect the interfaces on the ZCU106 board

1.6 Run Flow

The VCU HDMI Single-stream ROI TRD package is released with the source code, Vivado project, Petalinux BSP, and SD card image that enables the user to run the demonstration. It also includes the binaries necessary to configure and boot the ZCU106 board. Prior to running the steps mentioned in this wiki page, download the VCU ROI TRD package and extract its contents to a directory referred to as $TRD_HOME which is the home directory.

Refer to the link below to download the VCU HDMI single-stream ROI TRD package.

  • Zynq UltraScale+ MPSoC VCU HDMI single-stream RoI 2021.1 Download zip

TRD package contents are placed in the following directory structure.

rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/
├── apu
│   └── vcu_petalinux_bsp
│       └── xilinx-vcu-single-stream-roi-zcu106-v2021.1-final.bsp
├── dpu
│   ├── 0001-Added-ZCU106-configuration-to-support-DPU-in-ZCU106.patch
│   ├── dpu_conf.vh
│   └── vitis_platform
│       └── zcu106_dpu
├── image
│   ├── bootfiles
│   │   ├── bl31.elf
│   │   ├── linux.bif
│   │   ├── pmufw.elf
│   │   ├── system.bit
│   │   ├── system.dtb
│   │   ├── u-boot.elf
│   │   └── zynqmp_fsbl.elf
│   ├── README.txt
│   ├── sd_card
│   │   ├── boot
│   │   └── root
│   └── sd_card.img
├── pl
│   ├── constrs
│   │   └── vcu_roi.xdc
│   ├── designs
│   │   └── zcu106_ROI_HDMI
│   ├── prebuild
│   │   └── zcu106_ROI_HDMI_wrapper.xsa
│   ├── README.md
│   └── srcs
│       ├── hdl
│       └── ip
├── README.txt
└── zcu106_vcu_single_stream_roi_trd_sources_and_licenses.tar.gz

18 directories, 17 files

The snippet below shows the directory structure of the binary files placed in the $TRD_HOME/image/sd_card/boot directory.

├── image
│   └── sd_card
│       └── boot
│           ├── autostart.sh
│           ├── bd.hwh
│           ├── BOOT.BIN
│           ├── boot.scr
│           ├── dpu.xclbin
│           ├── Image
│           ├── setup.sh
│           ├── system.dtb
│           ├── vcu
│           │   └── configure_qos.sh
│           ├── vitis
│           │   └── densebox_640_360-zcu102_zcu104_kv260-r1.4.0.tar.gz
│           └── vvas
│               └── json
│                   └── kernel_ML.json

1.6.1 Preparing the SD card

There are three ways to prepare the SD card for booting. Each method is detailed below.

Using ready to test image

  • sd_card.img is available at rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/image/sd_card.img

  • All the required Vitis packages are already installed in ready to test rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/image/sd_card.img

Using Pre-built images

  • To create an SD card with two partitions, Boot (FAT32, bootable) and Root (EXT4), refer to this link.

  • Copy the boot content from rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/image/sd_card/boot to the Boot partition of the SD card

  • Extract rootfs.ext4 from rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/image/sd_card/root to the Root partition of the SD card

  • Boot the board with the flashed SD card

Use the Output of the Build Flow

  • To create an SD card with two partitions, Boot (FAT32, bootable) and Root (EXT4), refer to this link.

  • For the build flow, refer to these steps; copy the generated DPU build images (bd.hwh, BOOT.BIN, boot.scr, dpu.xclbin, Image, system.dtb) into the Boot partition of the SD card, and extract the generated rootfs.ext4 into the Root partition of the SD card

  • Copy the mentioned boot content (vcu, vitis, vvas, autostart.sh, setup.sh) from the rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/image/sd_card/boot/ directory to the Boot partition of the SD card

  • Boot the board with the flashed SD card

  • All the required Densebox models are already available in the rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1/image/sd_card/boot/vitis directory and are installed automatically during the first boot. Wait until the target setup completes and the models are installed.

1.6.2 GStreamer Pipelines using mediasrcbin plugin

This section covers the GStreamer pipelines using the mediasrcbin plugin for the serial and streaming ROI use cases. mediasrcbin is a Xilinx-specific plugin, a bin element on top of v4l2src, that parses and configures the media graph of a media device automatically.

Before running the GStreamer pipelines, run the modetest commands below to set the CRTC configuration.

modetest -D a00c0000.v_mix -s 39:3840x2160-30@AR24 -w 35:"alpha":0 &
modetest -D a00c0000.v_mix -s 39:3840x2160-30@AR24

Serial:

               ┌─→ v4l2convert → ivas_xfilter (DPU) ─┐
Capture → tee ─┤                                     ├─→ ivas_xmetaaffixer → ivas_xroigen → Encode → Decode → Display
               └─────────────────────────────────────┘

  • Run the following gst-launch-1.0 command for serial pipeline

    • Serial Pipeline

gst-launch-1.0 mediasrcbin media-device=/dev/media0 v4l2src0::io-mode=4 ! video/x-raw, width=3840, height=2160, format=NV12, framerate=30/1 ! tee name=t0 t0. ! queue ! v4l2convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! ivas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem0.sink_master ivas_xmetaaffixer name=scalem0 t0. ! queue ! scalem0.sink_slave_0 scalem0.src_slave_0 ! queue ! ivas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 insert-roi-sei=true ! queue ! omxh265enc gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 qp-mode=roi ! video/x-h265, profile=main, alignment=au ! queue ! omxh265dec internal-entropy-buffers=5 low-latency=0 split-input=true ! queue max-size-bytes=0 ! fpsdisplaysink name=fpssink text-overlay=false 'video-sink=kmssink bus-id=a00c0000.v_mix hold-extra-sample=1 show-preroll=false sync=true draw-roi=true roi-rectangle-thickness=3 roi-rectangle-color=<0,90,240>' sync=true -v
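
For context on the rate-control settings in the pipeline above: for the Xilinx OMX encoder, target-bitrate is in kb/s and cpb-size/initial-delay are in ms, so the coded picture buffer the encoder budgets for can be estimated as bitrate times CPB duration. A sketch of that arithmetic (the derived byte value is an estimate from those units, not a value reported by the encoder):

```shell
bitrate_kbps=1500   # target-bitrate from the pipeline
cpb_ms=1000         # cpb-size from the pipeline
cpb_bits=$(( bitrate_kbps * 1000 * cpb_ms / 1000 ))
cpb_bytes=$(( cpb_bits / 8 ))
echo "CPB holds ~${cpb_bytes} bytes (about 1 s of stream at the target bitrate)"
```

initial-delay=500 then means decoding may start once the buffer is roughly half full, trading a little latency for smoother rate control.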

Stream-out (Server):

               ┌─→ v4l2convert → ivas_xfilter (DPU) ─┐
Capture → tee ─┤                                     ├─→ ivas_xmetaaffixer → ivas_xroigen → Encode → Stream-out
               └─────────────────────────────────────┘

  • Set IP address for server:

    ifconfig eth0 192.168.25.90
  • Run the following gst-launch-1.0 command for stream-out pipeline

    • Stream-out Pipeline

      gst-launch-1.0 mediasrcbin media-device=/dev/media0 v4l2src0::io-mode=4 ! video/x-raw, width=3840, height=2160, format=NV12, framerate=30/1 ! tee name=t0 t0. ! queue ! v4l2convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! ivas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem0.sink_master ivas_xmetaaffixer name=scalem0 t0. ! queue ! scalem0.sink_slave_0 scalem0.src_slave_0 ! queue ! ivas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 insert-roi-sei=true ! queue ! omxh265enc gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 qp-mode=roi ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5004
  • Here 192.168.25.89 is the host/client IP address and 5004 is the port number.

  • Make sure the HDMI-Rx source is configured to 4Kp30 mode

Stream-in (Client): Stream-in → Decode → Display

  • Set IP address for the client:

    ifconfig eth0 192.168.25.89
  • Run the following gst-launch-1.0 command for the stream-in pipeline, where 5004 is the port number

    • Stream-in Pipeline

      gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux name=demux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 split-input=true ! queue max-size-bytes=0 ! fpsdisplaysink name=fpssink text-overlay=false 'video-sink=kmssink bus-id=a00c0000.v_mix hold-extra-sample=1 show-preroll=false sync=true draw-roi=true roi-rectangle-thickness=3 roi-rectangle-color=<0,90,240>' sync=true -v

For 1080p30, replace width=3840 with width=1920 and height=2160 with height=1080 in all commands.
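
The 4K-to-1080p substitution above can be applied mechanically, for example with a sed expression of this shape (illustrative; in practice you would run it over a script file holding the pipeline command):

```shell
cmd='video/x-raw, width=3840, height=2160, format=NV12, framerate=30/1'
cmd_1080p=$(printf '%s' "$cmd" | sed -e 's/width=3840/width=1920/' -e 's/height=2160/height=1080/')
echo "$cmd_1080p"
```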

1.6.3 GStreamer Pipelines using v4l2src plugin

This section covers the GStreamer pipelines using the v4l2src plugin for the serial and streaming ROI use cases. Before running the GStreamer pipelines, run the modetest command for HDMI-Tx and the media-ctl command for HDMI-Rx as mentioned below.

Before running the GStreamer pipelines, run the modetest commands below to set the CRTC configuration.

modetest -D a00c0000.v_mix -s 39:3840x2160-30@AR24 -w 35:"alpha":0 &
modetest -D a00c0000.v_mix -s 39:3840x2160-30@AR24

Run the media-ctl commands below to set the scaler configuration when the HDMI input source is configured for 4Kp30 resolution and the RBG888_1X24 color format.

media-ctl -d /dev/media0 -V "\"a0040000.v_proc_ss\":0 [fmt:RBG888_1X24/3840x2160 field:none]"
media-ctl -d /dev/media0 -V "\"a0040000.v_proc_ss\":1 [fmt:VYYUYY8_1X24/3840x2160 field:none]"
  • Make sure HDMI-Rx is configured for 4Kp resolution and RBG888_1x24 color format

  • The v_proc_ss pad 0 format should match the HDMI-Rx input source resolution and color format, and the v_proc_ss pad 1 format should match the GStreamer pipeline resolution and color format

  • For more information on JSON configurations used in the following pipelines, please refer to VVAS JSON object members

Serial:

               ┌─→ v4l2convert → ivas_xfilter (DPU) ─┐
Capture → tee ─┤                                     ├─→ ivas_xmetaaffixer → ivas_xroigen → Encode → Decode → Display
               └─────────────────────────────────────┘

  • Run the following gst-launch-1.0 command for serial pipeline

    • Serial Pipeline

      gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, width=3840, height=2160, format=NV12, framerate=30/1 ! tee name=t0 t0. ! queue ! v4l2convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! ivas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem0.sink_master ivas_xmetaaffixer name=scalem0 t0. ! queue ! scalem0.sink_slave_0 scalem0.src_slave_0 ! queue ! ivas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 insert-roi-sei=true ! queue ! omxh265enc gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 qp-mode=roi ! video/x-h265, profile=main, alignment=au ! queue ! omxh265dec internal-entropy-buffers=5 low-latency=0 split-input=true ! queue max-size-bytes=0 ! fpsdisplaysink name=fpssink text-overlay=false 'video-sink=kmssink bus-id=a00c0000.v_mix hold-extra-sample=1 show-preroll=false sync=true draw-roi=true roi-rectangle-thickness=3 roi-rectangle-color=<0,90,240>' sync=true -v

Stream-out (Server):

               ┌─→ v4l2convert → ivas_xfilter (DPU) ─┐
Capture → tee ─┤                                     ├─→ ivas_xmetaaffixer → ivas_xroigen → Encode → Stream-out
               └─────────────────────────────────────┘

  • Set IP address for server:

    ifconfig eth0 192.168.25.90
  • Run the following gst-launch-1.0 command for stream-out pipeline

    • Stream-out Pipeline

      gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, width=3840, height=2160, format=NV12, framerate=30/1 ! tee name=t0 t0. ! queue ! v4l2convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! ivas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem0.sink_master ivas_xmetaaffixer name=scalem0 t0. ! queue ! scalem0.sink_slave_0 scalem0.src_slave_0 ! queue ! ivas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 insert-roi-sei=true ! queue ! omxh265enc gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 qp-mode=roi ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5004
  • Here 192.168.25.89 is the host/client IP address and 5004 is the port number.

  • Make sure the HDMI-Rx source is configured to 4Kp30 mode

Stream-in (Client): Stream-in → Decode → Display

  • Set IP address for the client:

    ifconfig eth0 192.168.25.89
  • Run the following gst-launch-1.0 command for the stream-in pipeline, where 5004 is the port number

    • Stream-in Pipeline

      gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux name=demux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 split-input=true ! queue max-size-bytes=0 ! fpsdisplaysink name=fpssink text-overlay=false 'video-sink=kmssink bus-id=a00c0000.v_mix hold-extra-sample=1 show-preroll=false sync=true draw-roi=true roi-rectangle-thickness=3 roi-rectangle-color=<0,90,240>' sync=true -v

For 1080p30, replace width=3840 with width=1920 and height=2160 with height=1080 in all commands.

1.7 Build Flow

Refer to the link below to download the VCU HDMI single-stream ROI TRD package.

  • Zynq UltraScale+ MPSoC VCU HDMI single-stream RoI 2021.1 Download zip

Unzip the released package.

unzip </path/to/downloaded/zipfile>/rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1.zip

The following tutorials assume that the $TRD_HOME environment variable is set as given below.

export TRD_HOME=</path/to/downloaded/zipfile>/rdf0617-zcu106-vcu-hdmi-single-stream-roi-2021-1

1.7.1 Hardware Build Flow

This section explains the steps to build the hardware platform and generate XSA using the Vivado tool.

Refer to the Vivado Design Suite User Guide: Using the Vivado IDE, UG893, for setting up the Vivado environment.

Refer to the vivado-release-notes-install-license(UG973) for installation.

Make sure that the necessary IP licenses are in place.

On Linux:

  • Open a Linux terminal

  • Change directory to $TRD_HOME/pl folder

  • Source Vivado settings.sh

    bash
    source <path/to/Vivado-installer>/tool/Vivado/2021.1/settings64.sh
  • Run the following command to create the Vivado IPI project and invoke the GUI and generate XSA required for the platform

    vivado -source ./designs/zcu106_ROI_HDMI/project.tcl
  • The project.tcl script does the following

    • Creates project in the ../pl/build/zcu106_ROI_HDMI directory

    • Creates IPI Block design with platform interfaces

    • Runs Synthesis and Implementation

    • Builds bitstream with no accelerators

    • Exports the hardware to XSA (zcu106_ROI_HDMI_wrapper.xsa)

  • zcu106_ROI_HDMI_wrapper.xsa is stored at location $TRD_HOME/pl/build/zcu106_ROI_HDMI/zcu106_ROI_HDMI.xsa/

  • This XSA is used by Petalinux for platform creation and also by the Vitis Tool for DPU Kernel Integration.

After executing the script, the Vivado IPI block design comes up as shown in the below figure.

The Platform Setup tab has the settings and AXI ports, as shown in the image below.

 

1.7.1.1 Platform Interfaces

The screenshots below show the platform interfaces that have been made available to the Vitis tool for linking the acceleration IP dynamically.

In the case of this reference design, the DPU Kernel will be inserted.

 

After the DPU Kernel is integrated dynamically with the platform using Vitis Flow, the connections are as shown below

  • The DPU Data ports are connected to the HP0 Port(S_AXI_HP0_FPD) of PS.

  • The DPU Instruction port is connected to the S_AXI_HPC1 port of PS

  • The DPU S_AXI_Control port is connected to the M_AXI_HPM0_LPD port of PS through interconnect_axilite

  • The DPU interrupt is connected to the axi interrupt controller dynamically

1.7.2 Petalinux build Flow

This tutorial shows how to build the Linux image and boot image using the PetaLinux build tool.

PetaLinux Installation: Refer to the PetaLinux Tools Documentation (UG1144) for installation.

It is recommended to follow the build steps in sequence

  • Source Petalinux settings.sh

    bash
    source <path/to/petalinux-installer>/tool/petalinux-v2021.1-final/settings.sh
  • Create PetaLinux project

    cd $TRD_HOME/apu/vcu_petalinux_bsp
    petalinux-create -t project -s xilinx-vcu-single-stream-roi-zcu106-v2021.1-final.bsp
  • Configure the PetaLinux project

    cd xilinx-vcu-single-stream-roi-zcu106-v2021.1-final
    petalinux-config --silentconfig --get-hw-description=<Path to directory of XSA>
    • For example:

    • Using the prebuilt XSA:

      petalinux-config --silentconfig --get-hw-description=$TRD_HOME/pl/prebuild/
    • Using the XSA generated by running the hardware project.tcl script:

      petalinux-config --silentconfig --get-hw-description=$TRD_HOME/pl/build/zcu106_ROI_HDMI/zcu106_ROI_HDMI.xsa/
  • zcu106_ROI_HDMI_wrapper.xsa is available inside the zcu106_ROI_HDMI.xsa directory

  • Build the PetaLinux project

    petalinux-build

To build VVAS SDK externally, follow the steps mentioned here.

1.7.3 Prepare Build Artifacts for Platform Creation

To prepare the artifacts required for ZCU106 Vitis platform creation, follow the steps below after the PetaLinux build:

  • Go to the petalinux build image directory

    cd $TRD_HOME/apu/vcu_petalinux_bsp/xilinx-vcu-single-stream-roi-zcu106-v2021.1-final/images/linux
  • Create a linux.bif file, as below, in the images/linux directory. The linux.bif file is required to create the ZCU106 Vitis platform and contains information about the boot components. After the ZCU106 Vitis platform is created, this linux.bif file becomes part of the platform and is required to build the DPU and generate the final BOOT.BIN

    /* linux */
    the_ROM_image:
    {
      [bootloader, destination_cpu = a53-0] <zynqmp_fsbl.elf>
      [pmufw_image] <pmufw.elf>
      [destination_device=pl] <bitstream>
      [destination_cpu=a53-0, exception_level=el-3, trustzone] <bl31.elf>
      [destination_cpu=a53-0, load=0x00100000] <system.dtb>
      [destination_cpu=a53-0, exception_level=el-2] <u-boot.elf>
    }
  • Copy the generated images into the boot and image directories using the commands below. The created linux.bif is copied into the boot directory.

    mkdir boot image
    cp linux.bif bl31.elf pmufw.elf system.dtb u-boot.elf zynqmp_fsbl.elf boot/.
    cp boot.scr system.dtb image/.
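
Before moving on to platform creation, it can be worth sanity-checking that the boot directory holds every component that linux.bif references. A small helper sketch (the function name and output strings are our own, not part of the TRD):

```shell
# Verify that all boot components referenced by linux.bif are present.
check_boot_dir() {
  dir="$1"; missing=0
  for f in linux.bif bl31.elf pmufw.elf system.dtb u-boot.elf zynqmp_fsbl.elf; do
    [ -f "$dir/$f" ] || { echo "missing: $f"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "boot directory complete"
}
```

Run as `check_boot_dir boot` from the images/linux directory; any file it reports missing will otherwise surface later as a platform or BOOT.BIN generation failure.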

1.7.4 ZCU106 Platform Creation

This section shows how to create a Vitis acceleration platform for the zcu106 using the Vitis IDE.

  • A ready-to-use Vitis platform is available at $TRD_HOME/dpu/vitis_platform/zcu106_dpu

  • It is recommended to follow the build steps in sequence

source <vitis install path>/Vitis/2021.1/settings64.sh
vitis &

Choose project workspace and click on Launch to begin

 

You can create the workspace in any directory, for example $TRD_HOME/vitis_workspace.

Launch the New Platform Project dialog box using the following step:
Go to File > New > Platform Project

Provide a project name “zcu106_dpu“ in the Platform project name field and click Next as shown in the below figure

In the Platform Project dialog box, choose Create a new platform from hardware specification (XSA) and provide the XSA path

  • For prebuild XSA use $TRD_HOME/pl/prebuild/ path

  • For generated XSA using vivado build use $TRD_HOME/pl/build/zcu106_ROI_HDMI/zcu106_ROI_HDMI_wrapper.xsa path

Use the below setting under Software Specification

  1. Select linux as the operating system, psu_cortexa53 as a processor, and 64-bit architecture to create the platform

  2. Uncheck the box for Generate boot components

  3. Click Finish to create your platform project

 

Go to zcu106_dpu > psu_cortexa53 > linux on psu_cortexa53 and add path of required files in Domain: linux_domain

Provide the BIF file, boot directory, and image paths as shown in the figure below.

  • Use <Path to Petalinux Project>/xilinx-vcu-single-stream-roi-zcu106-v2021.1-final/images/linux/boot/linux.bif for the Bif File

  • Use <Path to Petalinux Project>/xilinx-vcu-single-stream-roi-zcu106-v2021.1-final/images/linux/boot/ for the Boot Components Directory

  • Use <Path to Petalinux Project>/xilinx-vcu-single-stream-roi-zcu106-v2021.1-final/images/linux/image for the FAT32 Partition Directory

Right-click the zcu106_dpu project in the Explorer tab and click Build Project to generate the platform, as shown in the figure below.
The Console tab shows the status of the platform generation.

As shown in the image below, zcu106_dpu.xpfm is created under zcu106_dpu > export > zcu106_dpu > zcu106_dpu.xpfm
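An optional check from the command line can confirm the export completed before moving on to the DPU build. This sketch assumes the example workspace location $TRD_HOME/vitis_workspace used earlier; adjust the path if you chose a different workspace:

```shell
# Verify the exported platform file exists; if not, re-run Build Project in Vitis.
XPFM=$TRD_HOME/vitis_workspace/zcu106_dpu/export/zcu106_dpu/zcu106_dpu.xpfm
if [ -f "$XPFM" ]; then
    echo "Platform ready: $XPFM"
else
    echo "Platform not found - re-run Build Project" >&2
fi
```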

1.7.5 DPU Build

  • It is recommended to follow the build steps in sequence

  • Make sure the parted utility is installed before starting the DPU build

  • Clone the Vitis-AI repository and apply the patch to add ZCU106 support to the Vitis DPU TRD.

git clone https://github.com/Xilinx/Vitis-AI.git
cd Vitis-AI
git checkout tags/v1.4
git am $TRD_HOME/dpu/0001-Added-ZCU106-configuration-to-support-DPU-in-ZCU106.patch
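The parted prerequisite noted above can be verified up front. A minimal sketch (the apt-get line is an example for Debian/Ubuntu hosts; use your distribution's package manager otherwise):

```shell
# Check that the parted utility is on PATH before starting the DPU build.
if ! command -v parted >/dev/null 2>&1; then
    echo "parted not found; install it, e.g.: sudo apt-get install parted" >&2
fi
```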

The following steps assume that the DPU_TRD_HOME environment variable is set as shown below.

export DPU_TRD_HOME=<Vitis AI path>/dsa/DPU-TRD

The following steps assume that the Vitis and XRT environment variables are set as shown below.

  • Open a Linux terminal and use a Bash shell.

    source <vitis install path>/Vitis/2021.1/settings64.sh
    source <xrt install path>/packages/setenv.sh
  • The default DPU configuration is B4096 with RAM_USAGE_LOW, CHANNEL_AUGMENTATION_ENABLE, DWCV_ENABLE, POOL_AVG_ENABLE, and RELU_LEAKYRELU_RELU6. Read the $DPU_TRD_HOME/prj/Vitis/dpu_conf.vh file for the details of the DPU configuration; all available configurations are described in PG338. Modify $DPU_TRD_HOME/prj/Vitis/dpu_conf.vh to change the default settings.

    • Copy the dpu_conf.vh file below to enable URAM for the ZCU106 DPU build; with this configuration the DPU uses URAM instead of BRAM.

      cp $TRD_HOME/dpu/dpu_conf.vh $DPU_TRD_HOME/prj/Vitis/dpu_conf.vh
  • Build the hardware design

    cd $DPU_TRD_HOME/prj/Vitis
    export EDGE_COMMON_SW=<Path to Petalinux Project>/xilinx-vcu-roi-zcu106-v2021.1-final/images/linux/
    export SDX_PLATFORM=<Path to Vitis Workspace>/zcu106_dpu/export/zcu106_dpu/zcu106_dpu.xpfm
    make KERNEL=DPU DEVICE=zcu106_dpu
  • The generated SD card files are in $DPU_TRD_HOME/prj/Vitis/binary_container_1/sd_card (SD card format)

  • Copy the generated DPU build images from the $DPU_TRD_HOME directory to the $TRD_HOME directory

    cd $DPU_TRD_HOME/prj/Vitis/binary_container_1/sd_card
    cp BOOT.BIN boot.scr dpu.xclbin Image system.dtb $TRD_HOME/image/sd_card/boot
    cp $TRD_HOME/apu/vcu_petalinux_bsp/xilinx-vcu-roi-zcu106-v2021.1-final/images/linux/rootfs.ext4 $TRD_HOME/image/sd_card/root
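Before flashing the SD card, the copy step above can be double-checked with an optional sketch like the following (it only assumes the $TRD_HOME staging layout used throughout this section):

```shell
# Confirm all boot artifacts were copied into the SD card staging directory.
for f in BOOT.BIN boot.scr dpu.xclbin Image system.dtb; do
    [ -f "$TRD_HOME/image/sd_card/boot/$f" ] || echo "missing: $f"
done
```

No output means the boot directory is complete.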

2 Other Information

2.1 Known Issues

2.2 Limitations

3 Appendix A - HDMI-Rx Link-up

  • The HDMI source can be locked to any resolution. To check the link status, resolution, and video node of the HDMI input source, run the media-ctl command below.

    media-ctl -p -d /dev/media0

When an HDMI source is connected at 4Kp30 resolution, the output shows:

root@vcu_roi_trd-2021_1:/media/card# media-ctl -p -d /dev/media0
Media controller API version 5.10.0

Media device information
------------------------
driver          xilinx-video
model           Xilinx Video Composite Device
serial
bus info
hw revision     0x0
driver version  5.10.0

Device topology
- entity 1: vcap_hdmi_input_v_proc_ss_0 out (1 pad, 1 link)
            type Node subtype V4L flags 0
            device node name /dev/video0   -----> Video node for HDMI-Rx source
        pad0: Sink
                <- "a0040000.v_proc_ss":1 [ENABLED]

- entity 5: a0040000.v_proc_ss (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev0
        pad0: Sink
                [fmt:RBG888_1X24/3840x2160 field:none]
                <- "a0000000.v_hdmi_rx_ss":0 [ENABLED]
        pad1: Source
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                -> "vcap_hdmi_input_v_proc_ss_0 out":0 [ENABLED]

- entity 8: a0000000.v_hdmi_rx_ss (1 pad, 1 link)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev1
        pad0: Source
                [fmt:RBG888_1X24/3840x2160 field:none]
                [dv.caps:BT.656/1120 min:0x0@25000000 max:4096x2160@297000000
                        stds:CEA-861,DMT,CVT,GTF
                        caps:progressive,reduced-blanking,custom]
                [dv.detect:BT.656/1120 3840x2160p30 (4400x2250) stds:CEA-861 flags:CE-video]   -----> Resolution and frame-rate of HDMI-Rx source
                -> "a0040000.v_proc_ss":0 [ENABLED]

Check the resolution and frame rate in the dv.detect field under the v_hdmi_rx_ss node.
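Instead of reading the full topology, the output can be filtered down to just the detection/link-status lines. This shortcut assumes the media device is /dev/media0, as in the output above:

```shell
# Show only the dv.detect (detected timing) or dv.query (link status) lines.
media-ctl -p -d /dev/media0 | grep -E 'dv\.(detect|query)'
```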

When the HDMI source is not connected, it shows:

root@vcu_roi_trd-2021_1:/media/card# media-ctl -p -d /dev/media0
Media controller API version 5.10.0

Media device information
------------------------
driver          xilinx-video
model           Xilinx Video Composite Device
serial
bus info
hw revision     0x0
driver version  5.10.0

Device topology
- entity 1: vcap_hdmi_input_v_proc_ss_0 out (1 pad, 1 link)
            type Node subtype V4L flags 0
            device node name /dev/video0   -----> Video node for HDMI-Rx source
        pad0: Sink
                <- "a0040000.v_proc_ss":1 [ENABLED]

- entity 5: a0040000.v_proc_ss (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev0
        pad0: Sink
                [fmt:RBG888_1X24/3840x2160 field:none]
                <- "a0000000.v_hdmi_rx_ss":0 [ENABLED]
        pad1: Source
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                -> "vcap_hdmi_input_v_proc_ss_0 out":0 [ENABLED]

- entity 8: a0000000.v_hdmi_rx_ss (1 pad, 1 link)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev1
        pad0: Source
                [fmt:RBG888_1X24/3840x2160 field:none]
                [dv.caps:BT.656/1120 min:0x0@25000000 max:4096x2160@297000000
                        stds:CEA-861,DMT,CVT,GTF
                        caps:progressive,reduced-blanking,custom]
                [dv.query:no-link]   ------> HDMI-Rx link status
                -> "a0040000.v_proc_ss":0 [ENABLED]

Here, dv.query:no-link under the v_hdmi_rx_ss node indicates that the HDMI-Rx source is either not connected or not active (try waking the source device, for example by pressing a key on its remote).