Zynq UltraScale+ MPSoC VCU Single Sensor ROI 2020.1
This page provides all the information related to the VCU Single Sensor ROI design.
Table of Contents
- 1 Overview
- 2 Other Information
- 3 Appendix A - Input Configuration File (input.cfg)
- 4 Appendix B - MIPI-Rx/HDMI-Tx Link-up and GStreamer Commands
1 Overview
The primary goal of this VCU Single Sensor ROI design is to demonstrate the use of the DPU (Deep Learning Processor Unit) block for extracting the ROI (Region of Interest) from input video frames, and to use this information to perform ROI-based encoding using the VCU (Video Codec Unit) encoder hard block present in Zynq UltraScale+ EV devices.
The design serves as a platform to accelerate deep neural network inference algorithms using the DPU and to demonstrate the ROI feature of the VCU encoder. The design uses a deep convolutional neural network (CNN) named Densebox, running on the DPU, to extract ROI information (a ‘face’ in this case).
The design uses the Vivado IPI flow to build the hardware platform and the Xilinx Yocto PetaLinux flow for the software design. It uses Xilinx IPs and software drivers to demonstrate the capabilities of the different components. The Vitis platform is created from the Vivado/PetaLinux build artifacts, and the Vitis acceleration flow is then used to insert the DPU into the platform and create the final bitstream.
The following figure shows one of the use cases (serial pipeline) with face detection and enhanced ROI quality on ZCU106.
Serial: Face detection and enhanced ROI quality on ZCU106
The following figure shows one of the use cases (streaming pipeline) with face detection and enhanced ROI quality on ZCU106.
Streaming: Face detection and enhanced ROI quality on ZCU106
1.1 System Architecture
The following figure shows the system level diagram which includes the components of the evaluation board.
1.2 Hardware Architecture
This section gives a detailed description of the blocks used in the hardware design. The functional block diagram of the design is shown in the below figure.
There are seven primary sections in the design.
MIPI Capture Pipeline:
Captures video frame buffers from Capture source in 4K Resolution, NV12 Format
Writes the buffers into DDR Memory with Frame Buffer Write IP
Multi-scaler Block:
Reads the Video Buffers from DDR Memory
Scales the buffer down to 640x360 (the input size expected by the DPU)
Converts the format from NV12 to BGR
Writes the Down-scaled buffer to DDR Memory
DPU Block:
Reads the downscaled buffers from DDR Memory
Runs the Densebox algorithm to generate the ROI information for each frame buffer
Passes the ROI information to VCU Encoder
VCU Encoder:
Reads the 4K NV12 Buffer from DDR Memory
Receives the ROI metadata from DPU IP
Encodes the video buffers based on the ROI Information
Finally writes the encoded stream to DDR Memory
PS GEM:
Reads the encoded stream from DDR Memory
Streams out the encoded stream via Ethernet
VCU Decoder:
Decodes the received encoded frame and writes to memory
HDMI-Tx:
Displays the decoded frames on HDMI Display
This design supports the following video interfaces:
Sources | MIPI CSI-2 Rx (LI-IMX274MIPI-FMC image sensor) |
---|---|
Sinks | HDMI-Tx |
VCU Codec | HEVC/AVC encode and decode |
DPU | Densebox face detection |
Streaming Interfaces | 1G Ethernet PS GEM |
Video Format | NV12 |
Supported Resolution | 4Kp30 |
1.3 VCU ROI Software
1.3.1 GStreamer Pipeline
The GStreamer plugin demonstrates the DPU capabilities together with the Xilinx VCU encoder's ROI (Region of Interest) feature. The plugin detects the ROI (face co-ordinates) in input frames using the DPU IP and passes the detected ROI information to the Xilinx VCU encoder. The following figure shows the data flow for the GStreamer pipeline of the stream-out use case.
Block Diagram of Stream-out Pipeline
fd = v4l2 frame data, fd' = DPU compatible frame data
As shown in the above figure, the stream-out GStreamer pipeline performs the below list of operations:
v4l2src captures the data from MIPI-Rx in NV12 format and passes it to the xlnxroivideo1detect GStreamer plugin
The xlnxroivideo1detect GStreamer plugin scales the frame down to 640x360 resolution and converts the data into BGR format
The 640x360 BGR frame is provided to the DPU IP as input to find the ROI (i.e. face co-ordinates)
The extracted ROI information is passed to the VCU encoder
The encoder uses the received ROI information to encode the ROI regions at higher quality than the non-ROI regions
The encoded data is streamed out using the RTP protocol
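The complete stream-out pipeline is assembled by vcu_gst_app from input.cfg. As a hedged illustration only, a hand-built gst-launch-1.0 pipeline of the same shape could look like the sketch below; the capture device node, client IP address, and port are placeholders, xlnxroivideo1detect is shown with default properties, and the encoder settings reuse the parameters quoted later in the Compatibility section of this page.
$ gst-launch-1.0 -v \
    v4l2src device=/dev/video0 ! \
    "video/x-raw, format=NV12, width=3840, height=2160, framerate=30/1" ! \
    xlnxroivideo1detect ! \
    omxh265enc qp-mode=roi gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 \
               num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false ! \
    rtph265pay ! udpsink host=192.168.25.89 port=5004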
The following figure shows the data flow for the GStreamer pipeline of stream-in use cases.
Block Diagram of Stream-in Pipeline
fd = Gst-Omx Frame data
As shown in the above figure, the stream-in GStreamer pipeline performs the below list of operations:
The encoded data is streamed in using the RTP protocol
The Xilinx VCU decoder decodes the data
The decoded data is displayed on the HDMI-Tx display
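A corresponding hedged stream-in sketch is shown below; the port and RTP caps are placeholders that must match the sender, and the kmssink bus-id reuses the video mixer device node that appears in the modetest command later on this page.
$ gst-launch-1.0 -v \
    udpsrc port=5004 caps="application/x-rtp, media=video, clock-rate=90000, payload=96, encoding-name=H265" ! \
    rtpjitterbuffer latency=1000 ! \
    rtph265depay ! h265parse ! omxh265dec ! queue ! \
    kmssink bus-id="a00c0000.v_mix"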
The below figure shows the xlnxroivideo1detect GStreamer plugin data flow.
As shown in the above figure, the xlnxroivideo1detect GStreamer plugin performs the following list of operations:
DPU is initialized
DPU GStreamer plugin receives the data frame from MIPI-Rx through a v4l2src plugin
Create the DPU task
Scale the input frame to 640x360 resolution using Xilinx Scaler IP
Convert the input frame data format from NV12 to BGR format using Xilinx Color Space Converter(CSC) soft IP
Prepare the OpenCV image using BGR data
Pass the intermediate OpenCV image to the DPU
Run the DPU task
Extract the ROI(face) co-ordinates from the DPU output
Map the detected face co-ordinates to the original input frame resolution
Fill the ROI metadata buffer using extracted ROI (face) co-ordinates
Pass the ROI metadata buffer and input NV12 frame data buffer to the Xilinx VCU encoder
De-initialize the DPU task
1.3.2 DPU(Deep Learning Processor Unit)
DPU is a programmable engine optimized for deep neural networks. It is a group of parameterizable IP cores pre-implemented on the hardware with no place and route required. The DPU is released with the Vitis AI specialized instruction set, allowing efficient implementation of many deep learning networks.
Refer to the DPU IP product guide (PG338) and UG1354 for more details on the DPU.
The following figure shows the DPU Top-Level Block Diagram.
DPU Top-level Block Diagram
The DPU IP can be implemented in the programmable logic (PL) of the selected Zynq® UltraScale+™ MPSoC device with direct connections to the processing system (PS). The DPU requires instructions to implement a neural network and accessible memory locations for input images as well as temporary and output data. A program running on the application processing unit (APU) is also required to service interrupts and coordinate data transfers.
The following figure shows the sequence of operations performed on the DPU device.
The following sequence of steps is performed to access and run face detection using the DPU device:
DPU device is initialized
Instantiate a DPU Task from DPU Kernel and allocate corresponding DPU memory buffer
Set the input image to created DPU task
Run the DPU task to find the faces from the input image
DPU device is uninitialized
1.4 Software Tools and System Requirements
Hardware
Required:
ZCU106 evaluation board (rev C/D/E/F/1.0) with power cable
Monitor with HDMI input supporting 3840x2160 resolution or 1920x1080 resolution
HDMI 2.0 certified cable
Class-10 SD card
Ethernet cable
Optional:
USB pen drive formatted with the FAT32 file system and hub
SATA drive formatted with the FAT32 file system, external power supply, and data cable
Software Tools
Required:
Linux host machine for all tool flow tutorials (see UG1144 for detailed OS requirements)
PetaLinux Tools version 2020.1 (see UG1144 for installation instructions)
Git, a distributed version control system
Serial terminal emulator, e.g. Tera Term
Download, Installation, and Licensing of Vivado Design Suite 2020.1
The Vivado Design Suite User Guide explains how to download and install the Vivado® Design Suite tools, which include the Vivado Integrated Design Environment (IDE), High-Level Synthesis tool, and System Generator for DSP. This guide also provides information about licensing and administering evaluation and full copies of Xilinx design tools and intellectual property (IP) products. The Vivado Design Suite can be downloaded from here.
LogiCORE IP Licensing
The following IP cores require a license to build the design.
Video Mixer - Included with Vivado - PG243
Video PHY Controller - Included with Vivado - PG203
HDMI-Rx/Tx Subsystem - Purchase license (Hardware evaluation available) - PG235 & PG236
Video Processing Subsystem (VPSS) - Included with Vivado - PG231
MIPI CSI Controller Subsystems (mipi_csi2_rx_subsystem) - Purchase license (Hardware evaluation available) - PG232
To obtain the LogiCORE IP license, please visit the respective IP product page and get the license.
AR# 44029 - Licensing - LogiCORE IP Core licensing questions
Compatibility
The reference design has been tested successfully with the following user-supplied components.
HDMI Monitor:
Make/Model | Resolutions |
---|---|
LG 27UD88 | 3840 x 2160 @ 30Hz |
Samsung LU28ES90DS/XL | 3840 x 2160 @ 30Hz |
Cable:
HDMI 2.0 compatible cable
The below table provides the performance information:
Resolution | FPS Achieved |
---|---|
4Kp30 | 27 - 30 |
1080p30 | 30 |
The above FPS numbers are measured with the gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false qp-mode=roi encoder parameters for AVC and HEVC.
1.5 Board Setup
This section provides the information on the ZCU106 board setup required to run the ROI design.
Connect the Micro USB cable into the ZCU106 Board Micro USB port J83, and the other end into an open USB port on the host PC. This cable is used for UART over USB communication.
Insert the SD card (with the images copied onto it) into the SD card slot J100. Please find here how to prepare the SD card for a specific design.
Set the SW6 switches as shown in the below Figure. This configures the boot settings to boot from SD.
Connect 12V Power to the ZCU106 6-Pin Molex connector
Connect one end of HDMI cable to the board’s P7 stacked HDMI connector (upper port) and another end to HDMI monitor
For a USB storage device, connect the USB hub along with the mouse. (Optional)
For SATA storage device, connect SATA data cable to SATA 3.0 port. (Optional)
For MIPI CSI-2, insert the LI-IMX274MIPI-FMC image sensor daughter card into the FMC0 connector and set VADJ to 1.2V
Important Note: VADJ on the FMC0 connector must be set to 1.2V. See FMC VADJ Voltage Settings for more information
Set up a terminal session between a PC COM port and the serial port on the evaluation board (See the Determine which COM to use to access the USB serial port on the ZCU106 board for more details)
Copy the VCU Single Sensor ROI TRD images into the SD card and insert the SD card on the board
The below images show how to connect the interfaces on the ZCU106 board.
ZCU106 board connections
ZCU106 board connector slots
1.6 Run Flow
The VCU Single Sensor ROI package is released with the source code, Vivado project, PetaLinux BSP, and SD card image that enable the user to run the demonstration. It also includes the binaries necessary to configure and boot the ZCU106 board. Prior to running the steps mentioned in this wiki page, download the VCU Single Sensor ROI package and extract its contents to a directory referred to as $TRD_HOME.
Refer to the below link to download the VCU Single Sensor ROI package.
TRD package contents are placed in the following directory structure.
rdf0428-zcu106-vcu-single-sensor-roi-2020-1
├── apu
│ └── vcu_petalinux_bsp
│ └── xilinx-vcu-roi-zcu106-v2020.1-final.bsp
├── dpu
│ └── 0001-Added-ZCU106-configuration-to-support-DPU-in-ZCU106.patch
├── image
│ ├── boot
│ │ ├── autostart.sh
│ │ ├── bd.hwh
│ │ ├── BOOT.BIN
│ │ ├── boot.scr
│ │ ├── config
│ │ ├── dpu.xclbin
│ │ ├── Image
│ │ ├── setup.sh
│ │ ├── system.dtb
│ │ ├── vcu
│ │ └── vitis
│ ├── root
│ │ └── rootfs.ext4
│ └── sd_card.img
├── pl
│ ├── constrs
│ │ ├── roi_async.xdc
│ │ └── vcu_roi.xdc
│ ├── designs
│ │ └── zcu106_ROI_MIPI
│ ├── prebuild
│ │ └── zcu106_ROI_MIPI_wrapper.xsa
│ ├── README.md
│ └── srcs
│ ├── hdl
│ └── ip
└── README.txt
The below snippet shows the directory structure of the various configuration files that serve as inputs to vcu_gst_app for running the display and streaming use cases. All these configuration files are placed in the $TRD_HOME/image/boot/config directory.
└── config
├── 1080p30
│ ├── Display
│ ├── Stream-in
│ └── Stream-out
├── 4kp30
│ ├── Display
│ ├── Stream-in
│ └── Stream-out
└── input.cfg
1.6.1 Preparing the SD card
There are three ways to prepare the SD card for booting. Each method is detailed below.
Using ready-to-test image
Flash the SD card with sd_card.img using Etcher or Win32DiskImager
Boot the board with the flashed SD card
Using Pre-built images
To create an SD card with two partitions, Boot (FAT32 + bootable) and Root (EXT4), refer to this link.
Copy the boot content from rdf0428-zcu106-vcu-single-sensor-roi-2020-1/image/boot to the Boot partition of the SD card
Extract rootfs.ext4 from rdf0428-zcu106-vcu-single-sensor-roi-2020-1/image/root to the Root partition of the SD card
Boot the board with the flashed SD card
Use the Output of the Build Flow
To create an SD card with two partitions, Boot (FAT32 + bootable) and Root (EXT4), refer to this link.
For the build flow, refer to these steps; copy the generated DPU build images (bd.hwh, BOOT.BIN, boot.scr, dpu.xclbin, Image, system.dtb) into the Boot partition of the SD card and extract the generated rootfs.ext4 into the Root partition of the SD card
Copy the mentioned boot content (config, vitis, autostart.sh, setup.sh) from the rdf0428-zcu106-vcu-single-sensor-roi-2020-1/image/boot directory to the Boot partition of the SD card (a shell sketch of the partition copy steps follows below)
Boot the board with the flashed SD card
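A hedged shell sketch of the partition copy steps (applicable to both the pre-built and build-flow methods), assuming the SD card enumerates on the host as /dev/sdX with the Boot partition at /dev/sdX1 and the Root partition at /dev/sdX2; adjust the device node and source paths for your setup.
# Mount points (assumed); run from the extracted package directory
$ sudo mkdir -p /mnt/sd-boot /mnt/sd-root /mnt/rootfs
$ sudo mount /dev/sdX1 /mnt/sd-boot
$ sudo mount /dev/sdX2 /mnt/sd-root
# Boot partition (FAT32): copy the boot collateral
$ sudo cp -r rdf0428-zcu106-vcu-single-sensor-roi-2020-1/image/boot/* /mnt/sd-boot/
# Root partition (EXT4): extract the contents of rootfs.ext4
$ sudo mount -o loop rdf0428-zcu106-vcu-single-sensor-roi-2020-1/image/root/rootfs.ext4 /mnt/rootfs
$ sudo cp -a /mnt/rootfs/. /mnt/sd-root/
$ sync
$ sudo umount /mnt/rootfs /mnt/sd-root /mnt/sd-boot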
1.6.2 Using GStreamer Application (vcu_gst_app)
The vcu_gst_app is a multi-threaded command-line Linux application. It requires an input configuration file (input.cfg) provided as plain text.
After the board boots, the display turns to a blue screen, which means it is ready for the test. Otherwise, before executing vcu_gst_app, manually run the modetest command to set the CRTC configuration.
$ modetest -D a00c0000.v_mix -s 39:3840x2160-30@AR24 &
Display: Capture → (ROI) → Encode → Decode → Display
4Kp30 HEVC Display Pipeline execution using vcu_gst_app:
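A hedged example invocation, assuming the Boot partition is mounted at /media/card on the target; the configuration file name is a placeholder for whichever .cfg file is present under config/4kp30/Display.
$ vcu_gst_app /media/card/config/4kp30/Display/<display_config>.cfg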
Stream-out: (Server)
Set the IP address for the server:
4Kp30 HEVC Stream-out Pipeline execution using vcu_gst_app:
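A hedged example for the server board; the IP address and configuration file name are placeholders.
# Assign an address to the board that streams out (placeholder IP)
$ ifconfig eth0 192.168.25.89 netmask 255.255.255.0 up
$ vcu_gst_app /media/card/config/4kp30/Stream-out/<stream_out_config>.cfg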
Stream-in: (Client)
Set the IP address for the client:
4Kp30 HEVC Stream-in Pipeline execution using vcu_gst_app:
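A hedged example for the client board; the IP address and configuration file name are placeholders, and the stream-in .cfg must point at the server's address.
# Assign an address to the board that receives and displays (placeholder IP)
$ ifconfig eth0 192.168.25.88 netmask 255.255.255.0 up
$ vcu_gst_app /media/card/config/4kp30/Stream-in/<stream_in_config>.cfg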
1.7 Build Flow
Refer to the below link to download the VCU Single Sensor ROI package.
Unzip the released package.
The following tutorials assume that the $TRD_HOME environment variable is set as given below.
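For example (the extraction path is an assumption; point it at wherever the package was unzipped):
$ export TRD_HOME=</path/to>/rdf0428-zcu106-vcu-single-sensor-roi-2020-1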
1.7.1 HW build Flow
This section explains the steps to build the hardware platform and generate XSA using the Vivado tool.
Refer to the Vivado Design Suite User Guide: Using the Vivado IDE, UG893, for setting up the Vivado environment.
Refer to the vivado-release-notes-install-license(UG973) for installation.
Make sure that the necessary IP licenses are in place
ON LINUX:
Open a Linux terminal
Change directory to the $TRD_HOME/pl folder
Source the Vivado settings.sh
Run the following command to create the Vivado IPI project, invoke the GUI, and generate the XSA required for the platform
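A hedged sketch of these steps; the Vivado install path is an assumption, and the location of project.tcl inside the package (under pl/designs/zcu106_ROI_MIPI in the directory listing above) should be confirmed against pl/README.md.
$ cd $TRD_HOME/pl
$ source <Vivado-install-path>/2020.1/settings64.sh
$ vivado -source designs/zcu106_ROI_MIPI/project.tcl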
The project.tcl script does the following
Creates project
Creates IPI Block design with platform interfaces
Runs Synthesis and Implementation
Builds bitstream with no accelerators
Exports the HW to XSA
zcu106_ROI_MIPI_wrapper.xsa is created and stored at $TRD_HOME/pl/build/zcu106_ROI_MIPI/zcu106_ROI_MIPI.xsa/
This XSA is used by Petalinux for platform creation and also by the Vitis Tool for DPU Kernel Integration.
After executing the script, the Vivado IPI block design comes up.
1.7.1.1 Platform Interfaces
The screenshots below show the platform interfaces that have been made available to the Vitis tool for linking in acceleration IP.
In the case of this reference design, the DPU Kernel will be inserted.
After the DPU Kernel is integrated dynamically with the platform using Vitis Flow, the connections are as shown below
The DPU Data ports are connected to the HP0 port (S_AXI_HP0_FPD) of the PS.
The DPU Instruction port is connected to the S_AXI_LPD port of PS through axi_interconnect_lpd
The DPU S_AXI_Control port is connected to the M_AXI_HPM0_LPD port of PS through interconnect_axilite
The DPU interrupt is connected to the axi interrupt controller dynamically
1.7.2 Petalinux build Flow
This tutorial shows how to build the Linux image and boot image using the PetaLinux build tool.
PetaLinux Installation: Refer to the PetaLinux Tools Documentation (UG1144) for installation.
Source the PetaLinux settings.sh
Create the PetaLinux project
Configure the PetaLinux project, e.g.:
using the prebuilt XSA
using the XSA generated by running the hardware project.tcl script
Build the PetaLinux project
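A hedged command sketch of these steps, using the BSP name from the package contents; the XSA directories are the prebuilt and Vivado-generated locations described in the HW build flow, and the install path is an assumption.
$ source <PetaLinux-install-path>/settings.sh
# Create the project from the released BSP
$ petalinux-create -t project -s $TRD_HOME/apu/vcu_petalinux_bsp/xilinx-vcu-roi-zcu106-v2020.1-final.bsp
$ cd xilinx-vcu-roi-zcu106-v2020.1-final
# Configure against the prebuilt XSA ...
$ petalinux-config --get-hw-description=$TRD_HOME/pl/prebuild/ --silentconfig
# ... or against the XSA generated by project.tcl
# petalinux-config --get-hw-description=$TRD_HOME/pl/build/zcu106_ROI_MIPI/zcu106_ROI_MIPI.xsa/ --silentconfig
$ petalinux-build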
1.7.3 Prepare Build Artifacts for Platform Creation
To prepare the artifacts required for ZCU106 Vitis platform creation, follow the below steps after the PetaLinux build:
Go to the PetaLinux build image directory
Create a linux.bif file, as below, in the images/linux directory. The linux.bif file is required to create the ZCU106 Vitis platform and carries the information related to the boot components. After the ZCU106 Vitis platform is created, this linux.bif file becomes part of the platform and is required to build the DPU and generate the final BOOT.BIN.
Copy the generated images into the boot and image directories using the below commands, and copy the created linux.bif into the boot directory.
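A hedged sketch of these steps. The linux.bif below is only a typical ZynqMP Vitis platform BIF with placeholder component names that the Vitis tool resolves from the boot directory; the exact BIF contents and the exact file set expected by this TRD should be taken from the package documentation.
$ cd <Path to Petalinux Project>/xilinx-vcu-roi-zcu106-v2020.1-final/images/linux
$ mkdir -p boot image
# Sketch of a typical ZynqMP platform linux.bif (placeholder names in <>)
$ cat > boot/linux.bif << 'EOF'
/* linux */
the_ROM_image:
{
  [fsbl_config] a53_x64
  [bootloader] <zynqmp_fsbl.elf>
  [pmufw_image] <pmufw.elf>
  [destination_device=pl] <bitstream>
  [destination_cpu=a53-0, exception_level=el-3, trustzone] <bl31.elf>
  [destination_cpu=a53-0, exception_level=el-2] <u-boot.elf>
}
EOF
# Boot components referenced by linux.bif (assumed file names from the PetaLinux build)
$ cp zynqmp_fsbl.elf pmufw.elf bl31.elf u-boot.elf boot/
# Linux image artifacts (boot.scr and system.dtb may also be needed depending on the packaging flow)
$ cp Image rootfs.ext4 image/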
1.7.4 ZCU106 Platform Creation
This section shows how to create a Vitis acceleration platform for the ZCU106 using the Vitis IDE.
Choose project workspace and click on Launch to begin
Launch the New Platform Project dialog box using following step:
Go to File > New > Platform Project
Provide a project name “zcu106_dpu” in the Platform project name field and click Next, as shown in the below figure
In the Platform Project dialog box, choose Create a new platform from hardware specification (XSA) and provide the XSA path
For the prebuilt XSA, use the $TRD_HOME/pl/prebuild/zcu106_ROI_MIPI_wrapper.xsa path
For the XSA generated by the Vivado build, use the $TRD_HOME/pl/build/zcu106_ROI_MIPI/zcu106_ROI_MIPI.xsa/zcu106_ROI_MIPI_wrapper.xsa path
Use the below settings under Software Specification:
Select linux as the operating system, psu_cortexa53 as the processor, and 64-bit architecture to create the platform
Uncheck the box for Generate boot components
Click Finish to create your platform project
Go to zcu106_dpu > linux on psu_cortexa53 and add the paths of the required files in Domain: linux_domain
Give the bif file, boot directory, image path, and rootfs path as shown in the below figure:
Use <Path to Petalinux Project>/xilinx-vcu-roi-zcu106-v2020.1-final/images/linux/boot/linux.bif for Bif File
Use <Path to Petalinux Project>/xilinx-vcu-roi-zcu106-v2020.1-final/images/linux/boot/ for Boot Components Directory
Use <Path to Petalinux Project>/xilinx-vcu-roi-zcu106-v2020.1-final/images/linux/image for Linux Image Directory
Use <Path to Petalinux Project>/xilinx-vcu-roi-zcu106-v2020.1-final/images/linux/image/rootfs.ext4 for Linux Rootfs
Right-click on the zcu106_dpu project in the Explorer tab and click on Build Project to generate the platform, as shown in the below figure
The Console tab shows the status of the platform generation.
As shown in the below image, zcu106_dpu.xpfm is created under zcu106_dpu > export > zcu106_dpu > zcu106_dpu.xpfm
1.7.5 DPU Build
Clone the Vitis-AI repository and apply the patch to add ZCU106 support to the Vitis DPU TRD.
The following tutorials assume that the $DPU_TRD_HOME environment variable is set as given below.
The following tutorials assume that the Vitis and XRT environment variables are set as given below.
Open a Linux terminal and set the shell to Bash.
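A hedged setup sketch; the Vitis-AI branch/tag, the DPU-TRD path inside the repository, and the tool install paths are assumptions to verify against the package README.
# Clone Vitis-AI and add ZCU106 support using the patch shipped with this package
$ git clone https://github.com/Xilinx/Vitis-AI.git
$ cd Vitis-AI
$ git apply $TRD_HOME/dpu/0001-Added-ZCU106-configuration-to-support-DPU-in-ZCU106.patch
# DPU TRD location inside the repository (assumed path)
$ export DPU_TRD_HOME=$(pwd)/DPU-TRD
# Vitis and XRT environments (assumed install paths)
$ source /opt/Xilinx/Vitis/2020.1/settings64.sh
$ source /opt/xilinx/xrt/setup.sh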
The default setting of the DPU is B4096 with RAM_USAGE_LOW, CHANNEL_AUGMENTATION_ENABLE, DWCV_ENABLE,