Zynq UltraScale+ MPSoC ZCU106 VCU Multi-Stream ROI TRD using Avnet Quad Sensor 2021.2
This page provides all the information related to VCU Multi-Stream ROI TRD using Avnet Quad Sensor design for ZCU106.
Table of Contents
1 Overview
The primary goal of this VCU Multi-Stream ROI TRD using Avnet Quad Sensor design is to demonstrate the use of the Deep Learning Processor Unit (DPU) block for extracting the Region of Interest (ROI) from input video frames, and to use this information to perform ROI-based encoding using the Video Codec Unit (VCU) encoder hard block present in Zynq UltraScale+ EV devices. Video is captured from the quad sensor connected through the MIPI CSI-2 Rx subsystem implemented in the PL. The Avnet Multi-Camera FMC module is used to capture four video streams through a MIPI CSI-2 interface.
The design serves as a platform to accelerate deep neural network inference algorithms using the DPU and to demonstrate the ROI feature of the VCU encoder. It uses a deep convolutional neural network (CNN) named Densebox, running on the DPU, to extract ROI information (faces, in this case). The design also uses the Vitis Video Analytics SDK (VVAS) framework to leverage its rich set of highly optimized, ready-to-use kernels and GStreamer plugins.
The design uses the Vivado IP Integrator flow for building the hardware platform and the Xilinx Yocto PetaLinux flow for the software design. It uses Xilinx IP and software drivers to demonstrate the capabilities of the different components.
The Vitis platform will be created from the Vivado/PetaLinux build artifacts, and then the Vitis acceleration flow will be used to insert the DPU into the platform to create the final bitstream.
The following figure shows the streaming pipeline use-case with enhanced ROI + face detection model on a ZCU106 board. For a detailed view of the VVAS block, please refer to Section 1.3.3 - GStreamer Pipeline Flow.
Streaming: Face detection with enhanced ROI on ZCU106.
This ZCU106 VCU Multi Stream ROI TRD design supports only the encoding feature of the VCU. For decoding on the ZCU106 Board-2 setup, use the VCU TRD Multi Stream Video Capture and Display design.
1.1 System Architecture
The following figure shows the block diagram of the ROI design.
1.2 Hardware Architecture
This section gives a detailed description of the blocks used in the hardware design. The functional block diagram of the design is shown in the below figure.
The design consists of five primary sections:
MIPI Capture Pipeline:
Captures video frame buffers from the Capture source (Avnet Quad Sensor) at a resolution of 1080p30
An AXI switch distributes the captured video to multiple streams in a round-robin fashion
Each stream writes the frame buffers into DDR Memory with the Frame Buffer Write IP
Multi-Scaler Block:
Reads the Video Buffers from DDR Memory for the first two sensors only
Scales the buffer down to 640x360 (the input size expected by the DPU)
Converts the format from NV12 to BGR
Writes the downscaled buffer to DDR Memory
DPU Block:
Reads the downscaled buffers from DDR Memory for the first two sensors only
Runs the Densebox algorithm to generate the ROI information for each frame buffer
Passes the ROI information to the VCU Encoder
VCU Encoder:
Reads the 4 x 1080p30 NV12 buffers from DDR Memory
Receives the ROI metadata from the DPU IP for the first two sensors only
Encodes the video buffers based on the ROI information for the first two sensors
Encodes the video buffers from the other two sensors without ROI information
Finally, it writes the encoded stream to DDR Memory
PS GEM:
Reads the Encoder stream from DDR Memory
Stream-out the encoded stream via Ethernet
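The DDR traffic generated by the capture stage can be estimated with simple arithmetic (this is a back-of-envelope figure of ours, not a number from the TRD), since NV12 stores 1.5 bytes per pixel:

```shell
# Rough DDR write-bandwidth estimate for the MIPI capture stage.
# NV12 = 8-bit luma plane + half-size interleaved chroma plane = 1.5 B/pixel.
WIDTH=1920; HEIGHT=1080; FPS=30; STREAMS=4
BYTES_PER_FRAME=$((WIDTH * HEIGHT * 3 / 2))        # 3110400 bytes (~3 MB)
BYTES_PER_SEC=$((BYTES_PER_FRAME * FPS * STREAMS)) # aggregate for all 4 sensors
echo "Capture write bandwidth: $((BYTES_PER_SEC / 1000000)) MB/s"
```

The scaler, DPU, and encoder add further read/write traffic on top of this ~373 MB/s of frame-buffer writes.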
This design supports the following video interfaces:
Interface | Details |
---|---|
Sources | Avnet Multi-Camera FMC module (quad sensor) through MIPI CSI-2 Rx |
Sinks | None locally; encoded streams are sent out over Ethernet (display uses a second board) |
VCU Codec | H.265 (HEVC) encode |
DPU | Densebox face detection on the first two streams |
Streaming Interfaces | 1G Ethernet PS GEM |
Video Format | NV12 |
Supported Resolution | 4 x 1080p30 |
1.3 VCU ROI Software
1.3.1 Vitis Video Analytics SDK (VVAS)
VVAS provides an easy-to-use, scalable framework on which users can build their solutions on Xilinx FPGAs. It provides infrastructure that covers a wide variety of applications in Embedded, Vision, Datacenter, Machine Learning, Automotive, and many other domains.
VVAS provides a set of generic framework plugins that abstract the complexities of writing a GStreamer plugin. These framework plugins interact with kernel libraries through a simple VVAS kernel interface. Using this interface, users can easily integrate and test their kernels in the GStreamer framework.
VVAS also provides a rich set of highly optimized, ready-to-use kernels and GStreamer plugins, such as the video encoder, video decoder, multiscaler, ML, and bounding-box cropping, so that users can create their applications in a very short span of time.
VVAS also provides the infrastructure needed to bridge the gap between Edge and Cloud solutions.
The VCU Multi Stream ROI TRD using Avnet Quad Sensor design uses the following VVAS plugins:
vvas_xfilter: The vvas_xfilter works with hard-kernel, soft-kernel, and software (user-space) acceleration software library types. It can operate in passthrough, in-place, or transform mode. In the Multi-Stream ROI TRD using Avnet Quad Sensor design, it is used in in-place mode with the acceleration software libraries below, so it can alter the input buffer.
vvas_xdpuinfer: The vvas_xdpuinfer is the acceleration software library that controls the DPU through the Vitis AI interface. The vvas_xdpuinfer does not modify the contents of the input buffer. The input buffer is passed to the Vitis AI model library that generates the inference data. This inference data is then mapped into the VVAS meta data structure and attached to the input buffer. The same input buffer is then pushed to the downstream plug-in.
vvas_xboundingbox: The vvas_xboundingbox acceleration software library is used to draw a bounding box and label information using the VVAS infrastructure plug-in vvas_xfilter. The vvas_xboundingbox interprets machine learning inference results from the vvas_xdpuinfer acceleration software library and uses an OpenCV library to draw the bounding box and label on the identified objects.
vvas_xmetaaffixer: Scales the incoming metadata for different resolutions: metadata received on the master sink pad is scaled in proportion to the resolution of each output slave pad.
vvas_xroigen: This plug-in generates the ROI metadata expected by the GStreamer OMX encoder plug-ins to encode raw frames with the desired quantization parameter (QP) values for the specified ROIs.
Refer to the VVAS document for more detail on VVAS.
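To illustrate the vvas_xmetaaffixer scaling described above (the ROI box values here are hypothetical), a face detected by the DPU on the 640x360 frame maps back to the 1920x1080 capture stream by a factor of 3 in each dimension:

```shell
# Hypothetical ROI box (x, y, w, h) reported by the DPU on the 640x360 frame.
X=100; Y=50; W=80; H=60
# Scale factors from the DPU resolution back to the 1080p capture resolution.
SX=$((1920 / 640))   # 3
SY=$((1080 / 360))   # 3
echo "ROI on the 1080p stream: x=$((X*SX)) y=$((Y*SY)) w=$((W*SX)) h=$((H*SY))"
```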
VVAS Top-level Block diagram
1.3.2 Deep Learning Processor Unit (DPU)
DPU is a programmable engine optimized for deep neural networks. It is a group of parameterizable IP cores pre-implemented on the hardware with no place and route required. The DPU is released with the Vitis AI specialized instruction set, allowing for efficient implementation of many deep learning networks.
Refer to DPU IP PG338 and UG1354 for more details about the DPU.
The following figure shows the DPU Top-Level Block Diagram.
DPU Top-level Block Diagram
PE - Processing Engine, DPU - Deep Learning Processor Unit, APU - Application Processing Unit
The DPU IP can be implemented in the programmable logic (PL) of the selected Zynq® UltraScale+™ MPSoC device with direct connections to the processing system (PS). The DPU requires instructions to implement a neural network and accessible memory locations for input images as well as temporary and output data. A program running on the Application Processing Unit (APU) is also required to service interrupts and coordinate data transfers.
The following figure shows the sequence of operations performed on the DPU device.
The following sequence of steps is performed to access and run face detection using the DPU device:
Initialize the DPU device
Instantiate a DPU task from the DPU kernel and allocate the corresponding DPU memory buffers
Set the input image for the created DPU task
Run the DPU task to find the faces in the input image
Uninitialize the DPU device
1.3.3 GStreamer Pipelines Flow
The GStreamer plugin demonstrates the DPU capabilities with the Xilinx VCU encoder’s ROI (Region of Interest) feature. The plugin will detect ROI (i.e. face co-ordinates) from input frames using the DPU IP and pass the detected ROI information to the Xilinx VCU encoder. The following figure shows the data flow for the GStreamer pipeline of the stream-out use case.
Block Diagram of Stream-out Pipeline
fd = v4l2 frame data, fd' = DPU compatible frame data
As shown in the above figure, the stream-out GStreamer pipeline performs the below list of operations:
The sensors capture the stream through the camera and pass it to the FMC module, which passes it to the MIPI CSI-2 Rx interface on the ZCU106 board.
The MIPI CSI-2 Rx interface captures the data in NV12 format and passes it to the tee element, which splits the input stream to the metaaffixer and preprocessor elements.
The preprocessor (the v4l2convert GStreamer plugin) scales the input frame resolution down to 640x360 and converts the data into BGR format, as required by the DPU.
The 640x360 BGR frame is provided to the DPU IP (via the vvas_xfilter plugin) as an input to find the ROI (i.e. face co-ordinates).
The extracted ROI information is passed to the VVAS metaaffixer plug-in along with the original capture stream (via the tee), which embeds the ROI metadata in the original stream.
The ROI generator creates ROI metadata, which is given to the bounding box library (via the vvas_xfilter plugin) as an input to draw bounding boxes according to the ROI (i.e. face co-ordinates).
The updated stream is passed to the VCU encoder, which encodes the ROI regions at higher quality than the non-ROI regions using the received ROI information.
Stream-out the encoded data using the RTP protocol.
Use the below Stream-in use case with another ZCU106 Board along with the VCU TRD Multi Stream Video Capture and Display design.
The following figure shows the data flow for the GStreamer pipeline of the stream-in use case.
Block Diagram of Stream-in Pipeline
fd = VCU decoded frame data
As shown in the above figure, the stream-in GStreamer pipeline performs the below list of operations:
Stream-in the encoded data using the RTP protocol
The Xilinx VCU decoder will decode the data
Display the decoded data on the HDMI-Tx display
1.4 Software Tools and System Requirements
Hardware
Required:
Two ZCU106 rev 1.0 evaluation boards with power cables
Monitor with HDMI input supporting 3840x2160 resolution or 1920x1080 resolution (for example, an LG 27UD88, Samsung LU28ES90DS/XL)
HDMI 2.0 certified cable
Class-10 SD card
Ethernet cable
Optional:
USB pen drive formatted with the FAT32 file system and hub
SATA drive formatted with the FAT32 file system, external power supply, and data cable
Software Tools
Required:
Linux host machine for all tool flow tutorials (see UG1144 for detailed OS requirements)
PetaLinux Tools version 2021.2 (see UG1144 for installation instructions)
Git, a distributed version control system
Serial terminal emulator, for example TeraTerm
Download, Installation, and Licensing of Vivado Design Suite 2021.2
The Vivado Design Suite User Guide explains how to download and install the Vivado® Design Suite tools, which include the Vivado Integrated Design Environment (IDE), High-Level Synthesis tool, and System Generator for DSP. This guide also provides information about licensing and administering evaluation and full copies of Xilinx design tools and intellectual property (IP) products. The Vivado Design Suite can be downloaded here.
LogiCORE IP Licensing
The following IP cores require a license to build the design.
Video Processing Subsystem (VPSS) - Included with Vivado - PG231
MIPI CSI Controller Subsystems (mipi_csi2_rx_subsystem) - Purchase license (Hardware evaluation available) - PG232
To obtain the LogiCORE IP license, please visit the respective IP product page and get the license.
(Xilinx Answer 44029) - Licensing - LogiCORE IP Core licensing questions
The below table provides performance information:
Resolution | FPS Achieved |
---|---|
4 x 1080p30 | 30 |
1.5 Board Setup
This section provides information on the ZCU106 board setup required for running the ROI design.
Connect the Micro USB cable into the ZCU106 board Micro USB port J83, and the other end into an open USB port on the host PC. This cable is used for UART-over-USB communication.
Insert the SD card, with the images copied onto it, into SD card slot J100. Please find here how to prepare the SD card for a specific design.
Set the SW6 switches as shown in the below Figure. This configures the boot settings to boot from SD.
Connect 12V Power to the ZCU106 6-Pin Molex connector.
For a USB storage device, connect the USB hub along with the mouse. (Optional)
For a SATA storage device, connect the SATA data cable to the SATA 3.0 port. (Optional)
For MIPI CSI-2, insert the Avnet Multi-Camera FMC module into the FMC0 connector and set VADJ to 1.2V
Important Note: VADJ on the FMC0 connector must be set to 1.2V. See FMC VADJ Voltage Settings for more information.
Set up a terminal session between a PC COM port and the serial port on the evaluation board (See the Determine which COM to use to access the USB serial port on the ZCU106 board for more details).
Copy the VCU Multi Stream ROI TRD images onto the SD card and insert the SD card into the board
The below images show how to connect interfaces on the ZCU106 board:
1.6 Run Flow
The VCU Multi Stream ROI TRD package is released with the source code, Vivado project, PetaLinux BSP, and SD card image that enable the user to run the demonstration.
It also includes the binaries necessary to configure and boot the ZCU106 board. Prior to running the steps mentioned in this wiki page, download the VCU Multi Stream ROI TRD package and extract its contents to the directory referred to as $TRD_HOME, which is the home directory.
See the below link to download the VCU Multi Stream ROI TRD package.
The TRD package contents are placed in the following directory structure.
rdf0617-zcu106-vcu-multi-stream-roi-2021-2/
├── apu
│ └── vcu_petalinux_bsp
│ └── xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final.bsp
├── dpu
│ ├── 0001-Added-ZCU106-configuration-to-support-DPU-in-ZCU106.patch
│ ├── dpu_conf.vh
│ └── vitis_platform
│ └── zcu106_dpu
├── image
│ ├── bootfiles
│ │ ├── bl31.elf
│ │ ├── linux.bif
│ │ ├── pmufw.elf
│ │ ├── system.bit
│ │ ├── system.dtb
│ │ ├── u-boot.elf
│ │ └── zynqmp_fsbl.elf
│ ├── license_zcu106_multistream_roi_trd_dpu_xclbin.txt
│ ├── README.txt
│ ├── sd_card
│ │ ├── boot
│ │ └── root
│ └── sd_card.img
├── pl
│ ├── constrs
│ │ ├── quad_mipi_rx_ROI.xdc
│ │ └── quad_sensor_async.xdc
│ ├── designs
│ │ └── zcu106_Quad_Sensor_ROI
│ ├── prebuild
│ │ └── zcu106_Quad_Sensor_ROI_wrapper.xsa
│ ├── README.md
│ └── srcs
│ └── hdl
├── README.txt
└── zcu106_vcu_multistream_roi_trd_sources_and_licenses.tar.gz
17 directories, 19 files
The below snippet shows the directory structure of the various binary files placed in the $TRD_HOME/image/sd_card/boot directory.
image
└── sd_card
    └── boot
        ├── autostart.sh
        ├── bd.hwh
        ├── BOOT.BIN
        ├── boot.scr
        ├── dpu.xclbin
        ├── Image
        ├── quad_sensor_isp_tuning.sh
        ├── quad_sensor_media_graph_setting.sh
        ├── setup.sh
        ├── system.dtb
        ├── vcu
        │   └── configure_qos.sh
        ├── vitis
        │   └── densebox_640_360-zcu102_zcu104_kv260-r2.0.0.tar.gz
        └── vvas
            └── json
                ├── kernel_ML.json
                └── kernel_swbbox.json
1.6.1 Preparing the SD card
There are three ways to prepare the SD card for booting. Each method is detailed below.
Using ready to test image
Flash the SD card with sd_card.img using Etcher or Win32DiskImager
Boot the board with the flashed SD card
sd_card.img is available at rdf0617-zcu106-vcu-multi-stream-roi-2021-2/image/sd_card.img
All of the required Vitis packages are already installed in the ready-to-test rdf0617-zcu106-vcu-multi-stream-roi-2021-2/image/sd_card.img
Using Pre-built images
Create an SD card with two partitions, Boot (FAT32, bootable) and Root (EXT4). Refer to this Link.
Copy the boot content from rdf0617-zcu106-vcu-multi-stream-roi-2021-2/image/sd_card/boot to the Boot partition of the SD card
Extract rootfs.ext4 from rdf0617-zcu106-vcu-multi-stream-roi-2021-2/image/sd_card/root to the Root partition of the SD card
Boot the board with the flashed SD card
Use the Output of the Build Flow
Create an SD card with two partitions, Boot (FAT32, bootable) and Root (EXT4). Refer to this Link.
For the Build Flow, refer to these steps. Copy the generated DPU build images (bd.hwh, BOOT.BIN, boot.scr, dpu.xclbin, Image, system.dtb) into the Boot partition of the SD card and extract the generated rootfs.ext4 into the Root partition of the SD card
Copy the boot content (vcu, vitis, vvas, autostart.sh, setup.sh) from the rdf0617-zcu106-vcu-multi-stream-roi-2021-2/image/sd_card/boot/ directory to the Boot partition of the SD card
Boot the board with the flashed SD card
All of the required Densebox models are already available in the rdf0617-zcu106-vcu-multi-stream-roi-2021-2/image/sd_card/boot/vitis directory and are installed automatically during first boot. Please wait until the target setup completes and the models are installed.
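The two-partition layout used by the pre-built and build-flow methods can be sketched as below. This is an illustrative sketch only: the device node (DEV) and the 1 GiB boot-partition size are assumptions, and the script deliberately defaults to a dry run that only prints the commands. Set DEV to your SD card's device node (check lsblk first) before running it for real.

```shell
#!/bin/sh
# Illustrative SD-card layout sketch (NOT a script shipped with the TRD).
# DEV and the 1 GiB boot-partition size are assumptions; leave DEV empty for
# a dry run that only prints what would be executed.
DEV="${DEV:-}"
run() {
    if [ -n "$DEV" ]; then sudo "$@"; else echo "+ $*"; fi
}
run parted -s "$DEV" mklabel msdos \
    mkpart primary fat32 1MiB 1GiB \
    mkpart primary ext4 1GiB 100% \
    set 1 boot on
run mkfs.vfat -F 32 -n BOOT "${DEV}1"   # Boot partition (FAT32, bootable)
run mkfs.ext4 -L ROOT "${DEV}2"         # Root partition (EXT4)
```

After partitioning, copy the boot content to the first partition and extract rootfs.ext4 to the second, as described in the steps above.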
1.6.2 GStreamer Pipelines using mediasrcbin plugin
This section covers the GStreamer pipelines using the mediasrcbin plugin for streaming ROI use-cases. The mediasrcbin plugin is a Xilinx-specific bin element on top of v4l2src; it parses and configures the media graph of a media device automatically.
For more information on JSON configurations used in the following pipelines, please refer to VVAS JSON object members
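As an illustration of the shape of these JSON files (the field values below are representative only, not copied from the shipped kernel_ML.json), a DPU inference configuration for vvas_xfilter typically names the acceleration software library and the Vitis AI model to run:

```json
{
  "xclbin-location": "/media/card/dpu.xclbin",
  "vvas-library-repo": "/usr/lib/",
  "element-mode": "inplace",
  "kernels": [
    {
      "library-name": "libvvas_xdpuinfer.so",
      "config": {
        "model-name": "densebox_640_360",
        "model-class": "FACEDETECT",
        "model-path": "/usr/share/vitis_ai_library/models/",
        "need_preprocess": false,
        "performance_test": false,
        "debug_level": 1
      }
    }
  ]
}
```

Consult the shipped kernel_ML.json and kernel_swbbox.json files for the exact fields used by this TRD.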
Stream-out ( Server ):

                         ┌─→ v4l2convert → vvas_xfilter (DPU) ─→┐
Capture (Sensor-1) → tee ┤                                      ├→ vvas_xmetaaffixer → vvas_xroigen → vvas_xfilter (Bounding-box) → Encode → Stream-out
                         └─────────────────────────────────────→┘

                         ┌─→ v4l2convert → vvas_xfilter (DPU) ─→┐
Capture (Sensor-2) → tee ┤                                      ├→ vvas_xmetaaffixer → vvas_xroigen → vvas_xfilter (Bounding-box) → Encode → Stream-out
                         └─────────────────────────────────────→┘

Capture (Sensor-3) → Encode → Stream-out
Capture (Sensor-4) → Encode → Stream-out
Set the IP address for the server:

ifconfig eth0 192.168.25.90

Run the following gst-launch-1.0 command for the stream-out pipeline.

Stream-out Pipeline:
gst-launch-1.0 mediasrcbin media-device=/dev/media0 v4l2src0::io-mode=4 v4l2src1::io-mode=4 v4l2src2::io-mode=4 v4l2src3::io-mode=4 name=src src. ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! tee name=t0 t0. ! queue ! v4l2convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem0.sink_master vvas_xmetaaffixer name=scalem0 t0. ! queue ! scalem0.sink_slave_0 scalem0.src_slave_0 ! queue min-threshold-buffers=2 max-size-bytes=0 max-size-buffers=4 max-size-time=0 ! vvas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_swbbox.json" ! queue ! omxh265enc qp-mode=roi gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux0 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5004 src. ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! tee name=t1 t1. ! queue ! v4l2video5convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem1.sink_master vvas_xmetaaffixer name=scalem1 t1. ! queue ! scalem1.sink_slave_0 scalem1.src_slave_0 ! queue min-threshold-buffers=2 max-size-bytes=0 max-size-buffers=4 max-size-time=0 ! vvas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_swbbox.json" ! queue ! omxh265enc qp-mode=roi gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! 
video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux1 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5008 src. ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=15000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux2 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5012 src. ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=15000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux3 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5016
Here 192.168.25.89 is the host/client IP address, and 5004, 5008, 5012, and 5016 are the port numbers.
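As a sanity check on the 1G PS GEM link (this arithmetic is ours, not a TRD figure), the aggregate encoder bitrate across the four UDP streams follows directly from the target-bitrate values in the pipeline above: the two ROI streams are encoded at 1500 kb/s and the two plain streams at 15000 kb/s:

```shell
# Aggregate encoder bitrate across the four streams, in kb/s
# (from the target-bitrate values of the omxh265enc instances).
ROI_STREAMS=2;   ROI_KBPS=1500
PLAIN_STREAMS=2; PLAIN_KBPS=15000
TOTAL_KBPS=$((ROI_STREAMS * ROI_KBPS + PLAIN_STREAMS * PLAIN_KBPS))
echo "Total ~${TOTAL_KBPS} kb/s (~$((TOTAL_KBPS / 1000)) Mb/s) before TS/RTP/UDP overhead"
```

At roughly 33 Mb/s of payload, the four streams sit comfortably within the 1G Ethernet capacity.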
Use the below Stream-in use case with another ZCU106 board along with the VCU TRD Multi Stream Video Capture and Display design.
Stream-in ( Client ): 4 X (Stream-in→ Decode → Display)
Set the IP address for the client:
ifconfig eth0 192.168.25.89
Run the following gst-launch-1.0 command for the stream-in pipeline, where 5004, 5008, 5012, and 5016 are the port numbers.

Stream-in Pipeline:
gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=34 render-rectangle=<0,0,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true udpsrc port=5008 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=35 render-rectangle=<1920,0,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true udpsrc port=5012 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=36 render-rectangle=<0,1080,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true udpsrc port=5016 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! 
fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=37 render-rectangle=<1920,1080,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true -v
1.6.3 GStreamer Pipelines using v4l2src plugin
This section covers GStreamer pipelines using the v4l2src plugin for streaming ROI use-cases.
Make sure that the MIPI CSI-2 Rx media pipeline is configured for 1080p resolution and that the source/sink pads have the same color format. Run the below script to set the resolution and format of the MIPI CSI-2 Rx media pipeline nodes, where "media0" indicates the media node for the MIPI CSI-2 Rx input source.
$ sh /media/card/quad_sensor_media_graph_setting.sh
For more information on JSON configurations used in the following pipelines, please refer to VVAS JSON object members
Stream-out ( Server ):

                         ┌─→ v4l2convert → vvas_xfilter (DPU) ─→┐
Capture (Sensor-1) → tee ┤                                      ├→ vvas_xmetaaffixer → vvas_xroigen → vvas_xfilter (Bounding-box) → Encode → Stream-out
                         └─────────────────────────────────────→┘

                         ┌─→ v4l2convert → vvas_xfilter (DPU) ─→┐
Capture (Sensor-2) → tee ┤                                      ├→ vvas_xmetaaffixer → vvas_xroigen → vvas_xfilter (Bounding-box) → Encode → Stream-out
                         └─────────────────────────────────────→┘

Capture (Sensor-3) → Encode → Stream-out
Capture (Sensor-4) → Encode → Stream-out
Set the IP address for the server:
ifconfig eth0 192.168.25.90
Run the following gst-launch-1.0 command for the stream-out pipeline.

Stream-out Pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! tee name=t0 t0. ! queue ! v4l2convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem0.sink_master vvas_xmetaaffixer name=scalem0 t0. ! queue ! scalem0.sink_slave_0 scalem0.src_slave_0 ! queue min-threshold-buffers=2 max-size-bytes=0 max-size-buffers=4 max-size-time=0 ! vvas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_swbbox.json" ! queue ! omxh265enc qp-mode=roi gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux0 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5004 v4l2src device=/dev/video1 io-mode=4 ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! tee name=t1 t1. ! queue ! v4l2video5convert capture-io-mode=4 output-io-mode=5 ! video/x-raw, width=640, height=360, format=BGR ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_ML.json" ! scalem1.sink_master vvas_xmetaaffixer name=scalem1 t1. ! queue ! scalem1.sink_slave_0 scalem1.src_slave_0 ! queue min-threshold-buffers=2 max-size-bytes=0 max-size-buffers=4 max-size-time=0 ! vvas_xroigen roi-type=2 roi-qp-delta=-21 roi-max-num=50 ! queue ! vvas_xfilter kernels-config="/media/card/vvas/json/kernel_swbbox.json" ! queue ! omxh265enc qp-mode=roi gop-mode=basic gop-length=60 b-frames=0 target-bitrate=1500 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! 
mpegtsmux alignment=7 name=mux1 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5008 v4l2src device=/dev/video2 io-mode=4 ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=15000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux2 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5012 v4l2src device=/dev/video3 io-mode=4 ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=15000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! h265parse ! queue ! mpegtsmux alignment=7 name=mux3 ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5016
Here 192.168.25.89 is the host/client IP address, and 5004, 5008, 5012, and 5016 are the port numbers.
Use the below Stream-in use case with another ZCU106 board along with the VCU TRD Multi Stream Video Capture and Display design.
Stream-in ( Client ): 4 X (Stream-in→ Decode → Display)
Set the IP address for the client:
ifconfig eth0 192.168.25.89
Run the following gst-launch-1.0 command for the stream-in pipeline, where 5004, 5008, 5012, and 5016 are the port numbers.

Stream-in Pipeline:
gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=34 render-rectangle=<0,0,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true udpsrc port=5008 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=35 render-rectangle=<1920,0,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true udpsrc port=5012 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=36 render-rectangle=<0,1080,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true udpsrc port=5016 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! 
fpsdisplaysink text-overlay=false video-sink="kmssink bus-id="a0070000.v_mix" plane-id=37 render-rectangle=<1920,1080,1920,1080> hold-extra-sample=1 show-preroll-frame=false sync=true" sync=true -v
1.7 Build Flow
Refer to the below link to download the VCU Multi Stream ROI TRD package.
Unzip the released package.
unzip </path/to/downloaded/zipfile>/rdf0617-zcu106-vcu-multi-stream-roi-2021-2.zip
The following tutorials assume that the $TRD_HOME environment variable is set as shown below.
export TRD_HOME=</path/to/downloaded/zipfile>/rdf0617-zcu106-vcu-multi-stream-roi-2021-2
1.7.1 Hardware Build Flow
This section explains the steps to build the hardware platform and generate XSA using the Vivado tool.
Refer to the Vivado Design Suite User Guide: Using the Vivado IDE, UG893, for setting up the Vivado environment.
Refer to the Vivado Design Suite User Guide: Release Notes, Installation, and Licensing (UG973) for installation.
Make sure that the necessary IP licenses are in place.
On Linux:
Open a Linux terminal
Change directory to the $TRD_HOME/pl folder
Source the Vivado settings64.sh:

source <path/to/Vivado-installer>/tool/Vivado/2021.2/settings64.sh
Run the following command to create the Vivado IPI project, invoke the GUI, and generate the XSA required for the platform:
vivado -source ./designs/zcu106_Quad_Sensor_ROI/project.tcl
The project.tcl script does the following:
Creates the project in the ../pl/build/zcu106_Quad_Sensor_ROI directory
Creates the IPI Block design with platform interfaces
Runs Synthesis and Implementation
Builds bitstream with no accelerators
Exports the HW to XSA (zcu106_Quad_Sensor_ROI_wrapper.xsa)
The generated zcu106_Quad_Sensor_ROI_wrapper.xsa is stored inside the $TRD_HOME/pl/build/zcu106_Quad_Sensor_ROI/zcu106_Quad_Sensor_ROI.xsa/ directory.
This XSA is used by PetaLinux for platform creation and also by the Vitis Tool for DPU Kernel Integration.
After executing the script, the Vivado IPI block design comes up as shown in the below figure.
The Platform Setup tab has the settings and AXI Ports, as shown in the below image
1.7.1.1 Platform Interfaces
The screenshots below show the platform interfaces that have been made available to the Vitis tool for linking the acceleration IP dynamically.
In the case of this reference design, the DPU Kernel will be inserted.
After the DPU Kernel is integrated dynamically with the platform using Vitis Flow, the connections are as shown below:
The DPU Data ports are connected to the HP0 Port(S_AXI_HP0_FPD) of the PS
The DPU Instruction port is connected to the S_AXI_HPC1 port of the PS
The DPU S_AXI_Control port is connected to the M_AXI_HPM0_LPD port of the PS through interconnect_hpm0_lpd
The DPU interrupt is connected to the AXI interrupt controller dynamically
1.7.2 PetaLinux build Flow
This tutorial shows how to build the Linux image and boot image using the PetaLinux build tool.
PetaLinux Installation: Refer to the PetaLinux Tools Documentation (UG1144) for installation.
Kernel patches documentation: Refer to this article for the kernel patches required for the ZCU106 VCU Multi-Stream ROI TRD using Avnet Quad Sensor BSP.
It is recommended to follow the build steps in sequence.
Source the PetaLinux settings.sh:
source <path/to/petalinux-installer>/tool/petalinux-v2021.2-final/settings.sh
Create the PetaLinux project
cd $TRD_HOME/apu/vcu_petalinux_bsp
petalinux-create -t project -s xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final.bsp
Configure the PetaLinux project
cd xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final
petalinux-config --silentconfig --get-hw-description=<Path to directory of XSA>
For example:
Using the prebuilt XSA:
petalinux-config --silentconfig --get-hw-description=$TRD_HOME/pl/prebuild/
Using the XSA generated by running the hardware project.tcl script:
petalinux-config --silentconfig --get-hw-description=$TRD_HOME/pl/build/zcu106_Quad_Sensor_ROI/zcu106_Quad_Sensor_ROI.xsa/
zcu106_Quad_Sensor_ROI_wrapper.xsa is available inside the zcu106_Quad_Sensor_ROI.xsa directory.
Build the PetaLinux project
petalinux-build
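After petalinux-build completes, the platform-creation steps that follow expect a specific set of artifacts in images/linux. A hedged helper (not part of the BSP; the function name is illustrative, and the file list is taken from the linux.bif and copy steps in this document) can confirm they all exist:

```shell
#!/bin/sh
# Illustrative post-build check: verify images/linux contains every artifact
# referenced by linux.bif and the later copy steps.
check_plnx_artifacts() {
    d="$1"
    ok=0
    for f in zynqmp_fsbl.elf pmufw.elf bl31.elf system.dtb u-boot.elf \
             boot.scr rootfs.ext4; do
        if [ ! -e "$d/$f" ]; then
            echo "missing: $f"
            ok=1
        fi
    done
    if [ "$ok" -eq 0 ]; then
        echo "all PetaLinux artifacts present"
    fi
    return "$ok"
}
```

Usage: `check_plnx_artifacts "$TRD_HOME/apu/vcu_petalinux_bsp/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux"`.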
1.7.3 Prepare Build Artifacts for Platform Creation
To prepare the artifacts required for ZCU106 Vitis platform creation, follow the below steps after the PetaLinux build:
Go to the PetaLinux build image directory
cd $TRD_HOME/apu/vcu_petalinux_bsp/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux
Create a linux.bif file in the images/linux directory as shown below. The linux.bif file is required to create the ZCU106 Vitis platform and holds the information related to the boot components. After the ZCU106 Vitis platform is created, this linux.bif file becomes part of the platform and is used to build the DPU and generate the final BOOT.BIN.
/* linux */
the_ROM_image:
{
  [bootloader, destination_cpu = a53-0] <zynqmp_fsbl.elf>
  [pmufw_image] <pmufw.elf>
  [destination_device=pl] <bitstream>
  [destination_cpu=a53-0, exception_level=el-3, trustzone] <bl31.elf>
  [destination_cpu=a53-0, load=0x00100000] <system.dtb>
  [destination_cpu=a53-0, exception_level=el-2] <u-boot.elf>
}
Copy the generated images into the boot and image directories using the below commands; the created linux.bif is copied into the boot directory.
mkdir boot image
cp linux.bif bl31.elf pmufw.elf system.dtb u-boot.elf zynqmp_fsbl.elf boot/.
cp boot.scr system.dtb image/.
1.7.4 ZCU106 Platform Creation
This section shows how to create a Vitis acceleration platform for the ZCU106 using the Vitis IDE.
A ready-to-use Vitis platform is available at $TRD_HOME/dpu/vitis_platform/zcu106_dpu.
It is recommended to follow the build steps in sequence.
source <vitis install path>/Vitis/2021.2/settings64.sh
vitis &
Choose the project workspace and click Launch to begin. The workspace can be created in any directory, for example $TRD_HOME/vitis_workspace.
Launch the New Platform Project dialog box using the following step:
Go to File > New > Platform Project
Provide a project name "zcu106_dpu" in the Platform project name field and click Next as shown in the below figure:
In the Platform Project dialog box, choose Create a new platform from hardware specification (XSA) and provide the XSA path
For the prebuilt XSA, use the $TRD_HOME/pl/prebuild/ path.
For an XSA generated by the Vivado build, use the $TRD_HOME/pl/build/zcu106_Quad_Sensor_ROI/zcu106_Quad_Sensor_ROI.xsa/ path.
Use the below settings under Software Specification:
Select linux as the operating system, psu_cortexa53 as the processor, and 64-bit as the architecture to create the platform.
Uncheck the Generate boot components box.
Click Finish to create your platform project.
Go to zcu106_dpu > psu_cortexa53 > linux on psu_cortexa53 and add the path of the required files in Domain: linux_domain
Provide the bif file, boot directory, and image path as shown in the below figure:
Use <Path to Petalinux Project>/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux/boot/linux.bif for the Bif File.
Use <Path to Petalinux Project>/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux/boot/ for the Boot Components Directory.
Use <Path to Petalinux Project>/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux/image for the FAT32 Partition Directory.
Right click on the zcu106_dpu project in the Explorer tab and click on Build Project to generate the platform as shown in the below figure.
The Console tab shows the status of the platform generation.
As shown in the below image, zcu106_dpu.xpfm is created under zcu106_dpu > export > zcu106_dpu > zcu106_dpu.xpfm.
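The GUI steps above can also be scripted with XSCT. The sketch below (an assumption, not part of the TRD: the option names approximate the 2021.2 XSCT platform/domain commands and the paths are hypothetical; verify against your installed tool help) writes a platform.tcl that mirrors the dialog settings, to be run with `xsct platform.tcl`:

```shell
#!/bin/sh
# Illustrative generator: emit an XSCT script mirroring the GUI platform
# settings (linux OS, psu_cortexa53, no generated boot components).
# XSCT command options are approximations; paths are placeholders.
cat > platform.tcl <<'EOF'
platform create -name zcu106_dpu \
    -hw ./zcu106_Quad_Sensor_ROI_wrapper.xsa -no-boot-bsp -out .
domain create -name linux_domain -os linux -proc psu_cortexa53
domain config -bif ./images/linux/boot/linux.bif
domain config -boot ./images/linux/boot
domain config -image ./images/linux/image
platform generate
EOF
echo "wrote platform.tcl"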
1.7.5 DPU Build
It is recommended to follow the build steps in sequence.
Make sure the parted utility is installed before starting the DPU build.
Clone the Vitis-AI repository and apply the patch to add support for the ZCU106 in the Vitis DPU TRD.
git clone https://github.com/Xilinx/Vitis-AI.git
cd Vitis-AI
git checkout tags/v2.0
git am $TRD_HOME/dpu/0001-Added-ZCU106-configuration-to-support-DPU-in-ZCU106.patch
The following tutorials assume that the DPU_TRD_HOME environment variable is set as given below.
export DPU_TRD_HOME=<Vitis AI path>/dsa/DPU-TRD
The following tutorials assume that the Vitis and XRT environment variables are set as given below.
Open a Linux terminal. Set Linux to Bash mode.
source <vitis install path>/Vitis/2021.2/settings64.sh
source <xrt install path>/packages/setenv.sh
The default setting of the DPU is B4096 with RAM_USAGE_LOW, CHANNEL_AUGMENTATION_ENABLE, DWCV_ENABLE, POOL_AVG_ENABLE, RELU_LEAKYRELU_RELU6. Read the $DPU_TRD_HOME/prj/Vitis/dpu_conf.vh file for the details of the DPU. All of the configurations are described in PG338. Modify the $DPU_TRD_HOME/prj/Vitis/dpu_conf.vh file to change the default settings.
Copy the dpu_conf.vh file to enable the URAM for the ZCU106 DPU build; the DPU will then use URAM in place of BRAM.
cp $TRD_HOME/dpu/dpu_conf.vh $DPU_TRD_HOME/prj/Vitis/dpu_conf.vh
Build the hardware design:
cd $DPU_TRD_HOME/prj/Vitis
export EDGE_COMMON_SW=<Path to Petalinux Project>/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux/
export SDX_PLATFORM=<Path to Vitis Workspace>/zcu106_dpu/export/zcu106_dpu/zcu106_dpu.xpfm
make KERNEL=DPU DEVICE=zcu106_dpu
The generated SD card files are in $DPU_TRD_HOME/prj/Vitis/binary_container_1/sd_card (SD card format).
Copy the generated DPU build images from the $DPU_TRD_HOME directory to the $TRD_HOME directory:
cd $DPU_TRD_HOME/prj/Vitis/binary_container_1/sd_card
cp BOOT.BIN boot.scr dpu.xclbin Image system.dtb $TRD_HOME/image/sd_card/boot
cp $TRD_HOME/apu/vcu_petalinux_bsp/xilinx-vcu-multi-stream-roi-zcu106-v2021.2-final/images/linux/rootfs.ext4 $TRD_HOME/image/sd_card/root
For more details, please refer to the DPU Build Flow.
For more details on Vitis AI, refer to the Vitis AI User Guide (UG1414).
2 Other Information
2.1 Known Issues
For PetaLinux related known issues please see PetaLinux 2021.2 - Product Update Release Notes and Known Issues.
For VCU related known issues please see (Xilinx Answer 66763) LogiCORE H.264/H.265 Video Codec Unit (VCU) - Release Notes and Known Issues and Xilinx Zynq UltraScale+ MPSoC Video Codec Unit.
2.2 Limitations
For PetaLinux related limitations please see PetaLinux 2021.2 - Product Update Release Notes and Known Issues.
For VCU related limitations please refer to (Xilinx Answer 66763) LogiCORE H.264/H.265 Video Codec Unit (VCU) - Release Notes and Known Issues, Xilinx Zynq UltraScale+ MPSoC Video Codec Unit, and the H.264/H.265 Video Codec Unit Product Guide (PG252).
3 Appendix A - MIPI CSI-2 Rx Link-up
This section covers the configuration of the CSI-2 Rx using the media-ctl utility and demonstrates CSI-2 Rx link-up behavior.
Run the below command to print the media device topology, where "media0" represents the media node:
media-ctl -p -d /dev/media0
When the MIPI CSI-2 Rx source is connected, it looks like the following:
root@vcu_roi_trd-2021_2:/media/card# media-ctl -p -d /dev/media0
Media controller API version 5.10.0
Media device information
------------------------
driver xilinx-video
model Xilinx Video Composite Device
serial
bus info
hw revision 0x0
driver version 5.10.0
Device topology
- entity 1: vcap_csi_p0_scalar_0 output 0 (1 pad, 1 link)
type Node subtype V4L flags 0
device node name /dev/video0
pad0: Sink
<- "a0040000.v_proc_ss":1 [ENABLED]
- entity 5: vcap_csi_p0_scalar_0 output 1 (1 pad, 1 link)
type Node subtype V4L flags 0
device node name /dev/video1
pad0: Sink
<- "a0080000.v_proc_ss":1 [ENABLED]
- entity 9: vcap_csi_p0_scalar_0 output 2 (1 pad, 1 link)
type Node subtype V4L flags 0
device node name /dev/video2
pad0: Sink
<- "a00c0000.v_proc_ss":1 [ENABLED]
- entity 13: vcap_csi_p0_scalar_0 output 3 (1 pad, 1 link)
type Node subtype V4L flags 0
device node name /dev/video3
pad0: Sink
<- "a0140000.v_proc_ss":1 [ENABLED]
- entity 17: amba_pl@0:axis_switch_csi_axis_ (5 pads, 5 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev0
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none colorspace:srgb]
<- "a0000000.mipi_csi2_rx_subsystem":1 [ENABLED]
pad1: Source
[fmt:SGRBG8_1X8/1920x1080 field:none colorspace:srgb]
-> "a0190000.v_demosaic":0 [ENABLED]
pad2: Source
[fmt:SGRBG8_1X8/1920x1080 field:none colorspace:srgb]
-> "a01c0000.v_demosaic":0 [ENABLED]
pad3: Source
[fmt:SGRBG8_1X8/1920x1080 field:none colorspace:srgb]
-> "a01f0000.v_demosaic":0 [ENABLED]
pad4: Source
[fmt:SGRBG8_1X8/1920x1080 field:none colorspace:srgb]
-> "a0220000.v_demosaic":0 [ENABLED]
- entity 23: a0000000.mipi_csi2_rx_subsystem (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev1
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "max9286 23-0048":4 [ENABLED]
pad1: Source
[fmt:SGRBG8_1X8/1920x1080 field:none]
-> "amba_pl@0:axis_switch_csi_axis_":0 [ENABLED]
- entity 26: a0190000.v_demosaic (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev2
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "amba_pl@0:axis_switch_csi_axis_":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0180000.v_gamma_lut":0 [ENABLED]
- entity 29: a01c0000.v_demosaic (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev3
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "amba_pl@0:axis_switch_csi_axis_":2 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a01b0000.v_gamma_lut":0 [ENABLED]
- entity 32: a01f0000.v_demosaic (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev4
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "amba_pl@0:axis_switch_csi_axis_":3 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a01e0000.v_gamma_lut":0 [ENABLED]
- entity 35: a0220000.v_demosaic (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev5
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "amba_pl@0:axis_switch_csi_axis_":4 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0210000.v_gamma_lut":0 [ENABLED]
- entity 38: a0180000.v_gamma_lut (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev6
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0190000.v_demosaic":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0010000.v_proc_ss":0 [ENABLED]
- entity 41: a01b0000.v_gamma_lut (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev7
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a01c0000.v_demosaic":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0020000.v_proc_ss":0 [ENABLED]
- entity 44: a01e0000.v_gamma_lut (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev8
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a01f0000.v_demosaic":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0030000.v_proc_ss":0 [ENABLED]
- entity 47: a0210000.v_gamma_lut (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev9
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0220000.v_demosaic":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0100000.v_proc_ss":0 [ENABLED]
- entity 50: a0010000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev10
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0180000.v_gamma_lut":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0040000.v_proc_ss":0 [ENABLED]
- entity 53: a0020000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev11
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a01b0000.v_gamma_lut":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0080000.v_proc_ss":0 [ENABLED]
- entity 56: a0030000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev12
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a01e0000.v_gamma_lut":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a00c0000.v_proc_ss":0 [ENABLED]
- entity 59: a0100000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev13
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0210000.v_gamma_lut":1 [ENABLED]
pad1: Source
[fmt:RBG888_1X24/1920x1080 field:none]
-> "a0140000.v_proc_ss":0 [ENABLED]
- entity 62: a0040000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev14
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0010000.v_proc_ss":1 [ENABLED]
pad1: Source
[fmt:VYYUYY8_1X24/1920x1080 field:none]
-> "vcap_csi_p0_scalar_0 output 0":0 [ENABLED]
- entity 65: a0080000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev15
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0020000.v_proc_ss":1 [ENABLED]
pad1: Source
[fmt:VYYUYY8_1X24/1920x1080 field:none]
-> "vcap_csi_p0_scalar_0 output 1":0 [ENABLED]
- entity 68: a00c0000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev16
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0030000.v_proc_ss":1 [ENABLED]
pad1: Source
[fmt:VYYUYY8_1X24/1920x1080 field:none]
-> "vcap_csi_p0_scalar_0 output 2":0 [ENABLED]
- entity 71: a0140000.v_proc_ss (2 pads, 2 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev17
pad0: Sink
[fmt:RBG888_1X24/1920x1080 field:none]
<- "a0100000.v_proc_ss":1 [ENABLED]
pad1: Source
[fmt:VYYUYY8_1X24/1920x1080 field:none]
-> "vcap_csi_p0_scalar_0 output 3":0 [ENABLED]
- entity 74: max9286 23-0048 (5 pads, 5 links)
type V4L2 subdev subtype Unknown flags 0
device node name /dev/v4l-subdev18
pad0: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "mars 32-0041":0 [ENABLED,IMMUTABLE]
pad1: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "mars 33-0042":0 [ENABLED,IMMUTABLE]
pad2: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "mars 34-0043":0 [ENABLED,IMMUTABLE]
pad3: Sink
[fmt:SGRBG8_1X8/1920x1080 field:none]
<- "mars 35-0044":0 [ENABLED,IMMUTABLE]
pad4: Source
[fmt:SGRBG8_1X8/1920x1080 field:none]
-> "a0000000.mipi_csi2_rx_subsystem":0 [ENABLED]
- entity 80: mars 32-0041 (1 pad, 1 link)
type V4L2 subdev subtype Unknown flags 20001
device node name /dev/v4l-subdev19
pad0: Source
[fmt:SGRBG8_1X8/1920x1080 field:none]
-> "max9286 23-0048":0 [ENABLED,IMMUTABLE]
- entity 84: mars 33-0042 (1 pad, 1 link)
type V4L2 subdev subtype Unknown flags 20001
device node name /dev/v4l-subdev20
pad0: Source
[fmt:SGRBG8_1X8/1920x1080 field:none]
-> "max9286 23-0048":1 [ENABLED,IMMUTABLE]
- entity 88: mars 34-0043 (1 pad, 1 link)
type V4L2 subdev subtype Unknown flags 20001
device node name /dev/v4l-subdev21
pad0: Source
[fmt:SGRBG8_1X8/1920x1080 field:none]
-> "max9286 23-0048":2 [ENABLED,IMMUTABLE]
- entity 92: mars 35-0044 (1 pad, 1 link)
type V4L2 subdev subtype Unknown flags 20001
device node name /dev/v4l-subdev22
pad0: Source
[fmt:SGRBG8_1X8/1920x1080 field:none]
-> "max9286 23-0048":3 [ENABLED,IMMUTABLE]
root@vcu_roi_trd-2021_2:/media/card#
When the MIPI CSI-2 Rx source is not connected, it will not show any media node.
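Based on the behavior above, link-up can be decided from the media-ctl output: a healthy link shows the mipi_csi2_rx_subsystem entity with ENABLED links, while a missing source leaves no media node at all. A hedged helper (the function name and message are illustrative, not part of the TRD):

```shell
#!/bin/sh
# Illustrative link-up check: inspect captured media-ctl topology text for
# the MIPI CSI-2 Rx entity and an ENABLED link.
csi2_link_up() {
    topo="$1"    # pass the output of: media-ctl -p -d /dev/media0
    printf '%s\n' "$topo" | grep -q 'mipi_csi2_rx_subsystem' || return 1
    printf '%s\n' "$topo" | grep -q 'ENABLED' || return 1
    echo "CSI-2 Rx link-up detected"
}
```

Usage on the target: `csi2_link_up "$(media-ctl -p -d /dev/media0)"`.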
© Copyright 2019 - 2022 Xilinx Inc.