When using Low Latency mode (LLP1/LLP2), the encoder and decoder are limited by the number of internal cores: the encoder supports a maximum of four streams and the decoder a maximum of two streams.

The table below lists the features supported in this design.


TRD package contents are placed in the following directory structure. Copy all files from $TRD_HOME/images/vcu_llp2_hdmi_nv12/ to the FAT32-formatted SD card.


  • The single-stream configs (1-1080p60, 1-4kp30, and 1-4kp60) support both audio and video.

  • Because LLP2 stream-in is not supported with vcu-gst-app, sample shell scripts containing the relevant GStreamer commands are provided for all stream-in use cases. Users can modify the scripts as needed, or directly use the GStreamer pipelines provided on this wiki page.


The vcu_gst_app is a multi-threaded Linux command-line application. It requires an input configuration file (input.cfg) provided in plain text.

Run the following modetest command to set the CRTC configuration for 4kp60:
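The modetest invocation itself is not reproduced here; a typical sketch for a 4kp60 mode-set on the video mixer looks like the following. The connector ID (52), CRTC ID (40), and pixel format are placeholders and must be read from the actual board by first listing the DRM resources with `modetest -D a0070000.v_mix -c` (the a0070000.v_mix bus ID is taken from the kmssink pipelines elsewhere on this page).

```shell
# List connectors/CRTCs first, then set the 4kp60 mode.
# IDs below are illustrative placeholders, not guaranteed values.
$ modetest -D a0070000.v_mix -c
$ modetest -D a0070000.v_mix -s 52@40:3840x2160-60@NV16
```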


  • Make sure HDMI-Rx is configured to 4kp60 mode while running the example pipelines below.

  • Low latency (LLP1/LLP2) video and audio+video stream-in pipelines are not supported in vcu_gst_app.

  • The vcu_gst_app uses RTP+RTCP streaming and the Opus encoder for LLP1/LLP2 audio+video stream-out use cases.

  • All single-stream serial/streaming pipelines have the audio configuration ON by default. To execute only the display pipeline, set the Audio Enable property to FALSE in the configuration file.
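For example, a display-only run would toggle the property in the audio section of the config file. The fragment below is a sketch: only the Audio Enable property is documented above, and the surrounding section layout follows the shipped config files under /media/card/config/ and is illustrative here.

```
Audio Configuration
	Audio Enable : FALSE
	Exit
```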


Code Block
$ vcu_gst_app /media/card/config/1-4kp60/Stream-out/Single_4kp60_HEVC_25_Mbps.cfg

4kp60 NV12 HEVC ultra-low-latency (LLP2) video stream-in pipeline execution:


Code Block
$ gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, media=video, clock-rate=90000, payload=96, encoding-name=H265" ! rtpjitterbuffer latency=7 ! rtph265depay ! h265parse ! video/x-h265, alignment=nal ! omxh265dec low-latency=1 internal-entropy-buffers=5 ! video/x-raw\(memory:XLNXLL\) ! queue max-size-bytes=0 ! fpsdisplaysink name=fpssink text-overlay=false 'video-sink=kmssink bus-id=a0070000.v_mix hold-extra-sample=1 show-preroll-frame=false sync=true ' sync=true -v

4kp60 NV12 HEVC ultra-low-latency (LLP2) audio+video stream-in pipeline execution, where <server-ip> is the server’s IP address.

Code Block
$ sh /media/card/config/1-4kp60/Stream-in/


Use GST_TRACERS="interlatency" for Xilinx’s ultra-low-latency NV12 audio+video stream-out pipelines.
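The interlatency tracer comes from the GstShark tracer plugins, and its measurements are emitted through the GStreamer debug log, so the tracer debug level must be raised alongside GST_TRACERS. A generic sketch follows; videotestsrc/fakesink are used purely for illustration and should be replaced with the actual stream-out pipeline.

```shell
# Enable the GstShark interlatency tracer; output goes to the GStreamer
# debug log at tracer level 7.
$ GST_DEBUG="GST_TRACER:7" GST_TRACERS="interlatency" \
  gst-launch-1.0 videotestsrc num-buffers=60 ! fakesink sync=true
```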


2.1 Overview

The primary goal of the v4l2 capture control software encoder application is to demonstrate Xilinx’s Ultra Low-Latency feature using the VCU ctrlsw APIs. This application (v4l2_capture_ctrlsw_enc) is an enhanced version of the normal VCU ctrlsw app (ctrlsw_encoder). The normal ctrlsw_encoder application is only capable of file-based encoding, while this app captures data from the HDMI source and streams it out using GStreamer libraries.

The v4l2 capture ctrlsw encoder application has the following features:

  • Stream out encoded data captured from the HDMI source using RTP streaming.

  • Record encoded data captured from the HDMI source to a file.

  • Supports various encoding options that can be set via a config file passed as input to the application, similar to how the config file is used for ctrlsw_encoder.

  • Supports various latency modes, e.g., Xilinx’s Ultra Low Latency (LLP2) mode via --xlnx-slicelat and Low Latency (LLP1) mode via --slicelat; these can be set via the command line.
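As a sketch, selecting a latency mode might look like the lines below. Only the two latency flags (--xlnx-slicelat, --slicelat) and the config-file input are documented above; the exact argument order and the config file names here are assumptions.

```shell
# Hypothetical invocations: flag names come from the feature list above,
# the config-file argument shape is assumed from ctrlsw_encoder usage.
$ v4l2_capture_ctrlsw_enc --xlnx-slicelat encoder_llp2.cfg   # LLP2 mode
$ v4l2_capture_ctrlsw_enc --slicelat encoder_llp1.cfg        # LLP1 mode
```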


As shown in the figure above, the application performs the following operations in Xilinx’s Ultra Low Latency (LLP2) mode:

  1. The application enables SyncIP and programs address ranges as per the input video format and resolution.

  2. The application calls VIDIOC_DQBUF and sends an empty input buffer to the encoder using the early-dequeue mechanism.

  3. The encoder receives this empty buffer and starts generating read requests.

  4. The Start DMA command is issued to the v4l2 capture driver, and capture starts filling the buffer.

  5. SyncIP blocks the encoder until framebuffer-write is done writing data corresponding to the read request made by the encoder.

  6. Once the encoder is unblocked, it starts encoding data and generating output slices corresponding to unblocked input read requests.

  7. Encoded data is fed to the GStreamer appsrc and passed to the UDP sink through the RTP payloader to stream out the encoded data.

  8. Similarly, for consecutive buffers, v4l2 programs SyncIP and submits the buffer to the encoder using VIDIOC_DQBUF, and SyncIP blocks the encoder until v4l2 has written sufficient data. In this way SyncIP maintains synchronization between the producer (v4l2) and the consumer (encoder).
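The producer/consumer handshake in steps 1–8 can be modeled in software. The Python sketch below is an analogy only, not the real SyncIP hardware: a producer thread ("v4l2 capture") fills a frame buffer line by line, while a consumer thread ("encoder") blocks until enough lines exist for its next read request, then emits one output slice per unblocked request. Line counts and slice counts are illustrative.

```python
# Analogy of the SyncIP handshake: consumer blocks until producer has
# written enough lines for the next read request. Not the real hardware.
import threading

class SyncBuffer:
    def __init__(self, total_lines):
        self.total_lines = total_lines
        self.lines_written = 0
        self.cond = threading.Condition()

    def write_lines(self, n):
        # Producer side: capture writes n more lines, then wakes waiters.
        with self.cond:
            self.lines_written = min(self.lines_written + n, self.total_lines)
            self.cond.notify_all()

    def wait_for(self, needed):
        # Consumer side: "encoder" blocks until `needed` lines are present.
        with self.cond:
            self.cond.wait_for(lambda: self.lines_written >= needed)

def encoder(buf, slice_lines, slices, out):
    for s in range(1, slices + 1):
        buf.wait_for(s * slice_lines)  # blocked by "SyncIP" until data is ready
        out.append(f"slice-{s}")       # encode and emit one output slice

buf = SyncBuffer(total_lines=2160)     # one 2160-line frame
out = []
enc = threading.Thread(target=encoder, args=(buf, 270, 8, out))
enc.start()
for _ in range(8):                     # capture fills the frame in 270-line chunks
    buf.write_lines(270)
enc.join()
print(out)                             # ['slice-1', 'slice-2', ..., 'slice-8']
```

Each output slice becomes available as soon as its input lines arrive, rather than after the whole frame is captured, which is the essence of the LLP2 latency reduction.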

In the case of --slicelat (LLP1), there is no SyncIP in the input path to the encoder; the application gets the input frame filled by v4l2 using VIDIOC_DQBUF and passes it to the encoder. The encoder then reads the input frame and generates output slices as per the number of slices set in the configuration file, which are then streamed out as described in the section above.

The below figure shows the v4l2 capture control software encoder application software block diagram:



This tutorial shows how to build the above v4l2 control software encoder application’s AR package to generate Linux and boot images using the PetaLinux build tool. It assumes that the $TRD_HOME environment variable is set as given below.


  • Source the PetaLinux tool-chain using the command below:

Code Block
$ source <path/to/petalinux-installer>/tool/petalinux-v2020.2-final/

After sourcing the PetaLinux settings, the $PETALINUX environment variable should be set.
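A quick way to confirm this is a generic shell check like the one below (the helper function name is illustrative, not part of the PetaLinux tools):

```shell
# Print the PetaLinux install path if the environment was sourced,
# otherwise complain on stderr and return non-zero.
check_petalinux() {
    if [ -n "${PETALINUX:-}" ]; then
        echo "PETALINUX=$PETALINUX"
    else
        echo "PETALINUX is not set; re-source the PetaLinux settings script" >&2
        return 1
    fi
}
check_petalinux || true
```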