Zynq UltraScale+ MPSoC VCU TRD 2019.2 - Multi stream Audio-Video Design
Table of Contents
1 Overview
The primary goal of this design is to demonstrate the capabilities of the VCU hard block present in Zynq UltraScale+ EV devices together with a soft audio codec. The TRD serves as a platform to tune the performance parameters of the VCU and arrive at optimal configurations for the encoder and decoder blocks with audio-video synchronization.
This design supports the following video interfaces:
Sources:
- HDMI-Rx capture pipeline implemented in the PL.
- MIPI CSI-2 Rx capture pipeline implemented in the PL.
- File source (SD card, USB storage, SATA hard disk).
- Stream-In from network or internet.
Sinks:
- DP Tx display pipeline in the PS.
- HDMI-Tx display pipeline implemented in the PL.
VCU Codec:
- Video Encode/Decode capability using VCU hard block in PL
- AVC/HEVC encoding.
- Encoder/decoder parameter configuration.
Streaming Interfaces:
- 1G Ethernet PS GEM
Video format:
- NV12
Audio Configuration:
- Codec: AAC
- Format: S24_32LE
- Channel: 2
- Sampling rate: 48kHz
- Source: HDMI-Rx/ I2S-Rx
- Renderer: HDMI-Tx/ I2S-Tx/DP
Audio Deliverables:
Pipeline | Video Input Source | Audio Input Source | Video Output Type | Audio Output Type | ALSA Drivers | Resolution | Audio Codec Type | Audio Configuration | Video Encoder/Decoder Type | Deliverables |
---|---|---|---|---|---|---|---|---|---|---|
Record/Stream-Out | 1. HDMI-Rx | 1. HDMI-Rx 2. I2S-Rx | File-Sink/Stream-Out | File-Sink/Stream-Out | HDMI-Rx ALSA drivers | 4K/1080p | AAC | 2 channels @ 48 kHz | HEVC/AVC | HDMI-Rx audio encoded with the soft codec and video with the VCU, stored in a container format. |
Playback | File Source/Stream-In | File Source/Stream-In | DP/HDMI-Tx | 1. HDMI-Tx 2. I2S-Tx 3. DP | HDMI-Tx ALSA drivers | 4K/1080p | AAC | 2 channels @ 48 kHz | HEVC/AVC | Playback of a local file or stream-in, with video decoded by the VCU and audio by the GStreamer soft codec. |
Capture --> Display | 1. HDMI-Rx | 1. HDMI-Rx 2. I2S-Rx | DP/HDMI-Tx | 1. HDMI-Tx 2. I2S-Tx 3. DP | HDMI-Rx/Tx ALSA drivers | 4K/1080p | NA | 2 channels @ 48 kHz | HEVC/AVC | HDMI-Rx audio/video passed through to HDMI-Tx without the VCU or audio codec. |
Capture --> Encode --> Decode --> Display | 1. HDMI-Rx | 1. HDMI-Rx 2. I2S-Rx | DP/HDMI-Tx | 1. HDMI-Tx 2. I2S-Tx 3. DP | HDMI-Rx/Tx ALSA drivers | 4K/1080p | NA | 2 channels @ 48 kHz | HEVC/AVC | HDMI-Rx raw audio, with video passed through the VCU encoder and decoder, to achieve AV sync. |
- Supports a single 4kp60 stream with HDMI-Rx or I2S-Rx as the input audio source, HDMI-Rx or MIPI-Rx as the input video source, HDMI-Tx or I2S-Tx as the output audio sink, and HDMI-Tx or DP as the output video sink.
- Supports a single 4kp30 stream with HDMI-Rx or I2S-Rx as the input audio source, HDMI-Rx or MIPI-Rx as the input video source, HDMI-Tx or I2S-Tx as the output audio sink, and HDMI-Tx or DP as the output video sink.
- Supports a single 1080p60 stream with HDMI-Rx or I2S-Rx as the input audio source, HDMI-Rx or MIPI-Rx as the input video source, HDMI-Tx or I2S-Tx as the output audio sink, and HDMI-Tx or DP as the output video sink.
- Supports two 4kp30 multi-stream pipelines with HDMI-Rx and I2S-Rx as the input audio sources, HDMI-Rx and MIPI-Rx as the input video sources, HDMI-Tx and I2S-Tx as the output audio sinks, and HDMI-Tx as the output video sink.
- Supports two 1080p60 multi-stream pipelines with HDMI-Rx and I2S-Rx as the input audio sources, HDMI-Rx and MIPI-Rx as the input video sources, HDMI-Tx and I2S-Tx as the output audio sinks, and HDMI-Tx as the output video sink.
Other features:
- This design supports the single-channel, stream-based SCD (Scene Change Detection) IP for the HDMI input source only. SCD must be enabled for the HDMI input source through the configuration.
Supported Resolution:
The table below lists the resolutions supported by the GUI and the command-line application in this design.
Resolution | GUI Single Stream | Command Line Single Stream | Command Line Multi-stream |
---|---|---|---|
4kp60 | x | √ | NA |
4kp30 | √ | √ | √ (max 2) |
1080p60 | √ | √ | √ (max 2) |
√ - Supported
NA – Not applicable
x – Not supported
The sections below describe the HDMI/MIPI video capture and HDMI display design with audio from HDMI/I2S. This VCU TRD design supports HDMI-Rx audio/video with HDMI-Tx audio/video, and MIPI-Rx video + I2S-Rx audio with HDMI-Tx video + I2S-Tx audio. For the overview, software tools, system requirements, and design files, follow the link below:
The figure below shows the hardware block diagram of the HDMI/MIPI video capture, HDMI/I2S audio capture, and HDMI display with audio design.
The figure below shows the software block diagram of the HDMI/MIPI video capture, HDMI/I2S audio capture, and HDMI display with audio design.
1.1 Board Setup
Refer to the link below for the board setup.
- Zynq UltraScale+ MPSoC VCU TRD 2019.2 Board Setup
- I2S audio signals from the MPSoC PL fabric are connected to the PMOD0 GPIO header (J55, right-angle female connector).
- The Pmod I2S2 add-on card connects to the J55 connector, and its master/slave select jumper (JP1) should be placed in the Slave (SLV) position.
1.2 Run Flow
The TRD package is released with the source code, Vivado project, PetaLinux BSP, and SD card image that enable the user to run the demonstration. It also includes the binaries necessary to configure and boot the ZCU106 board. Prior to running the steps mentioned on this wiki page, download the TRD package and extract its contents to a directory referred to as 'TRD_HOME', which is the home directory.
Refer to the link below to download all TRD contents.
TRD package contents are placed in the following directory structure. The user needs to copy all the files from $TRD_HOME/images/vcu_audio/ to a FAT32-formatted SD card.
└── rdf0428-zcu106-vcu-trd-2019-2
    ├── apu
    │   └── vcu_petalinux_bsp
    ├── images
    │   ├── vcu_10g
    │   ├── vcu_audio
    │   ├── vcu_hdmi_multistream_xv20
    │   ├── vcu_hdmi_rx
    │   ├── vcu_hdmi_tx
    │   ├── vcu_llp2_hdmi_nv12
    │   ├── vcu_llp2_hdmi_nv16
    │   ├── vcu_llp2_hdmi_xv20
    │   ├── vcu_llp2_sdi_xv20
    │   ├── vcu_multistream_nv12
    │   ├── vcu_pcie
    │   ├── vcu_sdirx
    │   ├── vcu_sditx
    │   └── vcu_sdi_xv20
    ├── pcie_host_package
    │   ├── COPYING
    │   ├── etc
    │   ├── include
    │   ├── libxdma
    │   ├── LICENSE
    │   ├── README.md
    │   ├── tools
    │   └── xdma
    ├── pl
    │   ├── constrs
    │   ├── designs
    │   ├── prebuild
    │   ├── README.md
    │   └── srcs
    └── README.txt
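The copy step above can be sketched as a small shell helper. This is illustrative only: the function name and the example paths are assumptions, and the SD card mount point varies by setup; the card must hold a FAT32 partition.

```shell
# Copy the Multi-stream Audio-Video boot collateral onto the SD card.
# copy_sd_files and the example paths are illustrative placeholders;
# substitute your extracted package location and your card's mount point.
copy_sd_files() {
    local trd_home="$1" sd_mount="$2"
    # Copy the directory contents (including subdirectories such as bin/).
    cp -r "$trd_home/images/vcu_audio/." "$sd_mount/"
}

# Example (assumed paths):
# copy_sd_files "$HOME/rdf0428-zcu106-vcu-trd-2019-2" /media/sdcard
```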
TRD package contents specific to the Multi-stream Audio-Video design are placed in the following directory structure.
└── rdf0428-zcu106-vcu-trd-2019-2
    ├── apu
    │   └── vcu_petalinux_bsp
    ├── images
    │   └── vcu_audio
    │       ├── autostart.sh
    │       ├── bin
    │       ├── BOOT.BIN
    │       ├── config
    │       ├── image.ub
    │       ├── system.dtb
    │       └── vcu
    ├── pcie_host_package
    │   ├── COPYING
    │   ├── etc
    │   ├── include
    │   ├── libxdma
    │   ├── LICENSE
    │   ├── README.md
    │   ├── tools
    │   └── xdma
    ├── pl
    │   ├── constrs
    │   ├── designs
    │   ├── prebuild
    │   ├── README.md
    │   └── srcs
    └── README.txt
Configuration files (input.cfg) for the various resolutions are placed in the following directory structure under /media/card.
config
├── 1-4kp60
│   ├── Display
│   ├── Record
│   ├── Stream-in
│   └── Stream-out
├── 2-1080p60
│   ├── Display
│   ├── Record
│   ├── Stream-in
│   └── Stream-out
├── 2-4kp30
│   ├── Display
│   ├── Record
│   ├── Stream-in
│   └── Stream-out
└── input.cfg
1.2.1 GStreamer Application (vcu_gst_app)
The vcu_gst_app is a command-line, multi-threaded Linux application. It requires an input configuration file (input.cfg) provided in plain text.
Execution of the application is shown below:
% vcu_gst_app <path to *.cfg file>
Example:
4kp60 HEVC_HIGH Display Pipeline execution
% vcu_gst_app /media/card/config/Single_4kp60_HDMI_HDMI/Display/Single_HDMI_HDMI_4kp60_HEVC_HIGH.cfg
4kp60 HEVC_HIGH Record Pipeline execution
% vcu_gst_app /media/card/config/Single_4kp60_HDMI_HDMI/Record/Single_HDMI_HDMI_4kp60_HEVC_HIGH.cfg
4kp60 HEVC_HIGH Stream-out Pipeline execution
% vcu_gst_app /media/card/config/Single_4kp60_HDMI_HDMI/Stream-out/Single_HDMI_HDMI_4kp60_HEVC_HIGH.cfg
4kp60 HEVC_HIGH Stream-in Pipeline execution
% vcu_gst_app /media/card/config/Single_4kp60_HDMI_HDMI/Stream-in/Single_HDMI_HDMI_4kp60_HEVC_HIGH.cfg
NOTE: Make sure the HDMI-Rx source is configured to 4kp60 mode.
To measure the latency of the pipeline, run the command below. The latency output is very verbose, so dump it to a file.
% GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency;scheduletime" ./vcu_gst_app ./input.cfg >& dump_log.txt
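The dumped log can be post-processed on the host. The sketch below assumes the latency tracer's `time=(guint64)N` fields carry the measured latency in nanoseconds; the exact log format can vary between GStreamer versions, so adjust the pattern if needed.

```shell
# Pull the per-measurement latency values (in nanoseconds) out of the
# tracer dump. Assumes the "time=(guint64)N" field of the latency tracer;
# adjust the grep pattern for your GStreamer version's log format.
latency_ns() {
    grep -o 'time=(guint64)[0-9]*' "$1" | grep -o '[0-9]*$'
}

# Example: average latency in milliseconds over the whole run
# latency_ns dump_log.txt | awk '{ s += $1; n++ } END { if (n) print s / n / 1000000 " ms" }'
```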
Refer to the link below for detailed run flow steps.
1.3 Build Flow
Refer to the link below for the build flow.
2 Other Information
2.1 Known Issues
- Block noise is observed with the AVC_MEDIUM and AVC_LOW presets in 4kp60 pipelines.
- The Digilent Pmod card does not support passive sources such as microphones; only active sources should be connected. Here, the source is an aux cable connected between the source (e.g. a laptop) and the Pmod card.
- For PetaLinux-related known issues, please refer to AR# 72950: PetaLinux 2019.2 - Product Update Release Notes and Known Issues.
- For VCU-related known issues, please refer to AR# 66763: LogiCORE H.264/H.265 Video Codec Unit (VCU) - Release Notes and Known Issues, and Xilinx Zynq UltraScale+ MPSoC Video Codec Unit.
2.2 Limitations
- For playback on DP, the video input resolution should match DP's native resolution. This constraint comes from the GUI: if a video source other than the native resolution were allowed (by setting a fullscreen overlay), the graphics layer would disappear, and the GUI app would have to be killed and relaunched to recover it. To avoid this condition, the TRD only supports video input resolutions equal to DP's native resolution.
- For PetaLinux-related limitations, please refer to AR# 72950: PetaLinux 2019.2 - Product Update Release Notes and Known Issues.
- For VCU-related limitations, please refer to AR# 66763: LogiCORE H.264/H.265 Video Codec Unit (VCU) - Release Notes and Known Issues, Xilinx Zynq UltraScale+ MPSoC Video Codec Unit, and the PG252 link.
2.3 Optimum VCU Encoder parameters for use-cases:
Video streaming:
- Video streaming use-case requires a very stable bitrate graph for all pictures.
- It is good to avoid periodic large Intra pictures during the encoding session.
- Low-latency rate control (hardware RC) is the preferred control-rate for video streaming; it tries to maintain equal frame sizes across all pictures.
- It is good to avoid periodic Intra frames; instead, use low-delay-p (IPPPP…).
- VBR is not a preferred mode for streaming.
Performance: AVC Encoder settings:
- It is preferred to use eight or more slices for better AVC encoder performance.
- AVC standard does not support Tile mode processing which results in the processing of MB rows sequentially for entropy coding.
Quality: Low bitrate AVC encoding:
- Enable profile=high and use qp-mode=auto for low-bitrate encoding use-cases.
- The high profile enables 8x8 transform which results in better video quality at low bitrates.
2.4 Audio-Video Synchronization
Clocks and synchronization in GStreamer
When playing complex media, each sound and video sample must be played in a specific order at a specific time. For this purpose, GStreamer provides a synchronization mechanism.
GStreamer provides support for the following use cases:
- Non-live sources with access faster than playback rate. This is the case where one is reading media from a file and playing it back in a synchronized fashion. In this case, multiple streams need to be synchronized, like audio, video and subtitles.
- Capture and synchronized muxing/mixing of media from multiple live sources. This is a typical use case where you record audio and video from a microphone/camera and mux it into a file for storage.
- Streaming from (slow) network streams with buffering. This is the typical web streaming case where you access content from a streaming server using HTTP.
- Capture from live source and playback with configurable latency. This is used, for example, when capturing from a camera, applying an effect and displaying the result. It is also used when streaming low latency content over a network with UDP.
- Simultaneous live capture and playback from prerecorded content. This is used in audio recording cases where you play a previously recorded audio and record new samples, the purpose is to have the new audio perfectly in sync with the previously recorded data.
GStreamer uses a GstClock object, buffer timestamps and a SEGMENT event to synchronize streams in a pipeline as we will see in the next sections.
See the GStreamer documentation for more information:
Clock running-time
In a typical computer, there are many sources that can be used as a time source, e.g., the system time, soundcards, CPU performance counters, etc. For this reason, GStreamer has many GstClock implementations available. Note that clock time doesn't have to start from 0 or any other known value. Some clocks start counting from a particular start date, others from the last reboot, etc.
A GstClock returns the absolute-time according to that clock with gst_clock_get_time (). The absolute-time (or clock time) of a clock is monotonically increasing.
A running-time is the difference between a previous snapshot of the absolute-time called the base-time, and any other absolute-time.
running-time = absolute-time - base-time
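A quick numeric illustration of the formula above (the nanosecond values are arbitrary, chosen for illustration, not taken from a real clock):

```shell
# running-time = absolute-time - base-time, all in GStreamer's nanosecond
# units. A clock read 2 s after the base-time snapshot yields a
# running-time of 2 s.
base_time=5000000000        # absolute-time snapshot taken when going to PLAYING
absolute_time=7000000000    # a later gst_clock_get_time () reading
running_time=$((absolute_time - base_time))
echo "running-time = ${running_time} ns"
```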
A GStreamer GstPipeline object maintains a GstClock object and a base-time when it goes to the PLAYING state. The pipeline gives a handle to the selected GstClock to each element in the pipeline along with selected base-time. The pipeline will select a base-time in such a way that the running-time reflects the total time spent in the PLAYING state. As a result, when the pipeline is PAUSED, the running-time stands still.
Because all objects in the pipeline have the same clock and base-time, they can thus all calculate the running-time according to the pipeline clock.
Buffer running-time
To calculate a buffer running-time, we need a buffer timestamp and the SEGMENT event that preceded the buffer. First we can convert the SEGMENT event into a GstSegment object and then we can use the gst_segment_to_running_time () function to perform the calculation of the buffer running-time.
Synchronization is now a matter of making sure that a buffer with a certain running-time is played when the clock reaches the same running-time. Usually, this task is performed by sink elements. These elements also have to take into account the configured pipeline's latency and add it to the buffer running-time before synchronizing to the pipeline clock.
Non-live sources timestamp buffers with a running-time starting from 0. After a flushing seek, they will produce buffers again from a running-time of 0.
Live sources need to timestamp buffers with a running-time matching the pipeline running-time when the first byte of the buffer was captured.
Buffer stream-time
The buffer stream-time, also known as the position in the stream, is a value between 0 and the total duration of the media and it's calculated from the buffer timestamps and the preceding SEGMENT event.
The stream-time is used in:
- Report the current position in the stream with the POSITION query.
- The position used in the seek events and queries.
- The position used to synchronize controlled values.
The stream-time is never used to synchronize streams, this is only done with the running-time.
Time overview
Here is an overview of the various timelines used in GStreamer.
The image below represents the different times in the pipeline when playing a 100ms sample and repeating the part between 50ms and 100ms.
You can see how the running-time of a buffer always increments monotonically along with the clock-time. Buffers are played when their running-time is equal to the clock-time - base-time. The stream-time represents the position in the stream and jumps backwards when repeating.
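The repeat example can be put into numbers (illustrative millisecond values): after the first 100 ms pass, the 50-100 ms segment is replayed, so running-time keeps increasing while stream-time jumps back to the segment start.

```shell
# Illustrative numbers for the repeated 50-100 ms segment.
segment_start=50       # ms, start of the repeated segment
first_pass_end=100     # ms of running-time consumed by the first pass
buffer_ts=50           # timestamp of the first buffer of the repeat
# running-time continues monotonically past the first pass...
running_time=$((first_pass_end + buffer_ts - segment_start))
# ...while stream-time jumps back to the buffer's position in the stream.
stream_time=$buffer_ts
echo "running-time=${running_time}ms stream-time=${stream_time}ms"
```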
Clock providers
A clock provider is an element in the pipeline that can provide a GstClock object. The clock object needs to report an absolute-time that is monotonically increasing when the element is in the PLAYING state. It is allowed to pause the clock while the element is PAUSED.
Clock providers exist because they play back media at some rate, and this rate is not necessarily the same as the system clock rate. For example, a soundcard may play back at 44.1 kHz, but that doesn't mean that after exactly 1 second according to the system clock, the soundcard has played back 44100 samples. This is only true by approximation. In fact, the audio device has an internal clock based on the number of samples played that we can expose.
If an element with an internal clock needs to synchronize, it needs to estimate when a time according to the pipeline clock will take place according to the internal clock. To estimate this, it needs to slave its clock to the pipeline clock.
If the pipeline clock is exactly the internal clock of an element, the element can skip the slaving step and directly use the pipeline clock to schedule playback. This can be both faster and more accurate. Therefore, generally, elements with an internal clock like audio input or output devices will be a clock provider for the pipeline.
When the pipeline goes to the PLAYING state, it will go over all elements in the pipeline from sink to source and ask each element if they can provide a clock. The last element that can provide a clock will be used as the clock provider in the pipeline. This algorithm prefers a clock from an audio sink in a typical playback pipeline and a clock from source elements in a typical capture pipeline.
There exist some bus messages to let you know about the clock and clock providers in the pipeline. You can see what clock is selected in the pipeline by looking at the NEW_CLOCK message on the bus. When a clock provider is removed from the pipeline, a CLOCK_LOST message is posted and the application should go to PAUSED and back to PLAYING to select a new clock.
For more detail please refer: https://gstreamer.freedesktop.org/documentation/application-development/advanced/clocks.html?gi-language=c
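The clock selection is easy to observe in practice: gst-launch-1.0 prints a "New clock: &lt;name&gt;" line once the pipeline picks its provider. The helper below simply filters that line out of a saved launch log; the pipeline shown in the comment is only an example.

```shell
# Print the clock a pipeline selected, given a saved gst-launch-1.0 log.
# Relies only on the "New clock: <name>" line that gst-launch-1.0 emits.
new_clock_of() {
    sed -n 's/^New clock: //p' "$1"
}

# Example (run on the board; the pipeline is illustrative):
# gst-launch-1.0 audiotestsrc ! alsasink > launch.log 2>&1
# new_clock_of launch.log      # e.g. an audio sink clock for this pipeline
```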
3 Appendix A - Input Configuration File (input.cfg)
The example configuration files are stored in the /media/card/config/ folder.
Common Configuration:
It is the starting point of common configuration.
Num of Input:
Provide the number of inputs: 1 for single-stream and 2 for multi-stream.
Output:
Select the video interface.
Options: HDMI or DP
Out Type:
Options: display, record, and stream
Display Rate:
Pipeline frame rate.
Options: 30 FPS or 60 FPS for each stream.
Exit:
It indicates to the application that the configuration is over.
Input Configuration:
It is the starting point of the input configuration.
Input Num:
Starting Nth input configuration.
Options: 1-2
Input Type:
Input source type.
Options: HDMI, MIPI, File, Stream
Uri:
File path or Network URL. Applicable for file playback and stream-in pipeline only. Supported file formats for playback are ts and mkv.
Options: file:///media/usb/abc.ts (for a file path), udp://192.168.25.89:5004/ (for network streaming; here 192.168.25.89 is the IP address and 5004 is the port number)
Raw:
Specifies whether the pipeline is pass-through (raw) or processed.
Options: True, False
Width:
The width of the live source.
Options: 3840, 1920
Height:
The height of the live source.
Options: 2160, 1080
Enable SCD:
Stream-based SCD is supported with the HDMI input source only and must be enabled for it.
Options: True, False
Exit:
It indicates to the application that the configuration is over.
Encoder Configuration:
It is the starting point of encoder configuration.
Encoder Num:
Starting Nth encoder configuration.
Options: 1-2
Encoder Name:
Name of the encoder.
Options: AVC, HEVC
Profile:
Name of the profile.
Options: baseline, main or high for AVC. Main for HEVC.
Rate Control:
Rate control options.
Options: CBR, VBR, and low-latency.
Filler Data:
Filler Data NAL units for CBR rate control.
Options: True, False
QP:
QP control mode used by the VCU encoder.
Options: Uniform, Auto
L2 Cache:
Enable or disable the L2 cache buffer in the encoding process.
Options: True, False
Latency Mode:
Encoder latency mode.
Options: normal, sub_frame
Low Bandwidth:
If enabled, decrease the vertical search range used for P-frame motion estimation to reduce the bandwidth.
Options: True, False
Gop Mode:
Group of Pictures mode.
Options: Basic, low_delay_p, low_delay_b
Bitrate:
Target bitrate in Kbps
Options: 1-60000
B Frames:
Number of B-frames between two consecutive P-frames
Options: 0-4
Slice:
The number of slices produced for each frame. Each slice contains one or more complete macroblock/CTU row(s). Slices are distributed over the frame as regularly as possible. If slice-size is defined as well more slices may be produced to fit the slice-size requirement.
Options:
4-22 4kp resolution with HEVC codec
4-32 4kp resolution with AVC codec
4-32 1080p resolution with HEVC codec
4-32 1080p resolution with AVC codec
GoP Length:
The distance between two consecutive I frames
Options: 1-1000
Format:
The format of input data.
Options: NV12
Preset:
Options: HEVC_HIGH, HEVC_MEDIUM, HEVC_LOW, AVC_HIGH, AVC_MEDIUM, AVC_LOW, Custom
Exit
It indicates to the application that the configuration is over.
Record Configuration:
It is the starting point of record configuration.
Record Num:
Starting Nth record configuration.
Options: 1-2
Out-File Name:
Record file path.
Options: /media/usb/abc.ts
Duration:
Duration in minutes.
Options: 1-3
Exit
It indicates to the application that the configuration is over.
Streaming Configuration:
It is the starting point of streaming configuration.
Streaming Num:
Starting Nth Streaming configuration.
Options: 1-2
Host IP:
The host to send the packets to
Options: 192.168.25.89 or Windows PC IP
Port:
The port to send the packets to
Options: 5004, 5006
Exit
It indicates to the application that the configuration is over.
Audio Configuration:
It is the starting point of the audio configuration.
Audio Format:
The format of the audio.
Options: S24_32LE
Sampling Rate:
The audio sampling rate.
Options: 48000
Audio Channel:
The number of audio channels.
Options: 1-2
Volume:
To set the volume level. The default value is 2.0.
Options: 0.0 - 10.0
Trace Configuration:
It is the starting point of trace configuration.
FPS Info:
To display fps info on the console.
Options: True, False
APM Info:
To display APM counter number on the console.
Options: True, False
Pipeline Info:
To display pipeline info on console.
Options: True, False
Exit
It indicates to the application that the configuration is over.
Mount Locations:
The mount locations for various devices can be found in the table below.
The mount locations can vary; users can use lsblk or mount to find the location of the mounted devices.
Below are some example mount points:
Device | Mount Location |
---|---|
SD Card | /run/media/mmcblk0p2 |
SATA Drive | /run/media/sda |
USB Drive | /run/media/usb |
RAM Disk | /run/media/ |
4 Appendix B
- Kill the Qt GUI application running on the target board by executing the commands below from the serial console.
$ killall -9 run_vcu.sh
$ killall -9 vcu_qt
$ killall -9 Xorg
- The HDMI source can be locked to any resolution. Run the command below for all media nodes to print the media device topology, where "mediaX" represents the different media nodes. In the topology log, look for the "v_hdmi_rx_ss" string to identify the HDMI input source media node.
$ xmedia-ctl -p -d /dev/mediaX
When HDMI source is connected to 1080P60 resolution, it shows:
root@zcu106_vcu_trd:/media/card# xmedia-ctl -p -d /dev/media1
Media controller API version 4.19.0

Media device information
------------------------
driver          xilinx-video
model           Xilinx Video Composite Device
serial
bus info
hw revision     0x0
driver version  4.19.0

Device topology
- entity 1: vcap_hdmi output 0 (1 pad, 1 link)
            type Node subtype V4L flags 0
            device node name /dev/video0   -----> Video node for HDMI Rx source
        pad0: Sink
                <- "xlnx-scdchan.0":1 [ENABLED]

- entity 5: xlnx-scdchan.0 (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev6
        pad0: Sink
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                <- "a0080000.v_proc_ss":1 [ENABLED]
        pad1: Source
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                -> "vcap_hdmi output 0":0 [ENABLED]

- entity 8: a0080000.v_proc_ss (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev7
        pad0: Sink
                [fmt:RBG888_1X24/3840x2160 field:none]
                <- "a0000000.v_hdmi_rx_ss":0 [ENABLED]
        pad1: Source
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                -> "xlnx-scdchan.0":0 [ENABLED]

- entity 11: a0000000.v_hdmi_rx_ss (1 pad, 1 link)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev8
        pad0: Source
                [fmt:RBG888_1X24/1920x1080 field:none colorspace:srgb]
                [dv.caps:BT.656/1120 min:0x0@25000000 max:4096x2160@297000000
                 stds:CEA-861,DMT,CVT,GTF caps:progressive,reduced-blanking,custom]
                [dv.detect:BT.656/1120 1920x1080p60 (2200x1125) stds:CEA-861 flags:CE-video]
                -> "a0080000.v_proc_ss":0 [ENABLED]
NOTE: Check resolution and frame-rate of "dv.detect" under "v_hdmi_rx_ss" node.
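The dv.detect check can be scripted. The sketch below extracts the detected mode token (e.g. 1920x1080p60) from the xmedia-ctl output shown above; the sed pattern assumes the [dv.detect:...] line format printed by this media controller version.

```shell
# Extract the detected resolution/rate token from an xmedia-ctl topology
# dump fed on stdin. Matches the "[dv.detect:BT.656/1120 1920x1080p60 ...]"
# line shape shown in this document; adjust if your output differs.
dv_detect_mode() {
    sed -n 's/.*dv\.detect:[^ ]* \([0-9]*x[0-9]*[pi][0-9]*\).*/\1/p'
}

# Example (run on the board):
# xmedia-ctl -p -d /dev/media1 | dv_detect_mode
# prints e.g. 1920x1080p60 when a 1080p60 source is detected
```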
When the HDMI source is not connected, it shows:
root@zcu106_vcu_trd:/media/card# xmedia-ctl -p -d /dev/media1
Media controller API version 4.19.0

Media device information
------------------------
driver          xilinx-video
model           Xilinx Video Composite Device
serial
bus info
hw revision     0x0
driver version  4.19.0

Device topology
- entity 1: vcap_hdmi output 0 (1 pad, 1 link)
            type Node subtype V4L flags 0
            device node name /dev/video0   -----> Video node for HDMI Rx source
        pad0: Sink
                <- "xlnx-scdchan.0":1 [ENABLED]

- entity 5: xlnx-scdchan.0 (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev6
        pad0: Sink
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                <- "a0080000.v_proc_ss":1 [ENABLED]
        pad1: Source
                [fmt:VYYUYY8_1X24/3840x2160 field:none]
                -> "vcap_hdmi output 0":0 [ENABLED]

- entity 8: a0080000.v_proc_ss (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev7
        pad0: Sink
                [fmt:VYYUYY8_1X24/1280x720 field:none colorspace:srgb]
                <- "a0000000.v_hdmi_rx_ss":0 [ENABLED]
        pad1: Source
                [fmt:VYYUYY8_1X24/1920x1080 field:none colorspace:srgb]
                -> "xlnx-scdchan.0":0 [ENABLED]

- entity 11: a0000000.v_hdmi_rx_ss (1 pad, 1 link)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev8
        pad0: Source
                [fmt:RBG888_1X24/3840x2160 field:none colorspace:srgb]
                [dv.caps:BT.656/1120 min:0x0@25000000 max:4096x2160@297000000
                 stds:CEA-861,DMT,CVT,GTF caps:progressive,reduced-blanking,custom]
                [dv.query:no-link]   -----> HDMI Rx Link Status
                -> "a0080000.v_proc_ss":0 [ENABLED]
NOTE: Here, "dv.query:no-link" under the "v_hdmi_rx_ss" node shows that the HDMI-Rx source is not connected, or that it is not active (try waking the device by pressing a key on its remote).
Notes on setting the format of the SCD channel in the media1 node:
- Run the following command to check the current resolution of the SCD nodes (here, media1 has the SCD node combined with video0):
$ xmedia-ctl -p -d /dev/media1
- Make sure the SCD media node resolution is set as per the current pipeline resolution. Run the following commands to change the resolution of the SCD nodes (here, media1 is the combined SCD media node and xlnx-scdchan.0 is the SCD channel):
- For 4kp resolution
$ xmedia-ctl -d /dev/media1 -V "\"xlnx-scdchan.0\":0 [fmt:VYYUYY8_1X24/3840x2160 field:none]"
$ xmedia-ctl -d /dev/media1 -V "\"xlnx-scdchan.0\":1 [fmt:VYYUYY8_1X24/3840x2160 field:none]"
- For 1080p resolution
$ xmedia-ctl -d /dev/media1 -V "\"xlnx-scdchan.0\":0 [fmt:VYYUYY8_1X24/1920x1080 field:none]"
$ xmedia-ctl -d /dev/media1 -V "\"xlnx-scdchan.0\":1 [fmt:VYYUYY8_1X24/1920x1080 field:none]"
- Follow the below steps to switch the HDMI-Rx resolution from 1080p60 to 4kp60.
- Check current HDMI Input Source Resolution (1080p60) by following the above-mentioned steps.
- Run vcu_gst_app for current HDMI resolution (1080p60) by executing the following command.
$ vcu_gst_app /media/card/config/input.cfg
The below configuration needs to be set in input.cfg for HDMI 1080p60.
Common Configuration : START
Num Of Input : 1
Output : HDMI
Out Type : Display
Frame Rate : 60
Exit
Input Configuration : START
Input Num : 1
Input Type : hdmi
Raw : TRUE
Width : 1920
Height : 1080
Enable SCD : TRUE
Exit
- Change Resolution of HDMI Input Source from 1080p60 to 4kp60 by following the below steps.
- Set the HDMI source resolution to 4kp60 (Home page → settings → display & Sound → Resolution → change to 4kp60).
- Save the configuration for the change to take effect.
- Verify the desired HDMI Input Source Resolution (4kp60) by following the above-mentioned steps.
- If HDMI Tx link-up issue is observed after Linux booting, use the following command:
$ modetest -D a0070000.v_mix -s 40:3840x2160-60@AR24 -w 36:"alpha":0
- Run the following gst-launch-1.0 command to capture and play raw HDMI video and Audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, width=3840, height=2160, framerate=60/1, format=NV12 ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" alsasrc device=hw:2,1 ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! queue ! alsasink device="hw:2,0"
- Run the following gst-launch-1.0 command to capture and play processed (capture → encode → decode → display) HDMI video and raw HDMI audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, width=3840, height=2160, framerate=60/1, format=NV12 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 ! video/x-h265, profile=main, alignment=au ! queue ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" alsasrc device=hw:2,1 ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! queue ! alsasink device="hw:2,0"
- Run the following gst-launch-1.0 command to record HDMI video and audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 num-buffers=3600 ! video/x-raw, format=NV12, width=3840, height=2160, framerate=60/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 ! video/x-h265, profile=main, alignment=au ! queue ! mux. alsasrc device=hw:2,1 num-buffers=3600 ! audio/x-raw, format=S24_32LE, rate=48000, channels=2 ! queue ! audioconvert ! audioresample ! faac ! aacparse ! mpegtsmux name=mux ! filesink location = "/media/usb/test.ts"
NOTE: The file should be located on a USB 3.0 drive to avoid read/write bandwidth issues.
- Run the following gst-launch-1.0 command to play the recorded file using the GStreamer pipeline.
$ gst-launch-1.0 uridecodebin uri="file:///media/usb/test.ts" name=decode ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" decode. ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! queue ! alsasink device="hw:2,0"
NOTE: The file should be located on a USB 3.0 drive to avoid read/write bandwidth issues.
- Run the following gst-launch-1.0 command to stream-out HDMI video and audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, format=NV12, width=3840, height=2160, framerate=60/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! queue ! mux. alsasrc device=hw:2,1 ! audio/x-raw, format=S24_32LE, rate=48000, channels=2 ! queue ! audioconvert ! audioresample ! faac ! aacparse ! mpegtsmux name=mux ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5004
NOTE: Here, 192.168.25.89 is the host/client IP address and 5004 is the port number.
- Run the following gst-launch-1.0 command to play stream-in video and audio using the GStreamer pipeline, where 5004 is the port number.
$ gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux name=demux demux. ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" demux. ! queue ! aacparse ! faad ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! alsasink device="hw:2,0"
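The stream-out and stream-in commands above hard-code the client address and port at opposite ends of the link. A minimal sketch (not part of the TRD; the HOST/PORT variable names are assumptions) of deriving both pipeline fragments from one pair of variables so the two ends stay consistent:

```shell
#!/bin/sh
# Sketch: build the udpsink tail (stream-out) and udpsrc head (stream-in)
# from shared variables. Substitute HOST with the client address used on
# your network; defaults match the values in the commands above.
HOST=${HOST:-192.168.25.89}
PORT=${PORT:-5004}
SINK_ARGS="udpsink host=${HOST} port=${PORT}"       # tail of the stream-out pipeline
SRC_ARGS="udpsrc port=${PORT} buffer-size=60000000" # head of the stream-in pipeline
echo "$SINK_ARGS"
echo "$SRC_ARGS"
```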
GST LAUNCH COMMANDS FOR MIPI video, I2S Audio:
- Run the following gst-launch-1.0 command to capture and play raw MIPI video and I2S audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! video/x-raw, width=3840, height=2160, framerate=60/1, format=NV12 ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" alsasrc device=hw:2,1 ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! queue ! alsasink device="hw:0,0"
- Run the following gst-launch-1.0 command to capture and play processed (capture → encode → decode → display) MIPI video and raw I2S audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! video/x-raw, width=3840, height=2160, framerate=60/1, format=NV12 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 ! video/x-h265, profile=main, alignment=au ! queue ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" alsasrc device=hw:2,1 ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! queue ! alsasink device="hw:0,0"
- Run the following gst-launch-1.0 command to record MIPI video and I2S audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 num-buffers=3600 ! video/x-raw, format=NV12, width=3840, height=2160, framerate=60/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 ! queue ! video/x-h265, profile=main, alignment=au ! mux. alsasrc device=hw:0,1 num-buffers=3600 ! audio/x-raw, format=S24_32LE, rate=48000, channels=2 ! queue ! audioconvert ! audioresample ! faac ! mpegtsmux name=mux ! filesink location="/media/usb/test.ts"
NOTE: The file should be stored on a USB 3.0 device to avoid read/write bandwidth issues.
- Run the following gst-launch-1.0 command to play the recorded file using the GStreamer pipeline.
$ gst-launch-1.0 uridecodebin uri="file:///media/usb/test.ts" name=decode ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" decode. ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! queue ! alsasink device="hw:0,0"
NOTE: The file should be stored on a USB 3.0 device to avoid read/write bandwidth issues.
- Run the following gst-launch-1.0 command to stream-out MIPI video and I2S audio using the GStreamer pipeline.
$ gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! video/x-raw, format=NV12, width=3840, height=2160, framerate=60/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! queue ! mux. alsasrc device=hw:0,1 ! audio/x-raw, format=S24_32LE, rate=48000, channels=2 ! queue ! audioconvert ! audioresample ! faac ! aacparse ! mpegtsmux name=mux ! rtpmp2tpay ! udpsink host=192.168.25.89 port=5004
NOTE: Here 192.168.25.89 is the host/client IP address and 5004 is the port number.
- Run the following gst-launch-1.0 command to play stream-in video and audio using the GStreamer pipeline, where 5004 is the port number.
$ gst-launch-1.0 udpsrc port=5004 buffer-size=60000000 caps="application/x-rtp, clock-rate=90000" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsparse ! video/mpegts ! tsdemux name=demux demux. ! queue ! h265parse ! video/x-h265, profile=main, alignment=au ! omxh265dec internal-entropy-buffers=5 low-latency=0 ! queue max-size-bytes=0 ! kmssink bus-id="a0070000.v_mix" demux. ! queue ! aacparse ! faad ! audioconvert ! audioresample ! audio/x-raw, rate=48000, channels=2, format=S24_32LE ! alsasink device="hw:0,0"
Notes for gst-launch-1.0 commands:
- Make sure the HDMI-Rx media pipeline is configured for 4kp60 resolution and that the source and sink use the same color format. Run the xmedia-ctl commands below to set the resolution and format of the HDMI scaler node.
When HDMI Input Source is NVIDIA SHIELD
$ xmedia-ctl -d /dev/media1 -V "\"a0080000.v_proc_ss\":0 [fmt:RBG888_1X24/3840x2160 field:none]"
$ xmedia-ctl -d /dev/media1 -V "\"a0080000.v_proc_ss\":1 [fmt:VYYUYY8_1X24/3840x2160 field:none]"
NOTE: Make sure NVIDIA SHIELD is configured for 4kp resolution and RGB888 color format.
When HDMI Input Source is ABOX
$ xmedia-ctl -d /dev/media1 -V "\"a0080000.v_proc_ss\":0 [fmt:VYYUYY8_1X24/3840x2160 field:none]"
$ xmedia-ctl -d /dev/media1 -V "\"a0080000.v_proc_ss\":1 [fmt:VYYUYY8_1X24/3840x2160 field:none]"
NOTE: Make sure ABOX is configured for 4kp resolution and VYYUYY8 color format.
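The two xmedia-ctl variants above differ only in the scaler's sink-pad (pad 0) format. A hypothetical wrapper (the script and its shield/abox argument convention are assumptions; the media device, entity name, and format codes come from the commands above) that selects the pad-0 format from the source type and prints the matching commands:

```shell
#!/bin/sh
# Hypothetical helper (not part of the TRD): pick the v_proc_ss pad-0 format
# for the connected HDMI source, then print the two xmedia-ctl commands so
# they can be reviewed or piped to a shell.
SRC=${1:-shield}
case "$SRC" in
  shield) PAD0_FMT=RBG888_1X24 ;;   # NVIDIA SHIELD drives RGB888
  abox)   PAD0_FMT=VYYUYY8_1X24 ;;  # ABOX drives VYYUYY8
  *)      echo "usage: $0 shield|abox" >&2; exit 1 ;;
esac
echo "xmedia-ctl -d /dev/media1 -V '\"a0080000.v_proc_ss\":0 [fmt:${PAD0_FMT}/3840x2160 field:none]'"
echo "xmedia-ctl -d /dev/media1 -V '\"a0080000.v_proc_ss\":1 [fmt:VYYUYY8_1X24/3840x2160 field:none]'"
```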
© Copyright 2019 - 2022 Xilinx Inc.