reVISION Getting Started Guide 2017.4 rev2

Table of Contents

1 Revision History
2 Introduction
3 Overview
3.1 Platform
3.2 Design Examples
4 Software Tools and System Requirements
4.1 Hardware
4.2 Software
4.3 Licensing
4.4 Compatibility
5 Design File Hierarchy
6 Installation and Operating Instructions
6.1 Board Setup
6.2 Extract the design zip files
7 Tool Flow Tutorials
7.1 Build the Live_IO Optical Flow sample application
7.2 Build the Stereo, the Filter2D, and the Triple sample applications
7.3 Build the File IO sample applications
8 Run the Application
8.1 Run the live_IO sample applications
8.2 GStreamer elements
8.3 Particularities about the Stereo demo
9 Platform Details
9.1 Vivado Hardware Design
9.2 PetaLinux BSP
9.3 Video Command Line Utility
10 Other Information
10.1 Known Issues
10.2 Limitations
11 Support
12 References

1 Revision History


This Getting Started Guide complements the 2017.4 rev2 version of the ZCU102 and ZCU104 reVISION platforms.
For other versions, refer to the reVISION Getting Started Guide overview page.

Change Log:
rev2

rev1



2 Introduction


The Xilinx reVISION stack includes a range of development resources for platform, algorithm, and application development. This includes support for the most popular neural networks, including AlexNet, GoogLeNet, VGG, SSD, and FCN. Additionally, the stack provides library elements, including pre-defined and optimized implementations of CNN layers, required to build custom neural networks (DNN/CNN). The machine learning elements are complemented by a broad set of acceleration-ready OpenCV functions for computer vision processing. For application-level development, Xilinx supports industry-standard frameworks and libraries, including Caffe for machine learning and OpenCV for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, including support for various types of sensors. For more information, go to the Xilinx reVISION webpage.



3 Overview


The figure below shows a block diagram of the reVISION single-sensor design:



3.1 Platform


The ZCU102/ZCU104 single-sensor reVISION platform supports the following video interfaces:

Sources:
Sinks:

3.2 Design Examples


File I/O:
These are the simplest design examples. Typically they read a frame of video from a standard image file using a standard OpenCV call (such as cv::imread()), process that frame with a call to an xfopencv function, and write the result to a file (e.g., using cv::imwrite()). They illustrate the use of five different hardware-accelerated xfopencv versions of popular OpenCV functions.
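
The pattern is small enough to sketch. The snippet below is illustrative only: process_accel() is a hypothetical stand-in for the generated xfopencv call (stubbed here with a plain OpenCV blur), not the actual sample code.

#include <opencv2/opencv.hpp>

// Hypothetical stand-in for the HW-accelerated function; the real samples
// call the specific xfopencv-generated function for each example.
static void process_accel(const cv::Mat &in, cv::Mat &out) {
    cv::GaussianBlur(in, out, cv::Size(3, 3), 0);  // placeholder processing
}

int main(int argc, char **argv) {
    if (argc < 3) return 1;                                   // usage: app <in> <out>
    cv::Mat src = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);  // read input file
    if (src.empty()) return 1;
    cv::Mat dst(src.size(), src.type());
    process_accel(src, dst);                                  // accelerated processing step
    cv::imwrite(argv[2], dst);                                // write output file
    return 0;
}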

Live I/O:
These examples input and output live video.




The table below shows the performance of the live I/O samples on the supported platforms:

Sample          ZCU102     ZCU104
filter2d        2160p30    2160p30
optical_flow    2160p52    2160p30
stereo          1080p16    720p18
Note: Work to bring the performance on the ZCU104 up to par with the ZCU102 is ongoing.



4 Software Tools and System Requirements


4.1 Hardware


Required:

Optional (needed for live I/O examples):

4.2 Software


Required:

4.3 Licensing



Steps to generate the license:
  1. Log in to the Xilinx licensing site with your work email address (if you do not yet have an account, follow the steps under Create Account).
  2. Under "Create New Licenses", generate a license by checking "SDSoC Environment, 60 Day Evaluation License".
  3. Under System Information, provide the host details.
  4. Proceed until you get the license agreement, and accept it.
  5. The license (.lic file) will be sent to the email address given in the login details.
  6. Copy the license file locally and point the SDSoC license manager at that path.

4.4 Compatibility


The reference design has been tested successfully with the following user-supplied components.

Monitors:

Make/Model                Native Resolution
Viewsonic VP2780-4K       3840x2160
Acer S277HK               3840x2160
Dell U2414H               1920x1080

HDMI Sources:

Make/Model                        Resolutions
Nvidia Shield TV                  3840x2160, 1920x1080
OTT TV BOX M8N                    3840x2160, 1920x1080, 1280x720
Roku 2 XS                         1920x1080, 1280x720
TVix Slim S1 Multimedia Player    1920x1080, 1280x720

USB3 Cameras:

Make/Model           Resolutions
ZED stereo camera    3840x1080, 2560x720
See3CAM_CU30         1920x1080, 1280x720

DisplayPort Cables:



5 Design File Hierarchy


The Zynq UltraScale+ MPSoC reVISION platform zip file is released with the binary and source files required to create Xilinx SDx projects and build the sample applications. The sample applications are built as GStreamer plugins, along with test applications that exercise them. The provided samples include five file I/O examples and four live I/O examples. The file I/O examples read an input image file and produce an output image file, whereas the live I/O examples take live video input from a video source and output live video on a display.

The platform is delivered as one of zcu102_rv_ss.zip, zcu102_es2_rv_ss.zip, or zcu104_rv_ss.zip, depending on your board and silicon revision. This is the directory structure:

zcu102_rv_ss (or zcu102_es2_rv_ss, or zcu104_rv_ss)
├── hw
│   └── zcu102_es2_rv_ss.dsa
├── petalinux_bsp
├── samples
│   ├── file_IO
│   │   ├── bilateral_fileio
│   │   ├── harris_fileio
│   │   ├── opticalflow_fileio
│   │   ├── stereolbm_fileio
│   │   └── warptransform_fileio
│   └── live_IO
│       ├── filter2d
│       ├── optical_flow
│       ├── stereo
│       └── triple
├── sd_card
│   ├── filter2d
│   ├── optical_flow
│   ├── stereo
│   └── triple
├── sw
│   ├── a53_linux
│   │   ├── boot
│   │   ├── image
│   │   ├── inc
│   │   └── qemu
│   ├── prebuilt
│   ├── sysroot
│   └── zcu102_es2_rv_ss.spfm
├── workspaces
│   ├── ws_f2d
│   │   └── gst
│   │       ├── allocators
│   │       ├── apps
│   │       ├── base
│   │       └── plugins
│   ├── ws_of
│   │   └── gst
│   │       ├── allocators
│   │       ├── apps
│   │       ├── base
│   │       └── plugins
│   ├── ws_sv
│   │   └── gst
│   │       ├── allocators
│   │       ├── apps
│   │       ├── base
│   │       └── plugins
│   ├── ws_triple
│   │   └── gst
│   │       ├── allocators
│   │       ├── apps
│   │       ├── base
│   │       └── plugins
│   └── ws_video
│       ├── video_cmd
│       └── video_lib
└── zcu102_es2_rv_ss.xpfm



6 Installation and Operating Instructions


6.1 Board Setup


Required:



Optional:

ZCU102 Jumpers & Switches:


ZCU104 Jumpers & Switches:


6.2 Extract the design zip files


Download and unzip the reference design zip file matching your silicon version (see Section 4.2).
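
For example, on a Linux host (the file name depends on your board and silicon revision, and the destination directory is arbitrary):

unzip zcu102_rv_ss.zip -d ~/revision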

7 Tool Flow Tutorials


The SDx Development Environment, version 2017.4, must be installed and working on your host computer, either the Linux or the Windows version.

This guide walks you through the process of building the sample designs. In section 6.2 above, you unzipped the platform files and noted the exact directory path.

The path to the extracted platform is needed to tell SDx where your custom platform resides. You need to set the SYSROOT environment variable to point to a directory inside the platform. The platform root directory is abbreviated to <platform> below and needs to be replaced with your local path.

Linux:
    export SYSROOT=<platform>/sw/sysroot

Windows:
    Start -> Control Panel -> System -> Advanced -> Environment Variables; create an environment variable SYSROOT with the value <platform>\sw\sysroot

You can also set SYSROOT for all projects in your SDx environment by opening 'Window' → 'Preferences' and adding a 'sysroot' variable under 'C/C++' → 'Build' → 'Environment'.
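
For example, assuming the ZCU102 platform was extracted to /home/user/revision (a hypothetical path; substitute your own):

export SYSROOT=/home/user/revision/zcu102_rv_ss/sw/sysroot
echo $SYSROOT    # quick check that the variable is set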


The platform ships with five file I/O and three live I/O design examples demonstrating popular OpenCV functions accelerated in the programmable logic. A fourth live I/O example shows how to combine the other three live I/O designs into one, allowing all three accelerated functions to reside and run in parallel in the FPGA.

With this release of reVISION, the live I/O sample designs are based on GStreamer. The open-source GStreamer framework code is included with the reVISION platform, and the design examples are built as GStreamer plugins. Code for test applications is provided as well, allowing you to compile apps that set up and run video pipelines using the plugins. Pipelines may be run using the gst-launch-1.0 utility, or by your own app compiled against the GStreamer libraries. An example test app called gstdemo is provided for each of the platform samples. The four sample <name> values are filter2d, optical_flow, stereo, and triple; see the ./workspaces/<name>/gst/apps/<name> directory for each sample.

A GStreamer plugin is a shared library. In the case of the reVISION sample designs, each GStreamer plugin consists of two linked parts, "top" and "bottom", which are separate shared libraries produced by separate project builds. The top part is the GStreamer plugin itself, containing the code for interfacing with the GStreamer framework; see the ./workspaces/<name>/gst/plugins/<name> directory.
The top part links against the bottom part, which contains the code for the HW-accelerated function(s). The bottom project also generates the BOOT.BIN file containing the programmable logic used for the HW function(s). The bottom parts are SDx projects; see the ./samples/live_IO/<name> directory.

7.1 Build the Live_IO Optical Flow sample application


The following steps are virtually identical whether you are running the Linux or Windows version of SDx.

There is a ./workspaces/... folder structure already set up for the four live_IO samples as part of the platform:
├── workspaces
│   ├── ws_f2d
│   ├── ws_of
│   ├── ws_sv
│   ├── ws_triple

You should copy these workspaces to the directory where you want to work. Look at the optical_flow workspace area supplied with the platform: all files under ./gst/ are supplied exactly as shown. The ./opticalflow directory is the SDx project you will create to build the low-level accelerator code; you will create this 'opticalflow' SDx project directly under the ws_of workspace, alongside ./gst/:
├── ws_of
│   ├── gst
│   │   ├── allocators
│   │   │   ├── gstsdxallocator.c
│   │   │   └── gstsdxallocator.h
│   │   ├── apps
│   │   │   └── optical_flow
│   │   │       └── main.c
│   │   ├── base
│   │   │   ├── gstsdxbase.c
│   │   │   └── gstsdxbase.h
│   │   └── plugins
│   │       └── optical_flow
│   │          ├── gstsdxopticalflow.cpp
│   │          └── gstsdxopticalflow.h
│   └── opticalflow
│       └── src
│           ├── optical_flow_sds.cpp
│           └── optical_flow_sds.h

For a given workspace, such as ./ws_of/, this arrangement of subdirectories must be preserved, because the projects depend on each other: they need to know the paths to each other's include files and library files. As long as you keep this structure you are fine, i.e. you may copy the ./ws_of/ tree, with everything just as shown, and put it anywhere you want to work, as in the example below.
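
For example, on Linux (the destination path is just an illustration):

cp -r <platform>/workspaces/ws_of ~/work/ws_of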

If you are working on Linux, there is no restriction on where you put these workspaces. Some people may want to work directly in the ./workspaces/ directory under the platform itself, and others may want to copy it elsewhere so that the original area remains untouched.

If you are working on Windows, there is a restriction: file path lengths are limited to 256 characters. The Xilinx build process creates some very deep directory structures with long names. You are therefore advised to keep the path to the workspace as short as possible, e.g. C:\ws_of\...



7.2 Build the Stereo, the Filter2D, and the Triple sample applications



7.3 Build the File IO sample applications


Example console output from one of the file I/O tests (the sigma_color and sigma_space values indicate the bilateral filter sample):

sigma_color: 7.72211 sigma_space: 0.901059
elapsed time 9133271
Minimum error in intensity = 0
Maximum error in intensity = 1
Percentage of pixels above error threshold = 0.00168789 Count: 35



8 Run the Application


To use the GStreamer plugins, a video pipeline that includes them must be set up and launched. The command-line utility gst-launch-1.0 may be used to do this. Connect your laptop to the target board through a serial terminal emulator and interact with the system via a standard Linux console session (see section 6.1). You may construct video pipeline graphs consisting of one or more sources, zero or more accelerators, and one sink. GStreamer is responsible for initializing the capture, memory-to-memory, and display pipelines, as well as managing the video buffer flow through the pipeline stages.

The gst-launch utility is primarily a debugging tool. The other way to set up and launch your plugins is a compiled application that constructs and runs the pipeline using API calls into the GStreamer libraries. Sample code for test apps that do this is provided; see the ./gst/apps/<name> folder for each of the live_IO samples, and the sketch below.
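
As a rough sketch of what such an app does (this is not the shipped gstdemo source, just the generic GStreamer API pattern, reusing the filter2d pipeline string from section 8.1):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    /* Same pipeline as the gst-launch-1.0 filter2d example in section 8.1. */
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video3 io-mode=dmabuf ! "
        "video/x-raw,width=1920,height=1080,format=YUY2 ! "
        "sdxfilter2d filter-preset=4 filter-mode=1 ! queue ! "
        "kmssink driver-name=xilinx_drm_mixer plane-id=26 sync=false",
        NULL);
    if (!pipeline) return 1;

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream message arrives. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}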

The HDMI and MIPI input channels are themselves hardware pipelines that must be configured. This is done with the video_cmd utility, run once before starting a pipeline that uses that video input. The video_cmd utility is located in the sd_card directory. It is needed only when the MIPI or HDMI input channels are used.

8.1 Run the live_IO sample applications



filter2d case

After building the "bottom" library, your sd_card directory will contain these files:
The "top" projects generate shared libraries and the demo app that you will need to copy into this ./sd_card directory.

opticalflow case

After building the "bottom" library, your sd_card directory will contain these files:
The "top" projects generate shared libraries and the demo app that you will need to copy into this ./sd_card directory.

stereo case

After building the "bottom" library, your sd_card directory will contain these files:
The "top" projects generate shared libraries and the demo app that you will need to copy into this ./sd_card directory.

triple case

After building the "bottom" library, your sd_card directory will contain these files:
The "top" projects generate shared libraries and the demo app that you will need to copy into this ./sd_card directory.


On the target, copy the shared libraries from the SD card into the library search paths. Grouped by sample:

# cd /media/card

filter2d:
# cp libfilter2d.so /usr/lib
# cp libgstsdxfilter2d.so /usr/lib/gstreamer-1.0
# cp libgstsdxbase.so /usr/lib/gstreamer-1.0
# cp libgstsdxallocator.so /usr/lib/gstreamer-1.0

opticalflow:
# cp libopticalflow.so /usr/lib
# cp libgstsdxopticalflow.so /usr/lib/gstreamer-1.0
# cp libgstsdxbase.so /usr/lib/gstreamer-1.0
# cp libgstsdxallocator.so /usr/lib/gstreamer-1.0

stereo:
# cp libstereo.so /usr/lib
# cp libgstsdxstereo.so /usr/lib/gstreamer-1.0
# cp libgstsdxbase.so /usr/lib/gstreamer-1.0
# cp libgstsdxallocator.so /usr/lib/gstreamer-1.0

triple:
# cp libtriple.so /usr/lib
# cp libgstsdxfilter2d.so /usr/lib/gstreamer-1.0
# cp libgstsdxopticalflow.so /usr/lib/gstreamer-1.0
# cp libgstsdxstereo.so /usr/lib/gstreamer-1.0
# cp libgstsdxbase.so /usr/lib/gstreamer-1.0
# cp libgstsdxallocator.so /usr/lib/gstreamer-1.0
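
After copying, you can confirm that GStreamer picks up a plugin before building a pipeline, e.g.:

# gst-inspect-1.0 sdxfilter2d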

Media pipeline initialization

Use the video_cmd utility to list the available video sources and to configure the media pipeline. This needs to be done before running the GStreamer demo app or the gst-launch utility.
# video_cmd -S
    VIDEO SOURCE        ID         VIDEO DEVNODE
    MIPI CSI2 Rx        0       /dev/video3
      HDMI Input        1       /dev/video2
      USB Webcam        2       /dev/video4
Virtual Video De        3       /dev/video0

The MIPI, HDMI, and vivid video sources support the YUY2 and UYVY pixel formats. For USB, the supported pixel format depends on the camera firmware: e.g. the e-con USB camera supports only UYVY, whereas the ZED stereo camera supports only YUYV (identical to YUY2). Make sure you set the input parameters correctly when configuring the media pipeline.

Example: configure the MIPI source (ID 0) for 1920x1080 YUY2 input:

# video_cmd -s 0 -i 1920x1080@YUY2 -X

Example: configure the HDMI source (ID 1) for 1920x1080 UYVY input:

# video_cmd -s 1 -i 1920x1080@UYVY -X
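
The USB sources follow the same pattern. For instance, for the ZED camera (source ID 2 above), which supplies side-by-side stereo frames in YUYV/YUY2, a plausible invocation would be (illustrative; check the camera resolutions listed in section 4.4):

# video_cmd -s 2 -i 2560x720@YUY2 -X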

Display controller initialization

Use the video_cmd utility to initialize the display controller. The command should be run in the background (trailing &), as the display mode would otherwise be reset to its original value when video_cmd exits. This needs to be done before running the GStreamer demo app or the gst-launch utility.

# video_cmd -d 0 -o 1920x1080 &

# video_cmd -d 1 -o 1920x1080 &
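
If your monitor supports it, the same command can select other display modes, e.g. 2160p (not usable with the stereo sample, which is limited to 1080p output):

# video_cmd -d 0 -o 3840x2160 &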

GStreamer application

To create and run the GStreamer pipeline, you can either use the gst demo applications that are compiled from source, or use the prebuilt gst-launch-1.0 utility.

Run the demo app from the sample's sd_card directory:

# ./gstdemo

Alternatively, launch the pipelines by hand. filter2d with the MIPI source (/dev/video3):

gst-launch-1.0 \
    v4l2src device=/dev/video3 io-mode=dmabuf ! \
    "video/x-raw, width=1920, height=1080, format=YUY2" ! \
    sdxfilter2d filter-preset=4 filter-mode=1 ! queue ! \
    kmssink driver-name=xilinx_drm_mixer plane-id=26 sync=false

optical_flow with the HDMI source (/dev/video2):

gst-launch-1.0 \
    v4l2src device=/dev/video2 io-mode=dmabuf ! \
    "video/x-raw, width=1920, height=1080, format=YUY2" ! \
    sdxopticalflow filter-mode=1 ! queue ! \
    kmssink driver-name=xilinx_drm_mixer plane-id=26 sync=false

stereo with the ZED USB camera (/dev/video4, side-by-side 3840x1080 input):

gst-launch-1.0 \
    v4l2src device=/dev/video4 io-mode=dmabuf ! \
    "video/x-raw, width=3840, height=1080, format=YUY2" ! \
    sdxstereo filter-mode=1 config-filename=/media/card/SN12263.conf ! queue ! \
    kmssink driver-name=xilinx_drm_mixer plane-id=26 sync=false

filter2d again, wrapped in fpsdisplaysink to print the measured frame rate:

gst-launch-1.0 \
    v4l2src device=/dev/video3 io-mode=dmabuf ! \
    "video/x-raw, width=1920, height=1080, format=YUY2" ! \
    sdxfilter2d filter-preset=4 filter-mode=1 ! queue ! \
    fpsdisplaysink video-sink="kmssink driver-name=xilinx_drm_mixer plane-id=26" sync=false text-overlay=false -v

8.2 GStreamer elements


These pipelines use the elements v4l2src, sdxfilter2d (or sdxopticalflow, or sdxstereo), queue, and kmssink. You can display properties and other information about any of these elements using the GStreamer utility gst-inspect-1.0.

v4l2src

# gst-inspect-1.0 v4l2src


queue

This is not strictly necessary, but using it gives better performance, i.e. the highest possible frame rate: the queue element introduces a thread boundary, decoupling the upstream capture stages from the downstream processing and display stages so they can run concurrently.

kmssink

To inspect the kmssink plugin:
# gst-inspect-1.0 kmssink

sdx<accelerator>

To inspect the sdxfilter2d plugin:
# gst-inspect-1.0 sdxfilter2d

To inspect the sdxopticalflow plugin:
# gst-inspect-1.0 sdxopticalflow

To inspect the sdxstereo plugin:
# gst-inspect-1.0 sdxstereo

8.3 Particularities about the Stereo demo


The stereo vision demo is special in several ways. First, you MUST use the ZED stereo camera connected to the USB video input. Second, and particular to this app, the input width is twice the output width: the input consists of two images side by side, the synchronized left and right stereo pair supplied by the camera. Two cases are possible: 2560x720 in to 1280x720 out, and 3840x1080 in to 1920x1080 out. The default 3840x2160 output resolution is not supported by the Stereo Vision app.

The other special thing about this app is that a configuration file corresponding to the camera connected to your system must be used. Each StereoLabs ZED camera has a unique parameters file associated with it. This text file comes from StereoLabs and must be present on the SD card for the Stereo Vision demo to work properly. You need the file unique to your camera, identified by its serial number (found on the ZED camera box and also on a black tag near the USB plug of the camera itself). This number will be, e.g., S/N 000012345; the parameter file for that camera would be named SN12345.conf. To download your parameter file, enter this URL into your browser (using your serial number in place of 12345):
http://calib.stereolabs.com/?SN=12345
This will download your configuration file to your computer.
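
Alternatively, fetch it from a command line on any machine with network access (again substituting your own serial number):

wget "http://calib.stereolabs.com/?SN=12345" -O SN12345.conf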

The stereo block-matching algorithm calculates depth based on binocular parallax, similar to the way human eyes perceive depth. The depth map is coded in false colors: objects far away appear deep blue, and ever-closer objects appear in rainbow succession green, yellow, orange, red, purple, and finally white at about two feet from the camera in the 720p case, and about five feet away in the 1080p case. Objects closer than that cannot be tracked, and smooth areas with no texture in the image cannot be tracked either; both show up as black. Areas with a lot of detail (especially lots of vertical edges) are tracked best. It is normal for a large area on the left to be black: this band is 128 pixels wide, representing the range of the horizontal search for the best match between the right and left binocular images.
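
For background, the relationship between disparity and depth is the standard stereo triangulation formula (general computer vision math, not taken from the design sources):

\[ Z = \frac{f\,B}{d}, \qquad Z_{\min} = \frac{f\,B}{d_{\max}} = \frac{f\,B}{128} \]

where Z is the distance to the object, f the focal length in pixels, B the baseline between the two lenses, and d the disparity in pixels. Capping the search at 128 pixels caps the maximum disparity, which is what sets the minimum trackable distance; f (in pixels) is larger at 1080p than at 720p, consistent with the near limit moving farther out at the higher resolution.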

9 Platform Details


9.1 Vivado Hardware Design


The Vivado hardware design is packaged inside the DSA located at zcu10[2|4]_[es2_]rv_ss/hw/zcu10[2|4]_[es2_]rv_ss.dsa. The DSA also includes the hpfm file that describes the available AXI interfaces, clocks, resets, and interrupts. To open the hardware design in Vivado, run the following command from the Tcl console:
% open_dsa zcu10[2|4]_[es2_]rv_ss/hw/zcu10[2|4]_[es2_]rv_ss.dsa

9.2 PetaLinux BSP


The PetaLinux BSP is located at zcu10[2|4]_[es2_]rv_ss/sw/petalinux_bsp. The hdf file exported from the corresponding Vivado project (see 9.1) is available in the project-spec/hw-description/ subfolder inside the PetaLinux BSP. To configure and build the PetaLinux BSP, run the following commands:
% petalinux-config --oldconfig
% petalinux-build
The generated output products are located inside the images/linux/ subfolder. The relevant files that are packaged as part of the platform are:

The generated sysroot is located at build/tmp/sysroots/plnx_aarch64.
Note: The tmp directory might be relocated to a different folder, especially if your PetaLinux project is located on an NFS mount. Please check your PetaLinux configuration.

9.3 Video Command Line Utility


The Xilinx video_cmd utility is used to initialize the media pipeline of an associated V4L2 capture device. A prebuilt version of this utility is available at zcu10[2|4]_[es2_]rv_ss/sw/a53_linux/image/video_cmd and will be automatically placed in the sd_card folder of the generated SDx project.

The video_cmd and video_lib sources are provided as XSDK projects and are located at zcu10[2|4]_[es2_]rv_ss/workspaces/ws_video. Perform the following steps to build the application using the SDx GUI:



10 Other Information


10.1 Known Issues



10.2 Limitations





11 Support


To obtain technical support for this reference design, go to the:



12 References


Additional material that is not hosted on the wiki: