
This page is part of an early access evaluation of the Certified Ubuntu 20.04 LTS for Xilinx Devices. Public access is coming soon.

This page provides instructions for building the Vitis-AI Library (v1.3.2) sample applications from source on a ZCU10x evaluation board running Certified Ubuntu 20.04 LTS for Xilinx Devices.

Building the Vitis AI Library sample applications is straightforward; follow the steps below to build them yourself.

Install the Necessary Packages

Before building the sample applications, install the required dependency packages:

$ sudo apt -y update
$ sudo apt -y install libopencv-dev
$ sudo apt -y install libgoogle-glog-dev

Get the Source Code

All of the source code is available on the Xilinx GitHub. After you clone the Vitis AI repository, switch to the v1.3.2 tag:

$ git clone
$ cd Vitis-AI
$ git checkout tags/v1.3.2

Build the Application Code

Switch to the directory of the sample application you want to build, then build it using the provided script.

In releases of the repository prior to version 1.4, the opencv4 include paths in the build script are incorrect. This will be fixed in Vitis AI 1.4.
For sample applications other than facedetect, update the script to add the necessary opencv4 include path before building. This can be done quickly with the following command:

$ sed -i 's/-std=c++17/-std=c++17 -I\/usr\/include\/opencv4/g'
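The effect of that substitution can be checked on a scratch file first. This is a minimal sketch: `build.sh` is a hypothetical file name standing in for the sample's actual build script, and the compile line is invented for illustration.

```shell
# Demonstrate the sed substitution on a scratch file in a temp directory.
# "build.sh" is a stand-in name; apply the sed command to the real build
# script in the sample application directory.
cd "$(mktemp -d)"
printf '%s\n' '$CXX -std=c++17 -O2 -o test_jpeg_facedetect test_jpeg_facedetect.cpp' > build.sh
sed -i 's/-std=c++17/-std=c++17 -I\/usr\/include\/opencv4/g' build.sh
# build.sh now reads:
# $CXX -std=c++17 -I/usr/include/opencv4 -O2 -o test_jpeg_facedetect test_jpeg_facedetect.cpp
cat build.sh
```

The substitution simply appends the opencv4 include directory after every occurrence of `-std=c++17`, so it is safe to run it once per script.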

For example, to build the facedetect sample application:

$ cd demo/Vitis-AI-Library/samples/facedetect
$ ./

Download a Model

The sample applications require a compatible machine learning model in order to run. Refer to the readme file in the application directory to determine which models are compatible with each sample application.

For example, to download the densebox_640_360 model and extract it into the home directory:

$ wget -O ~/densebox_640_360-zcu102_zcu104-r1.3.1.tar.gz
$ tar -xzf ~/densebox_640_360-zcu102_zcu104-r1.3.1.tar.gz -C ~

Run the Sample Application

The sample applications can take input from either a JPEG image or a USB camera. HDMI input is not supported by the sample apps.

For help determining which /dev/videoX interface to use for the USB camera tests, run:

$ v4l2-ctl --list-devices

# Use a JPEG image as input (ex: image.jpeg; the result is written to image_result.jpeg)
$ ./test_jpeg_facedetect ~/densebox_640_360/densebox_640_360.xmodel image.jpeg
# Use a USB camera as input (2 is the /dev/videoX index at which the camera is detected)
$ ./test_video_facedetect ~/densebox_640_360/densebox_640_360.xmodel 2

# Run the accuracy test - input is a file with a list of images; results are written to results.txt
$ ./test_accuracy_facedetect ~/densebox_640_360/densebox_640_360.xmodel file_list.txt results.txt

# Run the performance test - input is a file with a list of images; -t: number of threads, -s: number of seconds to run
$ ./test_performance_facedetect ~/densebox_640_360/densebox_640_360.xmodel file_list.txt -t 4 -s 10

There are three ways to specify the model:

  1. Place the model directory in /usr/share/vitis_ai_library/models/ and use the model name (ex: densebox_640_360)

  2. Place the model directory in the current directory and use the model name

  3. Directly reference the .xmodel file (shown above)
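
The lookup order above can be sketched as a small shell check. This is a sketch only: the two search locations are taken from the list above, and the snippet merely locates the .xmodel file; the library's actual resolution logic may differ.

```shell
# Check where a model named densebox_640_360 would be found, following the
# order described above: the system model directory first, then the current
# directory. This only locates the .xmodel file; it does not run any app.
MODEL=densebox_640_360
for dir in /usr/share/vitis_ai_library/models "$PWD"; do
    if [ -f "$dir/$MODEL/$MODEL.xmodel" ]; then
        echo "found: $dir/$MODEL/$MODEL.xmodel"
        break
    fi
done
```

If neither location holds the model, pass the full path to the .xmodel file directly, as in the examples above.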

For more information about the specific test applications supported by each sample, refer to the readme in the demo/Vitis-AI-Library/samples/<sample app> directory.
