This page provides instructions for building the Vitis-AI Library (v1.3.2) sample applications from source on a ZCU10x evaluation board or KV260 kit running Certified Ubuntu 20.04 LTS for Xilinx Devices.

Building the Vitis AI Library sample applications is straightforward; the sections below walk through the process step by step.

Install the Necessary Packages

Before building the sample applications, install the required dependency packages:

$ sudo apt -y update
$ sudo apt -y install libopencv-dev
$ sudo apt -y install libgoogle-glog-dev

Get the Source Code

All of the source code is available on the Xilinx GitHub. After you clone the Vitis AI repository, switch to the v1.3.2 tag. Make sure you’re in your home directory before cloning.

$ cd ~
$ git clone https://github.com/Xilinx/Vitis-AI.git
$ cd Vitis-AI
$ git checkout tags/v1.3.2

Build the Application Code

Depending on which sample application you want to build, switch to that application's directory and build it using the provided script.

In releases of the repository prior to version 1.4, the paths for opencv4 in the build script are incorrect. This will be fixed in Vitis AI 1.4.
For sample applications other than facedetect, be sure to update the build script to include the necessary opencv4 include path before building. This can be done quickly with the following command, run from the sample application directory:

$ sed -i 's/-std=c++17/-std=c++17 -I\/usr\/include\/opencv4/g' build.sh
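If you want to see what the substitution does before editing the real build script, the same sed expression can be exercised on a scratch file (the flag line below is illustrative, not copied from the actual script):

```shell
# Write a compile-flags line similar to the one in the sample build scripts,
# apply the same in-place substitution, and inspect the result.
tmpfile=$(mktemp)
echo 'CXX_FLAGS="-std=c++17 -O2"' > "$tmpfile"
sed -i 's/-std=c++17/-std=c++17 -I\/usr\/include\/opencv4/g' "$tmpfile"
cat "$tmpfile"   # CXX_FLAGS="-std=c++17 -I/usr/include/opencv4 -O2"
rm -f "$tmpfile"
```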

For example, in order to build the facedetect sample application:

$ cd demo/Vitis-AI-Library/samples/facedetect
$ ./build.sh

Download a Model

The sample applications require a compatible machine learning model in order to run. Refer to the readme file in the application directory to determine which models are compatible with each sample application. The models are available for download as part of the Xilinx Model Zoo. To find the download link for the model you need, use the table below:


Model filename                                               Compatibility
<model name>-zcu102_zcu104-r1.3.1.tar.gz                     Compatible with the OOB bitstreams provided with the Certified Ubuntu Images
<model name>-DPUCZDX8G_ISA0_B3136_MAX_BG2-1.3.1-r241.tar.gz  Compatible with the Accelerated Application firmware from the 2020.2.2 KV260 Release

The models for the KV260 are not documented in the Xilinx Model Zoo since they were not available at the time that the original set of v1.3.1 models was released.

For example, to download the densebox_640_360 model for the zcu102 and extract it in the home directory:

$ wget -O ~/densebox_640_360-zcu102_zcu104-r1.3.1.tar.gz
$ tar -xzf ~/densebox_640_360-zcu102_zcu104-r1.3.1.tar.gz -C ~
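The `tar -xzf … -C ~` pattern above simply unpacks the model directory under your home directory. As a generic illustration of the `-C` behavior (all paths below are scratch locations created only for the demonstration):

```shell
# Pack a dummy model directory, then unpack it elsewhere with -C,
# mirroring the real model tarball workflow above.
src=$(mktemp -d)
mkdir -p "$src/densebox_640_360"
touch "$src/densebox_640_360/densebox_640_360.xmodel"
tar -czf "$src/model_demo.tar.gz" -C "$src" densebox_640_360

dest=$(mktemp -d)
tar -xzf "$src/model_demo.tar.gz" -C "$dest"
ls "$dest/densebox_640_360"    # densebox_640_360.xmodel
rm -rf "$src" "$dest"
```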

Run the Sample Application

The sample applications can take input from either a jpeg image or a USB camera. HDMI input is not supported by the sample apps.

For help determining which /dev/videoX interface corresponds to your USB camera, use the following command:

$ v4l2-ctl --list-devices

# Use a jpeg image as input (ex: image.jpeg, result will be in image_result.jpeg)
$ ./test_jpeg_facedetect ~/densebox_640_360/densebox_640_360.xmodel image.jpeg

# Use a USB camera as input (2 is the /dev/videoX index at which the camera is detected)
$ ./test_video_facedetect ~/densebox_640_360/densebox_640_360.xmodel 2

# Use a video file as input (-t: number of threads)
$ ./test_video_facedetect ~/densebox_640_360/densebox_640_360.xmodel video_input.webm -t 8

# Run the accuracy test - input is a file with a list of images, results are written to results.txt
$ ./test_accuracy_facedetect ~/densebox_640_360/densebox_640_360.xmodel file_list.txt results.txt

# Run the performance test - input is a file with a list of images (-t: number of threads, -s: number of seconds to run)
$ ./test_performance_facedetect ~/densebox_640_360/densebox_640_360.xmodel file_list.txt -t 4 -s 10

There are three ways to specify the model:

  1. Place the model directory in /usr/share/vitis_ai_library/models/ and use the model name (ex: densebox_640_360)

  2. Place the model directory in the current directory and use the model name

  3. Reference the .xmodel file directly by path (as shown above)

For more information about the specific test applications supported for each sample, please refer to the readme in the demo/Vitis-AI-Library/samples/<sample app> directory.

For more information about running the sample apps, please refer to

Sample Images and Videos

Vitis AI provides a test image archive that can be downloaded to the target and used to run the tests above. To download the sample image package and extract it to the samples directory in your home directory, use the following commands:

$ wget -O ~/vitis_ai_library_r1.3.1_images.tar.gz
$ tar -xzf ~/vitis_ai_library_r1.3.1_images.tar.gz -C ~

To use these images and file lists, change into the subdirectory of the test you want to run, and execute the test app from there. For example, to use the facedetect sample images, run your test from the ~/samples/facedetect directory.