This page provides instructions for building the Vitis-AI Library (v1.3.2) sample applications from source on a ZCU10x evaluation board or KV260 kit running Certified Ubuntu 20.04 LTS for Xilinx Devices.
This flow has been tested with the Ubuntu 22.04 image on the ZCU102 using Vitis AI 2.5. Changes are noted in the steps below.
All of the source code is available on the Xilinx GitHub. After you clone the Vitis AI repository, switch to the v1.3.2 tag. Make sure you’re in your home directory before cloning.
$ cd ~
$ git clone https://github.com/Xilinx/Vitis-AI.git
$ cd Vitis-AI
$ git checkout tags/v1.3.2
For Vitis AI 2.5, use tags/v2.5
Build the Application Code
Change into the directory of the sample application you want to build, then build it using the provided build.sh script.
In releases of the repository prior to version 1.4, the build.sh scripts do not include the required opencv4 include path (this is fixed in Vitis AI 1.4). For sample applications other than facedetect, be sure to update the build.sh script to add the opencv4 include path before building. This can be done quickly with the following command:
$ sed -i 's/-std=c++17/-std=c++17 -I\/usr\/include\/opencv4/g' build.sh
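If you plan to build several samples, the same fix can be applied to every sample's build.sh in one pass. The sketch below is a convenience helper, not part of the Vitis AI repository; the samples directory path is the one used elsewhere on this page, so adjust it to your checkout:

```shell
# Convenience sketch (not an official tool): append the opencv4 include path
# after every -std=c++17 flag in each sample's build.sh.
patch_samples() {
  samples_dir=$1
  for script in "$samples_dir"/*/build.sh; do
    [ -f "$script" ] || continue   # skip if the glob matched nothing
    sed -i 's|-std=c++17|-std=c++17 -I/usr/include/opencv4|g' "$script"
  done
}

# Usage on the target:
#   patch_samples ~/Vitis-AI/demo/Vitis-AI-Library/samples
```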
For example, to build the facedetect sample application:
$ cd demo/Vitis-AI-Library/samples/facedetect
$ ./build.sh
For Vitis AI 2.5, the build directory is examples/Vitis-AI-Library/samples/facedetect
Download a Model
The sample applications require a compatible machine learning model in order to run. Refer to the readme file in the application directory to determine which models are compatible with each sample application. The models are available for download at http://xilinx.com as part of the Xilinx Model Zoo. To find the download link for the model you need, use the table below:
The models listed are compatible with the out-of-box (OOB) bitstreams provided with the Certified Ubuntu images.
For Vitis AI 2.5, the model filename is: densebox_640_360-zcu102_zcu104_kv260-r2.5.0.tar.gz
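Once the archive is downloaded to the target, it needs to be unpacked. The helper below is a convenience sketch (not an official tool); it installs the model under /usr/share/vitis_ai_library/models, which, as noted later on this page, is one of the locations where the library resolves a model by name:

```shell
# Convenience sketch: unpack a Model Zoo archive and copy the model
# directory to the path where the library looks models up by name.
install_model() {
  archive=$1            # e.g. densebox_640_360-zcu102_zcu104_kv260-r2.5.0.tar.gz
  models_dir=${2:-/usr/share/vitis_ai_library/models}
  tar -xzf "$archive"
  # The archive unpacks to a directory named after the model
  model_dir=$(tar -tzf "$archive" | head -n 1 | cut -d/ -f1)
  mkdir -p "$models_dir"
  cp -r "$model_dir" "$models_dir/"
}

# Usage on the target (run with sufficient privileges for /usr/share):
#   install_model densebox_640_360-zcu102_zcu104_kv260-r2.5.0.tar.gz
```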
Run the Sample Application
The sample applications can take input from either a JPEG image or a USB camera. HDMI input is not supported by the sample apps.
To determine which /dev/videoX interface your USB camera is detected at, you can use the following command (v4l2-ctl is provided by the v4l-utils package):
$ v4l2-ctl --list-devices
# Use a jpeg image as input (ex: image.jpeg, result will be in image_result.jpeg)
$ ./test_jpeg_facedetect ~/densebox_640_360/densebox_640_360.xmodel image.jpeg
# Use USB camera as input (2 is the /dev/videoX index that the camera is detected at)
$ ./test_video_facedetect ~/densebox_640_360/densebox_640_360.xmodel 2
# Use a video file as input (-t: number of threads)
$ ./test_video_facedetect ~/densebox_640_360/densebox_640_360.xmodel video_input.webm -t 8
# Run the accuracy test - Input is a file with a list of images, results are in results.txt
$ ./test_accuracy_facedetect ~/densebox_640_360/densebox_640_360.xmodel file_list.txt results.txt
# Run the performance test - Input is a file with a list of images, -t: Number of threads, -s: Number of seconds to run
$ ./test_performance_facedetect ~/densebox_640_360/densebox_640_360.xmodel file_list.txt -t 4 -s 10
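The accuracy and performance tests both take a file listing the input images. A small helper like the one below can generate that list from a directory of images; this is a hypothetical sketch, assuming the expected format is one image path per line:

```shell
# Hypothetical helper (assumes the list format is one image path per line):
# generate a file list from a directory of .jpg images.
make_file_list() {
  img_dir=$1
  out=$2
  : > "$out"                        # truncate/create the output file
  for img in "$img_dir"/*.jpg; do
    [ -f "$img" ] || continue       # skip if the glob matched nothing
    printf '%s\n' "$img" >> "$out"
  done
}

# Usage:
#   make_file_list ~/samples/facedetect file_list.txt
```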
There are three ways to specify the model:
Place the model directory in /usr/share/vitis_ai_library/models/ and use the model name (ex: densebox_640_360)
Place the model directory in the current directory and use the model name
Directly reference the .xmodel file (shown above)
For more information about the specific test applications supported for each sample, please refer to the readme in the demo/Vitis-AI-Library/samples/<sample app> directory
Vitis AI provides a test image archive that can be downloaded to the target and used to run the tests above. To download the sample image package and extract it to the samples directory in your home directory, use the following commands:
$ cd ~
$ wget https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r1.3.1_images.tar.gz -O ~/vitis_ai_library_r1.3.1_images.tar.gz
$ tar -xzf vitis_ai_library_r1.3.1_images.tar.gz
To use these images and file lists, change into the subdirectory of the test you want to run, and execute the test app from there. For example, to use the facedetect sample images, you should run your test from the ~/samples/facedetect directory.