Vitis-AI 3.0 DPU TRD for QNX 7.1

This page describes the process for installing, building, and testing support for the Vitis-AI 3.0 Deep Learning Processing Unit (DPU) under the BlackBerry QNX RTOS. Both the Vitis and Vivado flows are described below.

The content of this wiki page is not supported by Xilinx Technical Support. Please contact your AMD or QNX FAE for more information.

Introduction

BlackBerry QNX provides support for the Zynq UltraScale+ DPU when using their ZCU102 BSP for the QNX Neutrino RTOS. This support is enabled by way of updates to the “QNX® SDP 7.1 Xilinx Vitis-AI” package as referenced in the Required QNX RTOS Software Packages section below. Using the instructions below, support for other boards and custom designs can be added as well.

The process for building and evaluating the packages to exercise the DPU and associated examples of C++ and Python based samples with static and video input images is described in the sections below.

The instructions below are being carried out on a Windows host computer. If you are using a Linux host, your steps might look a bit different.

Required QNX Software

It is assumed that the user has installed the following tools from QNX:

For general information about the ZCU102 BSP that is used for this project, you should refer to the ZCU102 BSP User Guide.

QNX Software Center is used to install all of the QNX software and manage your QNX license keys.

Required QNX RTOS Software Packages

To build the QNX reference design for the ZCU102, the following runtime software packages are required from Blackberry QNX.

The packages are distributed through the QNX Software Center application and are available upon request to current QNX customers. Please contact your QNX or Xilinx FAE for assistance.

Make sure you also update your QNX SDP 7.1 installation to the latest version.

Note that some of the packages are marked “Experimental” or are grouped under “Partner” software; make sure to enable visibility of both Experimental and Partner packages in the QNX Software Center application so these packages appear.

| Package Type | Package Name | Build ID (min) | Version |
| --- | --- | --- | --- |
| QNX Software Development Platform 7.1 | QNX® SDP 7.1 Neutrino RTOS and QNX Momentics Tool Suite | 472 | |
| BSP | QNX® SDP 7.1 BSP for Xilinx Zynq UltraScale+ MPSoC ZCU102 | 16 | |
| AMD-Xilinx | QNX® SDP 7.1 Xilinx Vitis-AI | 807 | Vitis-AI 3.0 |
| AMD-Xilinx | QNX® SDP 7.1 Xilinx XRT | 807 | XRT 2.14.354 (2022.2) |
| Graphics | QNX SDP 7.1 Screen Board Support ZCU10x | 535 | |
| OSR | Numpy | 30 | Request latest from BlackBerry |
| OSR | OpenCV | 40 | Request latest from BlackBerry |
| OS utilities | Python 3.11 | 61 | |

Once the packages are installed, the Installed tab in the QNX Software Center should include all of the packages shown below, at a minimum. In this example, a new QNX SDP 7.1 installation named “qnx710_vai3.0_2” was used:

Xilinx Vitis-AI Design Files

The reference designs used for validation can be found here for both the Vitis and Vivado flows for edge devices:

For the Vitis flow, this tutorial assumes you will use the pre-built SD image linked below, so you DO NOT need to download the reference design sources unless you want to build and customize the design from source. If you do want to build the design, download the reference design for the IP named DPUCZDX8G for the “MPSoC / Kria K26” platforms.

For the Vivado flow, there is not a pre-built image available, so you DO need to download the reference design to build the project.

See the Vitis Flow and Vivado Flow sections below for more details.

Vitis Flow

When using the Vitis flow, a Vitis acceleration platform is used (in this case, the 2022.2 ZCU102 Vitis Base Platform), and the Vitis linker is used to insert the DPU IP as a “compute unit” into the Vitis platform shell.

For this flow, there is a pre-built SD card image available here:

This image can be used to jump-start development, as you do not need to rebuild BOOT.BIN. You can simply write the image to an SD card and then update the FAT partition after you have completed the QNX BSP build steps below. Alternatively, you can mount the FAT partition of the WIC image, extract BOOT.BIN and dpu.xclbin, and copy those to the FAT partition of your SD card.
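For example, on a Linux host the two files can be extracted from the WIC image using a loop device. The image file name and staging directory below are illustrative:

    # Attach the WIC image with partition scanning (-P); -f picks a free loop device
    sudo losetup -fP --show xilinx-zcu102-dpu-trd.wic   # prints e.g. /dev/loop0
    # Mount the first (FAT) partition and copy out the two files
    sudo mount /dev/loop0p1 /mnt
    cp /mnt/BOOT.BIN /mnt/dpu.xclbin ~/sd-staging/
    sudo umount /mnt
    sudo losetup -d /dev/loop0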

Example using BalenaEtcher to burn the SD image to a card:

After burning the SD image, set the card aside for use later. After we build QNX, we will overlay the SD image with some QNX-specific files.

Vivado Flow

When using the Vivado flow, a traditional Vivado design is used, and the DPU IP is added directly to the Vivado block design. No Vitis platform is required.

For this flow, you will need to generate the required BOOT.BIN, as a pre-built image is not available. To use the pre-built .xsa without running the Vivado tools, you can simply build the PetaLinux project as described in the Vivado flow documentation:

Limitations

When using the Vivado flow, two XRT libraries are still required (see steps below) because the VART library is common between the two flows. Therefore, the XRT package mentioned above must be installed for either use case.

Building the Vitis AI Enabled QNX BSP

The QNX BSP Project can be built from the command line or using the QNX Momentics IDE. The Momentics flow running under Windows will be described below. See the ZCU102 BSP User Guide for more information on command line flows.

Create the ZCU102 BSP Project

  1. Launch Momentics and create a new workspace.

  2. Create a new project based on the ZCU102 BSP 

    • File->Import



    • Choose QNX->QNX Source Package and BSP. Click Next

    • Browse for the ZCU102 BSP ZIP Archive

      • <QNX SDP install path>\bsp\BSP_xilinx-zynq-zcu102_br-710_be-710_SVN964557_JBN16.zip

      • Click Open, then Next twice

      • Give the project a suitable name

      • Click Finish

    • You should now have a new Project open in the Momentics IDE similar to the following:

Modify the Build File

The QNX .build file is used to generate the QNX Initial File System (QNX-IFS) and specify startup scripts. It must be modified to include the necessary components to use Vitis AI and XRT. Each individual section will be covered here, and a full example .build will be provided at the end of the section. For this example, we’ll use zcu102-graphics.build as a starting point. This file is found in the images sub-directory of the QNX BSP project.

In the images directory, using the Momentics IDE Project Explorer:

  • rename zcu102.build to zcu102.build.original (right click on the file name and select “rename”)

  • copy zcu102-graphics.build to zcu102.build

Open zcu102.build in a text editor (double click on it in the Project Explorer) and make the additions in the sections below.

Add Vitis-AI Support

The following steps are required for both Vitis and Vivado flows.

  1. Add VAI libraries
    Several libraries need to be installed in the root file system. Add the following lines to the build file after the .script section. If the Vitis-AI Library layer is not needed, the lines after # Vitis-AI Library below can be excluded to save space, or only certain models can be included.

    ###########################################################################
    ## Vitis-AI
    ###########################################################################
    # Utilities & Drivers
    /bin/show_dpu=show_dpu
    # Vitis-AI libs
    /usr/lib/libjson-c.so=libjson-c.so
    /usr/lib/libprotobuf.so=libprotobuf.so
    /usr/lib/libprotoc.so=libprotoc.so
    /usr/lib/libc++.so=libc++.so
    /usr/lib/libiconv.so=libiconv.so
    /usr/lib/libintl.so=libintl.so
    /usr/lib/libboost_system.so=libboost_system.so
    /usr/lib/libboost_atomic.so=libboost_atomic.so
    /usr/lib/libboost_filesystem.so=libboost_filesystem.so
    /usr/lib/libglog.so=libglog.so
    /usr/lib/libgflags.so=libgflags.so
    /lib/libcatalog.so=libcatalog.so
    /lib/libunilog.so=libunilog.so
    /lib/libtarget_factory.so=libtarget_factory.so
    /lib/libxir.so=libxir.so
    /lib/libvart.so=libvart.so
    /lib/libvart-dpu-runner.so=libvart-dpu-runner.so
    # Vitis-AI Library libs
    /lib/libvitis_ai_library-extra_ops.so=libvitis_ai_library-extra_ops.so
    /lib/libvitis_ai_library-model_config.so=libvitis_ai_library-model_config.so
    /lib/libvitis_ai_library-math.so=libvitis_ai_library-math.so
    /lib/libvitis_ai_library-runner_helper.so=libvitis_ai_library-runner_helper.so
    /lib/libvitis_ai_library-dpu_task.so=libvitis_ai_library-dpu_task.so
    /lib/libvitis_ai_library-cpu_task.so=libvitis_ai_library-cpu_task.so
    /lib/libvitis_ai_library-graph_runner.so=libvitis_ai_library-graph_runner.so
    /lib/libvitis_ai_library-xmodel_jit.so=libvitis_ai_library-xmodel_jit.so
    /lib/libvitis_ai_library-xmodel_image.so=libvitis_ai_library-xmodel_image.so
    /lib/libvitis_ai_library-xnnpp.so=libvitis_ai_library-xnnpp.so
    # Vitis-AI Library model-specific libraries
    /lib/libvitis_ai_library-classification.so=libvitis_ai_library-classification.so
    /lib/libvitis_ai_library-tfssd.so=libvitis_ai_library-tfssd.so
    /lib/libvitis_ai_library-facedetect.so=libvitis_ai_library-facedetect.so
    /lib/libvitis_ai_library-facefeature.so=libvitis_ai_library-facefeature.so
    /lib/libvitis_ai_library-lanedetect.so=libvitis_ai_library-lanedetect.so
    /lib/libvitis_ai_library-yolov2.so=libvitis_ai_library-yolov2.so
    /lib/libvitis_ai_library-yolov3.so=libvitis_ai_library-yolov3.so
    /lib/libvitis_ai_library-RGBDsegmentation.so=libvitis_ai_library-RGBDsegmentation.so
    /lib/libvitis_ai_library-facelandmark.so=libvitis_ai_library-facelandmark.so
    /lib/libvitis_ai_library-facequality5pt.so=libvitis_ai_library-facequality5pt.so
    /lib/libvitis_ai_library-ssd.so=libvitis_ai_library-ssd.so
    /lib/libvitis_ai_library-segmentation.so=libvitis_ai_library-segmentation.so
    /lib/libvitis_ai_library-covid19segmentation.so=libvitis_ai_library-covid19segmentation.so
    /lib/libvitis_ai_library-3Dsegmentation.so=libvitis_ai_library-3Dsegmentation.so
    /lib/libvitis_ai_library-refinedet.so=libvitis_ai_library-refinedet.so
    /lib/libvitis_ai_library-openpose.so=libvitis_ai_library-openpose.so
    /lib/libvitis_ai_library-hourglass.so=libvitis_ai_library-hourglass.so
    /lib/libvitis_ai_library-posedetect.so=libvitis_ai_library-posedetect.so
    /lib/libvitis_ai_library-reid.so=libvitis_ai_library-reid.so
    /lib/libvitis_ai_library-rcan.so=libvitis_ai_library-rcan.so
    /lib/libvitis_ai_library-reidtracker.so=libvitis_ai_library-reidtracker.so
    /lib/libvitis_ai_library-multitask.so=libvitis_ai_library-multitask.so
    /lib/libvitis_ai_library-multitaskv3.so=libvitis_ai_library-multitaskv3.so
    /lib/libvitis_ai_library-platedetect.so=libvitis_ai_library-platedetect.so
    /lib/libvitis_ai_library-platenum.so=libvitis_ai_library-platenum.so
    /lib/libvitis_ai_library-platerecog.so=libvitis_ai_library-platerecog.so
    /lib/libvitis_ai_library-carplaterecog.so=libvitis_ai_library-carplaterecog.so
    /lib/libvitis_ai_library-medicalsegmentation.so=libvitis_ai_library-medicalsegmentation.so
    /lib/libvitis_ai_library-medicaldetection.so=libvitis_ai_library-medicaldetection.so
    /lib/libvitis_ai_library-facerecog.so=libvitis_ai_library-facerecog.so
    /lib/libvitis_ai_library-facedetectrecog.so=libvitis_ai_library-facedetectrecog.so
    /lib/libvitis_ai_library-pointpillars.so=libvitis_ai_library-pointpillars.so
    /lib/libvitis_ai_library-pointpillars_nuscenes.so=libvitis_ai_library-pointpillars_nuscenes.so
    /lib/libvitis_ai_library-pointpainting.so=libvitis_ai_library-pointpainting.so
    /lib/libvitis_ai_library-centerpoint.so=libvitis_ai_library-centerpoint.so
    /lib/libvitis_ai_library-retinaface.so=libvitis_ai_library-retinaface.so
    /lib/libvitis_ai_library-mnistclassification.so=libvitis_ai_library-mnistclassification.so
    /lib/libvitis_ai_library-medicalsegcell.so=libvitis_ai_library-medicalsegcell.so
    /lib/libvitis_ai_library-bcc.so=libvitis_ai_library-bcc.so
    /lib/libvitis_ai_library-pmg.so=libvitis_ai_library-pmg.so
    /lib/libvitis_ai_library-general1.so=libvitis_ai_library-general1.so
    /lib/libvitis_ai_library-multi_runner.so=libvitis_ai_library-multi_runner.so
    /lib/libvitis_ai_library-arflow.so=libvitis_ai_library-arflow.so
    /lib/libvitis_ai_library-bevdet.so=libvitis_ai_library-bevdet.so
    /lib/libvitis_ai_library-c2d2_lite.so=libvitis_ai_library-c2d2_lite.so
    /lib/libvitis_ai_library-cflownet.so=libvitis_ai_library-cflownet.so
    /lib/libvitis_ai_library-clocs.so=libvitis_ai_library-clocs.so
    /lib/libvitis_ai_library-efficientdet_d2.so=libvitis_ai_library-efficientdet_d2.so
    /lib/libvitis_ai_library-fairmot.so=libvitis_ai_library-fairmot.so
    /lib/libvitis_ai_library-fusion_cnn.so=libvitis_ai_library-fusion_cnn.so
    /lib/libvitis_ai_library-monodepth2.so=libvitis_ai_library-monodepth2.so
    /lib/libvitis_ai_library-movenet.so=libvitis_ai_library-movenet.so
    /lib/libvitis_ai_library-ocr.so=libvitis_ai_library-ocr.so
    /lib/libvitis_ai_library-ofa_yolo.so=libvitis_ai_library-ofa_yolo.so
    /lib/libvitis_ai_library-pmrid.so=libvitis_ai_library-pmrid.so
    /lib/libvitis_ai_library-polypsegmentation.so=libvitis_ai_library-polypsegmentation.so
    /lib/libvitis_ai_library-solo.so=libvitis_ai_library-solo.so
    /lib/libvitis_ai_library-textmountain.so=libvitis_ai_library-textmountain.so
    /lib/libvitis_ai_library-tracker.so=libvitis_ai_library-tracker.so
    /lib/libvitis_ai_library-ultrafast.so=libvitis_ai_library-ultrafast.so
    /lib/libvitis_ai_library-vehicleclassification.so=libvitis_ai_library-vehicleclassification.so
    /lib/libvitis_ai_library-yolov6.so=libvitis_ai_library-yolov6.so
    /lib/libvitis_ai_library-yolovx.so=libvitis_ai_library-yolovx.so
    /lib/libxmodel_postprocessor_common.so=libxmodel_postprocessor_common.so
    /lib/libxmodel_postprocessor_classification.so=libxmodel_postprocessor_classification.so
    /lib/libxmodel_postprocessor_densebox.so=libxmodel_postprocessor_densebox.so
    /lib/libxmodel_postprocessor_plate_number.so=libxmodel_postprocessor_plate_number.so
    /lib/libxmodel_postprocessor_face_recognition.so=libxmodel_postprocessor_face_recognition.so
    /lib/libxmodel_postprocessor_face_quality.so=libxmodel_postprocessor_face_quality.so
    /lib/libxmodel_postprocessor_face_landmark.so=libxmodel_postprocessor_face_landmark.so
    /lib/libxmodel_postprocessor_open_pose.so=libxmodel_postprocessor_open_pose.so
    /lib/libxmodel_postprocessor_segmentation.so=libxmodel_postprocessor_segmentation.so
    /lib/libxmodel_postprocessor_ssd.so=libxmodel_postprocessor_ssd.so
    /lib/libxmodel_postprocessor_yolov3.so=libxmodel_postprocessor_yolov3.so
    /lib/libxmodel_preprocessor_common.so=libxmodel_preprocessor_common.so
    /lib/libxmodel_preprocessor_vgg.so=libxmodel_preprocessor_vgg.so
    /lib/libxmodel_preprocessor_efficientnet.so=libxmodel_preprocessor_efficientnet.so

     

  2. Add sample applications
    There are a few sample applications packaged with the Vitis-AI package. Add these lines to the build file after the Vitis-AI libraries.

    /usr/bin/resnet50=resnet50
    /usr/bin/test_jpeg_facedetect=test_jpeg_facedetect
    /usr/bin/segmentation=segmentation
  3. Add Python support
    Add the following to include Python support. This is optional and only required for exercising the VAI Python APIs.

/usr/lib/libpython3.11.so=libpython3.11.so
/usr/lib/libffi.so=libffi.so
[search=${QNX_TARGET}/${PROCESSOR}/usr/lib/python3.11/site-packages]/usr/lib/python3.11/site-packages/xir.so=xir.so
[search=${QNX_TARGET}/${PROCESSOR}/usr/lib/python3.11/site-packages]/usr/lib/python3.11/site-packages/vart.so=vart.so
/usr/bin/dirname=dirname
/usr/bin/python3=python3
/usr/bin/python3.11=python3.11
  4. Add OpenCV support
    Vitis-AI requires access to OpenCV, so add the following OpenCV libraries to the IFS.
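    The exact library set depends on the QNX OpenCV package build; the entries below are a representative sketch using standard OpenCV module names, so match them against the libraries actually shipped in your installation:

    # Sketch -- representative OpenCV entries; confirm against the QNX OpenCV package
    /usr/lib/libopencv_core.so=libopencv_core.so
    /usr/lib/libopencv_imgproc.so=libopencv_imgproc.so
    /usr/lib/libopencv_imgcodecs.so=libopencv_imgcodecs.so
    /usr/lib/libopencv_videoio.so=libopencv_videoio.so
    /usr/lib/libopencv_video.so=libopencv_video.so
    /usr/lib/libopencv_highgui.so=libopencv_highgui.so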

     

Setup Vitis Flow

The following steps should be taken for the Vitis flow only.

  1. Add AXI_INTC support to the QNX startup process.
    The Vitis platform used to implement the Vitis Flow reference design (2022.2 ZCU102 Vitis Base Platform) includes an AXI_INTC interrupt controller in the programmable logic to aggregate up to 32 interrupts from different compute units, including the DPU. The output of the AXI_INTC connects to the GIC through the PL->PS interrupt bus. Therefore, we need to tell the QNX startup process to enable and configure the AXI_INTC.

    In the .bootstrap section, modify the startup-xzynq-zcu10x command to add an argument configuring the base address (0x80020000) and cascaded GIC interrupt (121) of the AXI_INTC IP in the PL, as sketched below.
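    The exact option syntax is defined by the BSP's startup source, so treat the line below as a sketch only; the -x <base>,<irq> form is an assumption to be confirmed against the startup-xzynq-zcu10x use message:

    # Sketch -- assumed option telling startup to configure the AXI_INTC at
    # 0x80020000, cascaded into GIC interrupt 121
    startup-xzynq-zcu10x -x 0x80020000,121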

  2. Start the zocl driver at boot time.
    XRT relies on the zocl device driver (or QNX device resource manager) to manage communications and data transfer with the connected compute units, including the DPU. Therefore, we need to start up the driver and inform it what interrupts are assigned to it.

    In the .script section, AFTER /sbin/screen is run and before the QSPI flash section, add a command to start the zocl driver for XRT, as sketched below. The interrupts are design-specific and match the configuration of the ZCU102 base Vitis platform used in the DPU TRD, which has 32 interrupts connected to the AXI_INTC IP.
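    A sketch of the start command follows; the option spelling is an assumption, so consult the zocl resource manager's use message for the actual arguments:

    # Sketch -- start zocl, claiming GIC interrupt 121 with 32 cascaded
    # AXI_INTC interrupts (matches the ZCU102 base Vitis platform)
    zocl -i 121,32 &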

     

  3. Add XRT libs/drivers/apps to the filesystem.
    Several libraries, the zocl driver, and the xbutil utility need to be installed in the root file system. Add the following lines to the build file after the .script section, placing them after the # Shared libraries section and before the ## Tracelog section:
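    A sketch of these entries follows; the library names and the zocl driver location are assumptions, so confirm them against the contents of the QNX XRT package:

    ###########################################################################
    ## XRT
    ###########################################################################
    # Sketch -- confirm names/paths against the QNX XRT package
    /usr/lib/libxrt_core.so=libxrt_core.so
    /usr/lib/libxrt_coreutil.so=libxrt_coreutil.so
    /sbin/zocl=zocl
    /usr/bin/xbutil=xbutil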

  4. Add XLNX_VART_FIRMWARE environment variable
    Because dpu.xclbin is on the SD card, we need to set an environment variable so that VAI knows where to find it. Add the following line inside the section where /etc/profile is defined:
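    A minimal sketch, assuming dpu.xclbin sits at the root of the FAT partition (which the boot script mounts at /, as described in the boot section below):

    # Inside the /etc/profile definition in the build file
    export XLNX_VART_FIRMWARE=/dpu.xclbin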

 

Setup Vivado Flow

The following steps should be taken for the Vivado flow only.

  1. Add XRT Minimal Libraries

Since the VART library is shared between the two flows, a minimal set of XRT libraries is required even for the Vivado flow. Add the following after the # Shared libraries section and before the ## Tracelog section:
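A sketch of the two entries follows; the exact pair of XRT runtime libraries that VART links against should be confirmed against the QNX XRT package:

    # Sketch -- minimal XRT libraries for the Vivado flow
    /usr/lib/libxrt_core.so=libxrt_core.so
    /usr/lib/libxrt_coreutil.so=libxrt_coreutil.so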

  2. Add the DPU Driver

First, in the section where we added the Vitis-AI libraries, add the DPU driver:
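A sketch of the entry follows; the driver binary name is an assumption, so use the name shipped with the QNX Vitis-AI package:

    # Sketch -- DPU driver binary for the Vivado flow
    /sbin/dpu=dpu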

Next, in the .script section, AFTER /sbin/screen is run and before the QSPI flash section, add the following command to start the DPU driver for the Vivado flow (see the sketch after the list below).

In this example the base address and interrupt information match that of the default VAI 3.0 DPU TRD for the Vivado flow:

  • DPU Base Address (-a)

    • 0x8F00_0000

  • Interrupts (-i)

    • In this design, there are four interrupts from the DPU IP: one for each DPU core, and one for the Softmax core. The interrupts are connected directly from the DPU IP to the PS GIC via the PL->PS interrupt interface (pl_ps_irq0[3:0]).

    • Source / Int # / GIC #:

      • DPU0: 0x59 / 121

      • DPU1: 0x5a / 122

      • DPU2: 0x5b / 123

      • SFM: 0x5c / 124
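A sketch of the start command, built from the -a/-i values above; the driver name and argument spelling are assumptions to be checked against the QNX Vitis-AI package documentation:

    # Sketch -- start the DPU driver with the TRD's base address and interrupts
    dpu -a 0x8F000000 -i 121,122,123,124 &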

Add Extra Utilities

Next, add some useful utilities at the end of the .build file:
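A representative sketch of such entries follows; these are standard QNX utilities, and the exact set is a matter of taste:

    ###########################################################################
    ## Extra utilities
    ###########################################################################
    /bin/ls=ls
    /bin/cat=cat
    /bin/cp=cp
    /bin/pidin=pidin
    /usr/bin/top=top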

Mount the FAT partition
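The FAT partition must also be mounted at boot so that dpu.xclbin and the model files are visible. A minimal sketch for the .script section, assuming typical ZCU102 SD device naming (verify the device with ls /dev on the target):

    # Sketch -- mount the SD card's FAT partition at /
    mount -t dos /dev/sd0t11 /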

Finally, save zcu102.build.

Build the BSP
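Once the build file edits are complete, build the project in Momentics (right-click the project in the Project Explorer and select Build Project). The resulting QNX-IFS image and boot files are generated in the images directory.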

Preparing the Hardware

Creating the SD Card

From the BSP images directory, you need to copy the QNX-IFS and boot.scr files to a bootable FAT partition on the SD card. If you used the VAI 3.0 SD Image referenced above to create the SD card for the Vitis flow, you should rename the original boot.scr before copying in case you want to go back to a Linux boot.

  1. Copy QNX-IFS to FAT partition.

  2. Copy boot.scr to FAT partition.

  3. If not already there, copy the dpu.xclbin from the Vitis project to the FAT partition.

  4. Create a directory called /etc on the FAT partition. This is needed so that VAI has write access to /etc.

For more information regarding creating a bootable SD card without using the VAI 3.0 SD card image, please refer to the ZCU102 QNX BSP User Guide section called “Prepare a bootable SD card”.

This is what your SD card should look like after all the above steps:

Booting up the Board

  1. Insert the card in the ZCU102 and power on the board.

  2. As part of the default boot process, the boot script will mount the SD card at /.

The console for the ZCU102 should look like this after a successful boot:

At this point, your board is ready to run Vitis AI 3.0 applications.

Running the Test Applications

The VAI 3.0 package includes three pre-compiled test applications that demonstrate three types of machine learning models:

  • resnet50 for object classification

  • densebox for face detection

  • segmentation for semantic segmentation

These applications have been included in the QNX-IFS during the build process. However, we need some other files to exercise the tests, including machine learning model files (.xmodel) and test images/videos. For simplicity, the instructions here assume that you will download any necessary files on your development machine and copy them to the SD card. The applications are provided so that a basic Vitis-AI 3.0 system validation can be done. Building custom Vitis-AI applications in the QNX environment is outside the scope of this document.

Downloading the Files

Power off the target, and take your SD card back to your development machine. Next, use a browser or other means to download the files listed below:

  1. On the FAT partition, create a folder called vai-test

  2. Extract resnet50-zcu102_zcu104_kv260-r3.0.0.tar.gz on your machine, and copy the resnet50 directory to /vai-test directory on the SD Card

  3. Copy words.txt to /vai-test/resnet50

  4. Extract vitis_ai_runtime_r3.0.0_image_video.tar.gz and copy the images directory to /vai-test

  5. Eject the SD card and reinsert it into the ZCU102
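On a Linux development machine, the download-and-copy steps above look roughly like this; the SD card mount point is illustrative (on Windows, use Explorer instead):

    SD=/media/$USER/BOOT            # FAT partition of the SD card
    mkdir -p $SD/vai-test
    tar -xzf resnet50-zcu102_zcu104_kv260-r3.0.0.tar.gz
    cp -r resnet50 $SD/vai-test/
    cp words.txt $SD/vai-test/resnet50/
    tar -xzf vitis_ai_runtime_r3.0.0_image_video.tar.gz
    cp -r images $SD/vai-test/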

Classification

The pre-built ResNet50 test application is built from sources found here: https://github.com/Xilinx/Vitis-AI/tree/3.0/examples/vai_runtime/resnet50

In the serial console, cd to the resnet50 directory you created and run the resnet50 example, providing the .xmodel you downloaded as an argument:
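A minimal sketch, assuming the SD layout created above and the SD card mounted at /:

    # On the target's serial console
    cd /vai-test/resnet50
    resnet50 ./resnet50.xmodel   # model file name follows the extracted package

The sample expects words.txt alongside it in the working directory (hence the copy step above), so run it from the resnet50 directory.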

Input Image

Output Results

Your output should be similar to what is shown above. If so, Vitis AI is installed properly in your system.

Face Detection

Coming soon

Semantic Segmentation

Coming soon

Python Examples

Coming soon

 

Appendix

Vitis Flow Build File

The following is an example zcu102.build that incorporates all of the steps detailed in the sections above for the Vitis Flow. The original file is zcu102-graphics.build from the ZCU102 BSP, Build 16.

Vivado Flow Build File

The following is an example zcu102.build that incorporates all of the steps detailed in the sections above for the Vivado Flow. The original file is zcu102-graphics.build from the ZCU102 BSP, Build 16.
