AMD Yocto Native Board Support Packages

This page is a How To / User Guide to accompany the AMD Yocto Native BSPs, allowing users to boot AMD Evaluation boards with Yocto-based pre-built images, and to re-build or extend those images.


Introduction

What is a Yocto Native Board Support Package?

A Yocto Native Board Support Package (BSP) targets an AMD Evaluation board and is intended as a starting point for development work on custom software or hardware designs. The BSPs are built in a Yocto environment using System Device Tree (SDT) based development flows. All BSPs include:

Intent of this page

The intent of this page is to allow users to:

  • Boot AMD Evaluation boards to Linux and configure the Programmable Logic using the provided pre-built system images

  • Re-build the pre-built system image including re-generating all hardware and software artifacts

  • Understand the basic steps to extend the pre-built system images (Programmable Logic content or Embedded Software) and create a new system image

  • Locate additional resources to enable further exploration or re-targeting of these flows to custom boards as part of a Yocto-based pathway to the release of a production system.

The Vivado tool set is required to re-build or edit the hardware design

Release Information

The 2024.2 Developer Preview Early Access release of Yocto Native BSPs makes use of the Yocto-based PetaLinux DISTRO and Package Feed, and ‘PetaLinux’ is displayed on the terminal prompt and in other areas. This is scheduled to be updated in the next release.

Content and Download Links

Release: 2024.2

Release Notes:

  • Initial Developer Preview release with support for the following AMD Evaluation boards:

    • VEK280

    • ZCU104

  • Download links for pre-built system images and build artifacts by board:

    • VEK280

    • ZCU104

Additional references:

  • Top level Download locations, structure and naming scheme

  • Yocto Native BSP Release Notes - 2024.2 Update2

  • Yocto Project related Release Notes - Xilinx Wiki - Confluence

 

 

 

System Boot Architecture and Default Image configuration

System Boot Architecture

AMD Yocto Native BSP images use a multi-stage boot process, with Segmented Configuration used on Versal devices. Boot starts with the BootROM on the device and then passes to either the First Stage Boot Loader (FSBL) or the Platform Management Controller (PMC) running the Platform Loader and Manager (PLM), depending on the hardware architecture (Zynq™ UltraScale+™ MPSoC or Versal™). Once this step is complete, control is handed off to Arm Trusted Firmware and then to U-Boot, which handles loading Linux. The Programmable Logic (PL) firmware is loaded from Linux user space as a Device Tree Overlay (.dtbo) using the fpgautil command line utility.

For details on the boot and configuration process, please see the following documentation:

Example Vivado Hardware design

The AMD Yocto Native BSPs are targeted to a specific board and a specific SoC (PS and NoC) configuration. The configurations are contained in, and their creation is automated by, this repository. For detail on the specific configuration, re-generate and view the hardware design in Vivado following the steps here.

To replicate the steps to manually create the design or modify the hardware configuration, you will need to create a hardware project in Vivado, export it to .xsa and then convert that .xsa into a system device tree directory. These steps are covered in the relevant sections below.

The "system device tree" is a text file or files that that describe the hardware available to the software. The "system device tree directory" includes the system device tree and additional files necessary for a successful device boot, including the CDO files needed for NOC configuration. 

Future content to follow; see AMD Yocto Native Board Support Packages | Future content to follow.

Default Embedded Software Image Configuration

The configuration of the Embedded Software Image, Linux kernel, RootFS, boot loaders and other items is contained in the Yocto Machine configurations and related configuration and settings files. Please see the relevant recipes and configuration files mentioned in the build sections for more information.
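As a minimal sketch (assuming the Yocto build environment described in the build sections below has been set up), the resolved configuration for a given machine and image can be inspected with bitbake:

# Print the final value of a variable, e.g. the packages selected for the rootfs
$ MACHINE=versal-vek280-sdt-seg bitbake -e petalinux-image-minimal | grep "^IMAGE_INSTALL="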

Future content to follow; see AMD Yocto Native Board Support Packages | Future content to follow.

How to boot a board using the pre-built Images : SD Mode

QEMU: QEMU binaries are not generated by default but are supported; see the following section: AMD Yocto Native Board Support Packages | How to build and boot using QEMU

1. Writing the Image to the boot media

The .wic.xz pre-built images (see AMD Yocto Native Board Support Packages | Release Information) can be written to an SD card using either an application such as balenaEtcher or a command line utility such as 'dd'.

Depending on the tool used, the .xz archive may need to be un-archived before it can be written.
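For example, a minimal host-side sketch (the device node /dev/sdX and the file name placeholders are illustrative; confirm the correct SD card device with lsblk before writing):

# Decompress the image (keeping the original with -k) and write the resulting .wic to the SD card
$ xz -dk petalinux-image-minimal-<machine>-<timestamp>.wic.xz
$ sudo dd if=petalinux-image-minimal-<machine>-<timestamp>.wic of=/dev/sdX bs=1M status=progress conv=fsync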

balenaEtcher: Follow the on-screen prompts to select the input file and output path.


dd: Standard dd commands can be used (see the dd command line help). If the target system is already booted to Linux, the image can be transferred to the target system via Ethernet and dd run on the target. Below is example output from dd running on the target. Note: in the example below, xzcat is used to un-archive the compressed image prior to it being passed to dd for writing to the boot media.

user@enho:~$ xzcat petalinux-image-minimal-versal-vek280-sdt-seg-20241226190706.wic.gz | sudo dd of=/dev/mmcblk0 bs=1M status=progress
[sudo] password for user:
728449024 bytes (728 MB, 695 MiB) copied, 4 s, 182 MB/s
0+109379 records in
0+109379 records out
898891776 bytes (899 MB, 857 MiB) copied, 66.8311 s, 13.5 MB/s

For the ZCU104, use petalinux-image-minimal-zynqmp-zcu104-sdt-full-20241226193813.wic.gz

2. Booting the Board to Linux

Once the image has been written to the boot media, boot the board using the boot settings for the boot source as documented in the relevant board user guide.

3. Loading the pre-built Programmable Logic (PL) firmware

The pre-built images contain example PL firmware which can be loaded/configured from Linux user space as a Device Tree Overlay using the fpgautil Linux command line utility. Once Linux has booted to user space, the commands below can be used to configure the PL fabric.

# fpgautil -b /lib/firmware/xilinx/<.bin/.pdi> -o /lib/firmware/xilinx/<Device Tree Overlay>.dtbo 

Note the location of the PL firmware binaries (.bin/.pdi) and device tree overlay (.dtbo) files.

Example Commands and Logs ZCU104

zynqmp-zcu104-sdt-full:/home/petalinux# fpgautil -b /lib/firmware/xilinx/zcu104-pl-vcu-fw/zcu104-pl-vcu-fw.bin -o /lib/firmware/xilinx/zcu104-pl-vcu-fw/zcu104-pl-vcu-fw.dtbo

root@zynqmp-zcu104-sdt-full:~# cd /lib/firmware/xilinx/zcu104-pl-vcu-fw/
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw#
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw#
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw#
fpgautil -b zcu104-pl-vcu-fw.bin -o zcu104-pl-vcu-fw.dtbo
mkdir: can't create directory '/configfs': File exists
[ 176.938059] fpga_manager fpga0: writing zcu104-pl-vcu-fw.bin to Xilinx
ZynqMP FPGA Manager
[ 177.093535] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /fpga-region/firmware-name
[ 177.104949] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/clocking3
[ 177.114891] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/clocking2
[ 177.124820] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/clocking1
[ 177.134752] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/clocking0
[ 177.144679] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/afi0
[ 177.154187] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/misc_clk_3
[ 177.164209] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/misc_clk_2
[ 177.174227] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/misc_clk_1
[ 177.184241] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/misc_clk_0
[ 177.194258] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/axi_intc_0
[ 177.204275] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/v_frmbuf_rd_0
[ 177.214553] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/v_frmbuf_wr_0
[ 177.224829] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/vcu_0
[ 177.234412] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/encoder
[ 177.244169] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/decoder
[ 177.253925] OF: overlay: WARNING: memory leak will occur if overlay
removed, property: /__symbols__/vcu_ddr4_controller_0
[ 177.583473] irq-xilinx: /axi/interrupt-controller@80000000: num_irq=1,
edge=0x1
[ 177.592049] xilinx-frmbuf a0000000.v_frmbuf_rd: Xilinx AXI frmbuf
DMA_MEM_TO_DEV
[ 177.599704] xilinx-frmbuf a0000000.v_frmbuf_rd: Xilinx AXI FrameBuffer
Engine Driver Probed!!
[ 177.619242] xilinx-frmbuf a0010000.v_frmbuf_wr: Xilinx AXI frmbuf
DMA_DEV_TO_MEM
[ 177.626860] xilinx-frmbuf a0010000.v_frmbuf_wr: Xilinx AXI FrameBuffer
Engine Driver Probed!!
[ 177.636335] platform a0140000.vcu: Fixed dependency cycle(s) with
/axi/vcu@a0100000
[ 177.645180] xilinx-vcu a0140000.vcu: could not find xlnx,vcu-settings:
trying direct register access
Time taken to load BIN is 728.000000 Milli Seconds
BIN FILE loa[ 177.713111] al5e a0100000.al5e: l2 prefetch size:12451840
(bits), l2 color bitdepth:10
ded through FPGA manager successfully
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw# [
177.730453] al5d a0120000.al5d: l2 prefetch size:12451840 (bits), l2 color
bitdepth:10
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw#

 

-----------------------------------------------------------------------------------------------
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw# lsmod
Module Size Used by
al5d 16384 0
al5e 16384 0
allegro 36864 2 al5e,al5d
xlnx_vcu 16384 5 allegro
mali 225280 5
cfg80211 368640 0
uio_pdrv_genirq 12288 0
openvswitch 139264 0
nsh 12288 1 openvswitch
nf_nat 36864 1 openvswitch
dmaproxy 12288 0
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw# [
196.141055] systemd-journald[223]: Sent WATCHDOG=1 notification.
Device hw_server command: "hw_server" (pid 159415) exited with status 0.
root@zynqmp-zcu104-sdt-full:/lib/firmware/xilinx/zcu104-pl-vcu-fw#

Example Commands and Logs VEK280

versal-vek280-sdt-seg:/home/petalinux# fpgautil -b /lib/firmware/xilinx/vek280-pl-aie-vdu-fw/vek280-pl-aie-vdu-fw.pdi -o /lib/firmware/xilinx/vek280-pl-aie-vdu-fw/vek280-pl-aie-vdu-fw.dtbo

versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux# ls /usr/lib/libxa*
/usr/lib/libxaiengine.so.3 /usr/lib/libxaiengine.so.3.6
versal-vek280-sdt-seg:/home/petalinux# fpgautil -b /lib/firmware/xilinx/vek280-pl-aie-vdu-fw/vek280-pl-aie-vdu-fw.pdi -o /lib/firmware/xilinx/vek280-pl-aie-vdu-fw/vek280-pl-aie-vdu-fw.dtbo
[ 114.356234] fpga_manager fpga0: writing vek280-pl-aie-vdu-fw.pdi to Xilinx Versal FPGA Manager
[124799.404]Loading PDI from DDR
[124799.484]Monolithic/Master Device
[124802.953]3.548 ms: PDI initialization time
[124806.794]+++Loading Image#: 0x0, Name: pl_cfi, Id: 0x18700001
[124812.469]---Loading Partition#: 0x0, Id: 0x103
[124816.978] 0.124 ms for Partition#: 0x0, Size: 16640 Bytes
[124822.184]---Loading Partition#: 0x1, Id: 0x105
[124826.782] 0.213 ms for Partition#: 0x1, Size: 3872 Bytes
[124831.814]---Loading Partition#: 0x2, Id: 0x205
[124837.893] 1.694 ms for Partition#: 0x2, Size: 30640 Bytes
[124841.530]---Loading Partition#: 0x3, Id: 0x203
[124845.957] 0.042 ms for Partition#: 0x3, Size: 928 Bytes
[124851.074]---Loading Partition#: 0x4, Id: 0x303
[124871.843] 16.383 ms for Partition#: 0x4, Size: 1134816 Bytes
[124874.591]---Loading Partition#: 0x5, Id: 0x305
[124881.273] 2.296 ms for Partition#: 0x5, Size: 138080 Bytes
[124884.395]---Loading Partition#: 0x6, Id: 0x403
[124888.907] 0.126 ms for Partition#: 0x6, Size: 146512 Bytes
[124894.199]---Loading Partition#: 0x7, Id: 0x405
[124898.809] 0.225 ms for Partition#: 0x7, Size: 3904 Bytes
[124903.849]Subsystem PDI Load: Done
[ 114.475703] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /fpga-region/firmware-name
[ 114.488827] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/misc_clk_1
[ 114.498850] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/misc_clk_0
[ 114.508865] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/axi_intc_parent
[ 114.519323] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/ai_engine_0
[ 114.529431] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/aie_aperture_0
[ 114.539788] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/axi_gpio_0
[ 114.549793] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/dummy_slave_0
[ 114.560057] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/vdu_0
[ 114.569630] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/aie_core_ref_clk_0
[ 114.580331] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/al5d0
[ 114.589902] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/al5d1
[ 114.650554] irq-xilinx: /axi/interrupt-controller@a4820000: num_irq=32, edge=0xffffffff
[ 114.659170] xilinx-ai-engine 20000000000.ai_engine: Failed to get device clock.
[ 114.670980] aie aieaperture_0_38: AI engine aperture aieaperture_0_38, id 0x18800000, cols(0, 38) aie_tile_rows(3, 8) memory_tile_rows(1, 2) is probed successfully.
[ 114.685723] xilinx-ai-engine 20000000000.ai_engine: Xilinx AI Engine device 20000000000.ai_engine probed. Device generation: 2. Clock frequency: 1250000000Hz.
Time taken to load BIN is 317.000000 Milli Seconds
BIN FILE loaded through FPGA manager successfully
versal-vek280-sdt-seg:/home/petalinux# [ 114.728689] al5d_codec a4020000.al5d: No parent vdu node found!
[ 114.749363] al5d_codec a4020000.al5d: al5d probe successful
[ 114.755180] al5d_codec a4120000.al5d: No parent vdu node found!
[ 114.772276] al5d_codec a4120000.al5d: al5d probe successful

versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux# aie-matrix-multiplication /usr/bin/aie-matrix-multiplication.xclbin
Initializing ADF API...
[INFO] AIE GMIO Matrix Multiplication
[INFO] Matrix size(int32): 608x608
[ 129.190080] zocl-drm axi:zyxclmm_drm: zocl_create_client: created KDS client for pid(494), ret: 0
[ 129.199002] zocl-drm axi:zyxclmm_drm: zocl_destroy_client: client exits pid(494)
[ 129.216237] zocl-drm axi:zyxclmm_drm: zocl_create_client: created KDS client for pid(494), ret: 0
[ 129.644664] [drm] Found free Slot-1 is selected for xclbin
[ 129.650278] [drm] Allocated slot 1 to load xclbin in device
[ 129.650278] .
[ 129.650289] [drm] found kind 29(AIE_RESOURCES)
[139954.144]Loading PDI from DDR
[139954.225]Monolithic/Master Device
[139957.687]3.542 ms: PDI initialization time
[139961.533]+++Loading Image#: 0x0, Name: aie_image, Id: 0x18800000
[139967.466]---Loading Partition#: 0x0, Id: 0x0
[139996.876] 25.197 ms for Partition#: 0x0, Size: 10753312 Bytes
[139999.728]Subsystem PDI Load: Done
[ 129.657529] [drm] found kind 18(PDI)
[ 129.719339] [drm] FPGA Manager load DONE
[ 129.728472] [drm] skip kind 8(IP_LAYOUT) return code: -22
[ 129.732416] [drm] skip kind 9(DEBUG_IP_LAYOUT) return code: -22
[ 129.737816] [drm] found kind 25(AIE_METADATA)
[ 129.743762] [drm] skip kind 7(CONNECTIVITY) return code: -22
[ 129.748117] [drm] skip kind 6(MEM_TOPOLOGY) return code: -22
[ 129.761674] [drm] AIE create successfully finished.
XAIEFAL: INFO: Resource group Avail is created.
XAIEFAL: INFO: Resource group Static is created.
XAIEFAL: INFO: Resource group Generic is created.
[INFO] XCLBIN download complete
[ 129.767363] [drm] zocl_xclbin_read_axlf 44985203-5fd4-7178-e59a-eb7d48993b30 ret: 0
[ 129.931678] [drm] Allocating BO from CMA for invalid or unused memory index[0]
[ 129.951180] [drm] Allocating BO from CMA for invalid or unused memory index[0]
[ 129.958925] [drm] Allocating BO from CMA for invalid or unused memory index[0]
[INFO] AIE cores are done executing
[INFO] Running sanity check
[INFO] XGeMM Success!
[ 135.405477] [drm] AIE Reset successfully finished.
[ 135.405563] zocl-drm axi:zyxclmm_drm: zocl_destroy_client: client exits pid(494)
versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux#
versal-vek280-sdt-seg:/home/petalinux#

How to re-build the embedded software Images using pre-built hardware artifacts

The default build flow (steps below) uses pre-built artifacts from the Vivado Hardware Design flow and System Device Tree generation flows. Later sections cover detail on re-generating these artifacts.

1. Setting up the Yocto Environment to build the embedded software images

AMD provides a repo tool manifest file for its Yocto Native BSP build environment to help ensure all layers needed for the build are available and in sync. To re-create or re-build the BSPs, or for related projects, this manifest provides a recommended starting point. Refer to the AMD Xilinx Yocto wiki page for release versions, features, known issues and limitations.

Set up the Yocto environment using the repo tool:

$ repo init -u https://github.com/Xilinx/yocto-manifests.git -b rel-v2024.2
$ repo sync
$ source setupsdk

2. Generating the Yocto Machine

The gen-machineconf tool provided by AMD is used to convert AMD style hardware description information ('System Device-tree Directory' created by the SDTGen Tool) into a Yocto style machine configuration.

  • gen-machineconf is a Yocto environment tool included in meta-xilinx, and is also separately downloadable.

  • gen-machineconf can be re-run to update a machine configuration if the System Device-tree Directory has been updated.

  • A pre-built 'System Device-tree Directory' is provided as part of the Yocto Native BSP for each board, demonstrating the possible separation between the hardware and software build processes.

 

Use gen-machineconf to create a machine using a downloaded pre-built 'System Device-tree Directory'

$ gen-machineconf parse-sdt --hw-description <System Device-tree Directory> -c conf -l conf/local.conf --machine-name <Name for your machine/board>

# Example for VEK280 using the pre-built System Device-tree Directory, see release links for download path
$ gen-machineconf parse-sdt --hw-description <path to download directory>/versal-vek280-sdt-seg_2024.2_1106_1/ -c conf -l conf/local.conf --machine-name versal-vek280-sdt-seg

See the readme for gen-machineconf for more information (ensure you select the appropriate release branch; AMD does not use main as a development branch): GitHub - Xilinx/gen-machine-conf: Parse hardware design information to generate project specific machine configuration

  • Users should not modify the machine layer created by gen-machineconf; instead, they should make modifications in local.conf or in their own layers, as gen-machineconf can overwrite any changes made in the layers it manages (see the sketch after this list).

  • gen-machineconf copies configuration files, not binaries. Items such as the FSBL or PLM that are included in binary form from Vivado will be re-created with the build tools included in Yocto, replacing those built by Vivado.

  • The SDTGen created 'System Device-tree Directory' can also be re-generated from the provided Vivado Hardware Design for a BSP by following the steps in AMD Yocto Native Board Support Packages | How to regenerate the Vivado™ Hardware Design, .xsa and System Device tree Dire...

    • The 'System Device-tree Directory' would also need to be re-generated if changes were made to the Vivado Hardware Design.
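As a minimal sketch (the feature shown is a generic Yocto example, not a default of these BSPs), such a change is kept in conf/local.conf or a user layer rather than in the generated machine configuration:

# conf/local.conf -- example override kept outside the layers managed by gen-machineconf
EXTRA_IMAGE_FEATURES:append = " debug-tweaks"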

 

3. Building the image

Build the image using standard Yocto commands:

$ MACHINE=<Name for your machine/board> bitbake petalinux-image-minimal

# Example for VEK280
$ MACHINE=versal-vek280-sdt-seg bitbake petalinux-image-minimal

The generated SD images in <yocto build directory>/tmp/deploy/images/<machine name specified on the gen-machineconf command>/ can now be written to boot media and the board booted following AMD Yocto Native Board Support Packages | How to boot a board using the pre-built Images : SD Mode
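For example (the exact file names and compression suffixes depend on the IMAGE_FSTYPES set by the generated configuration), the flashable images for the VEK280 machine can be located with:

$ ls tmp/deploy/images/versal-vek280-sdt-seg/ | grep wic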

The 2024.2 Developer Preview Early Access release of Yocto Native BSPs makes use of the Yocto-based PetaLinux DISTRO and Package Feed, and ‘PetaLinux’ is displayed on the terminal prompt and in other areas. This is scheduled to be updated in the next release.

How to regenerate the Vivado Hardware Design, .xsa and System Device-tree Directory

The Vivado tools are required to re-build or modify the hardware design

AMD Yocto Native BSPs are targeted to a specific board and use a multi-stage boot process that uses Segmented Configuration on Versal devices. This allows the PL load to be deferred until Linux user mode, and allows a different PL .pdi to be used with a common base .pdi for a specific SoC (PS/NoC) configuration. See AMD Yocto Native Board Support Packages | System Boot Architecture

The AMD Yocto Native BSPs' specific SoC (PS and NoC) configurations are defined within a board-specific Vivado hardware design, and are available and automated in this repository

  • There is a base project for each board

  • There might be multiple PL examples available, all built on the same common base.

 

Recommended starting point: re-create the pre-built Vivado hardware project and .xsa (with no modifications). This is covered in the sections below.

This guide does not cover AI Engine images, how to generate them, or how to add them to your final image; this will be added at a later date.

1. Recreating the pre-built Vivado hardware project and .xsa

The Vivado hardware design and .xsa for supported evaluation boards can be re-created and built using the AMD Tcl repo https://github.com/Xilinx/amd-yocto-hw-platforms

Follow the instructions in the amd-yocto-hw-platforms readme to recreate and compile the project https://github.com/Xilinx/amd-yocto-hw-platforms/blob/xlnx_rel_v2024.2/README.md

Example commands for generating the VEK280 Vivado Hardware design (following the steps in the linked readme above)

# Clone amd-yocto-hw-platforms to your local machine
$ git clone --recursive https://github.com/Xilinx/amd-yocto-hw-platforms.git

# Set up the Vivado tools from their installation directory
$ source <vivado-install-path>/settings64.sh

# Move to the directory for the VEK280 board, and run the build by calling 'make all'
$ cd amd-yocto-hw-platforms/eval_board_base/vek280_base
$ make all

2. Generating the System Device-tree directory from the Vivado .xsa

AMD uses a two-phase approach to hand over relevant files and information between the hardware and software development environments, and to create the Yocto Machine Configuration used in the embedded software image build.

  1. The System Device Tree Generator (SDTGen) tool uses the Vivado design output (.xsa) to generate a ‘System Device-tree Directory’ containing System Device Tree (SDT) files and other firmware and configuration files extracted from the .xsa.

  2. The gen-machineconf tool is then used with the System Device-tree Directory to create a Yocto Machine Configuration, aligned with industry-standard approaches.

The "system device tree file(s)" is a text file or files that describe the hardware available to the software. The "system Device-tree Directory" includes the system device tree and additional files necessary for a successful device boot, including the CDO files needed for NOC configuration. 

These processes use AMD provided Yocto "meta-" layers to carry out actions including the following:

  • Re-creating the bootloader image binaries: The bootloaders generated by Vivado are effectively discarded.

  • Copying the FPGA device image binaries (.bin,.pdi) into the correct locations so that they are available to the bootgen tool for inclusion in the new PDI file created by the Yocto Image build.

  • Converting the system level hardware description (system device tree .sdt) into a device tree(s) appropriate for the OS being targeted (in this case Linux). This process consists of removing hardware that is not accessible to the targeted OS, as well as converting the device tree from a generic form into the form needed by the OS (this might include renaming, removing, or converting parameters for each device).

Generate System Device-tree directory from the XSA

If you have generated your design from https://github.com/Xilinx/amd-yocto-hw-platforms using the make command, the .xsa and the System Device-tree directory will have been generated and you can skip this step.

To manually generate the System Device-tree directory, or if you made changes after design creation, follow the steps below.

$ source <vivado-install-path>/settings64.sh
$ xsct
xsct% sdtgen set_dt_param -xsa <your xsa from vivado>.xsa -dir sdt_outdir -board_dts <board dts file>
xsct% sdtgen generate_sdt
xsct% exit

Complete instructions for SDTGen can be found at https://github.com/Xilinx/system-device-tree-xlnx


Next Steps

The System Device-tree Directory can be used with gen-machineconf to create or update the Yocto Machine Configuration, and then to generate the embedded software image. See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#2.-Generating-the-Yocto-Machine for details.

 

Modifying PS / NOC Configuration and PL content : Full Custom flow

The Yocto Native embedded software images for AMD Evaluation boards depend on the PS configuration and NoC settings in the Vivado hardware project. For a full custom flow where the PS configuration and NoC settings might change, all items will need to be re-generated, as pre-built artifacts might not be compatible.

Additional manual steps might also be required to update the System Device Tree (recommended) or generated Device Trees to reflect IP added to the PL or available on the board.

The flows documented on this page can be used as a reference for a full custom design after initial design creation in Vivado:

  1. Create your Hardware design in Vivado, or modify one of the provided pre-configured designs

  2. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#Generate-System-Device-tree-directory-from-the-XSA

  3. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#1.-Setting-up-the-Yocto-Environment-to-build-the-embedded-software-images

  4. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#2.-Generating-the-Yocto-Machine

  5. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#3.-Building-the-image

  6. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#1.-Writing-the-Image-to-the-boot-media

  7. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#2.-Booting-the-Board-to-Linux

  8. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#3.-Loading-the-pre-built-Programable-Logic-(PL)-firmware

How to update/change the Embedded Software Image

This section is for users who want to modify the existing configuration: adding rootfs content, applying patches, changing SRCREV, BRANCH, SRCREPO, etc.

Yocto Environment Setup:

Refer to https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3138977818#Setting-up-the-Yocto-Environment-to-build-the-embedded-software-images

Applying software component patches to the existing recipe:

  1. Create the layer
    $ bitbake-layers create-layer ../sources/meta-user

  2. Add the layer 
    $ bitbake-layers add-layer ../sources/meta-user

  3. Create the files directory in the below path
    $ mkdir -p ../sources/meta-user/recipes-apps/ssw_components/files

  4. Copy patches to the files directory
    $ cp -rf 0001.example.patch ../sources/meta-user/recipes-apps/ssw_components/files

  5. Add the patch to the SRC_URI variable.

    Please see the below ssw_component.bbappend as an example:

    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

    SRC_URI:append = " file://0001.example.patch"

Customizing the Root File System

The linked content below maps steps in Yocto (and the equivalent in PetaLinux) to customize the rootfs, covering: including prebuilt applications, creating or adding custom applications, creating and adding custom kernel modules, auto-running an application at startup, adding layers, adding an existing recipe into the root file system, and adding a package group.

https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/2787311617/PetaLinux+to+Yocto+-+Command+Cross+Reference#Customizing-the-Root-File-System
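For instance, a minimal sketch (the package name is illustrative and must be provided by an available recipe or package feed) of adding an existing package to the root file system from conf/local.conf:

# conf/local.conf -- append a package to the image root file system
IMAGE_INSTALL:append = " htop"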

Building from Remote SRC

vim sources/meta-user/recipes-apps/ssw_components/ssw_component.bbappend 

REPO = "https://github.com/Xilinx/embeddedsw"

BRANCH = "master"
SRCREV = "33750739285bbc8e51355b91cd274a1651e97e8f"

Building from local external repo

# Build components from external src.
INHERIT += "externalsrc"
EXT_SRC_ESW = "<local repo path>"
EXTERNALSRC:pn-plm-firmware = "${EXT_SRC_ESW}"
EXTERNALSRC_BUILD:pn-plm-firmware = "${EXT_SRC_ESW}/build/plm"
SRCREV:pn-plm-firmware = "84be8c5dce9325e9f7bb4279682c18846ecfd3b3"

Skip Packaging Boot components into boot.bin

Add the below lines to the local.conf file to skip packaging boot components into boot.bin:

BIF_PARTITION_ATTR:remove = "plmfw"
BIF_PARTITION_ATTR:remove = "psmfw" BIF_PARTITION_ATTR:remove = "fsbl" BIF_PARTITION_ATTR:remove = "pmufw"

Optimizing the build time

The Yocto build infrastructure has smart caching of downloads and build state. This allows for a build to be vastly accelerated in cases where the artifacts have previously been built in the exact configuration required.
In order to take advantage of this, the build infrastructure allows the user to point to centralized SSTATE and Download directories or servers.

Add the below lines to conf/local.conf

# Add Pre-mirrors
SOURCE_MIRROR_URL = "https?://.*/.*  https://petalinux.xilinx.com/sswreleases/rel-v2024.2/downloads"

# Sstate mirror settings
SSTATE_MIRRORS = " \
    file://.* https://petalinux.xilinx.com/sswreleases/rel-v2024.2/aarch64/sstate-cache/PATH;downloadfilename=PATH \n \
"

How to build and boot using QEMU

To use QEMU you must first build the QEMU binaries, as they are not generated by default.

1. Build the qemu-helper-native package to set up QEMU network tap devices

$ MACHINE=versal-vek280-sdt-seg bitbake qemu-helper-native

# For the ZCU104, use MACHINE=zynqmp-zcu104-sdt-full

2. Now you can simulate the image using the QEMU emulator

Note: To terminate QEMU, the normal command is Ctrl-a x

$ MACHINE=versal-vek280-sdt-seg runqemu nographic slirp

# For the ZCU104, use MACHINE=zynqmp-zcu104-sdt-full

 

Yocto Layer descriptions

AMD provides a number of "meta-" layers that users can add to their Yocto project(s) to give users of AMD hardware access to the following: initial bootloaders (FSBL/PLM) and their build tooling, AMD's vendor branch of U-Boot, AMD's vendor branch of the Linux kernel, tooling to combine and modify open source components, and recommended defaults for open source components.

These layers are described in detail at this wiki page

meta-amd-adaptive-socs:

https://github.com/Xilinx/meta-amd-adaptive-socs

Layer containing the AMD Adaptive SoC BSP features such as evaluation board machine configuration files, multiconfig files, multiconfig device tree files, boot PDI, kernel configuration fragments, series configuration compiler (.scc) files, etc.

Future content to follow:

  • Example Hardware Design Configuration, and links to design methodology for Segmented Configuration enabled designs.

  • Image Layout, embedded software stack content, kernel configuration, rootfs content, Yocto Layers

  • Basic Yocto How To targeting AMD Hardware and Embedded Software Development flows

  • Alternative or additional PL payloads, Xen, OpenAMP

Related Links

 

