Custom Hardware Development AMD ZynqMP™ device portfolio

This is a getting started guide providing walkthrough-style examples using the AMD Embedded Development Framework (EDF). It covers initial board setup and running a pre-built disk image, hardware and software application development using the pre-built image, and full custom hardware and software builds and image creation.

Custom Hardware Development

A hardware developer can create one or more hardware (HW) designs, resulting in one or more firmware bundles (pl.bin). The firmware can be integrated with or built with the OS and boot images.

A software application can interact with the hardened and/or adaptive features of the hardware design. We will differentiate between the base HW design firmware which is baked into the boot image, and application-specific incremental HW design firmware that is loaded into the PL at Linux run-time in support of a specific application.

The following tutorials show how to generate a base HW design and its firmware package and two examples of application-specific HW designs and firmware. One is a Vivado™ Design Suite based design (RTL), the other a Vitis™ software platform-based design. We also show how to generate various handoff artifacts needed when moving from HW to the SW or the OS domain.

Typically, a hardware design team provides a new custom base HW design / Vivado™ Design Suite XSA that matches their product requirements, for example, a custom board or even a static PL design if the production image has a fixed function. The XSA is handed off to the OS development team which then generates a new Yocto Project™ machine configuration based on this base XSA using the provided sdtgen and gen-machine-conf tooling as a starting point (see https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3164209163).

The EDF Linux® distribution (distro) can be used as a starting point for the OS build, but the end-user might also opt to use the upstream Yocto Project poky distro or their own custom distro for their final product based on their product requirements. Customizing the OS configuration might require expert knowledge. Some common items are covered here https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3250586081 , and there is extensive documentation available online for this purpose (Yocto Project Mega-Manual ). Similarly, creating and customizing your own Vivado Design Suite/RTL design might require expert knowledge and there is extensive documentation available on this subject.

EDF Hardware Design Repository Overview

The following gives an overview of the hardware design repository which provides the baseline for the Yocto Project™ machine configuration used in this tutorial.

Note: The example commands below are for a ZCU104-based board design. For other Zynq boards, use the appropriate board names.

$ git clone git@github.com:Xilinx/amd-yocto-hw-platforms.git -b xlnx_rel_v2025.2
$ cd amd-yocto-hw-platforms
$ tree eval_board_base/zcu104_base/
eval_board_base/zcu104_base/
├── Makefile
└── scripts
    ├── gen_sdt.tcl
    └── main.tcl

The directory structure lists the base design that can be built from the EDF Hardware Design repo:

<board name>_base - This is the embedded common platform CED design mapped to a target evaluation board. It enables deferred PL load, which allows deployment of a common boot PDI (loaded from the primary boot device by the EDF Boot Firmware) together with a PL .bin that is loaded at Linux runtime.


The embedded common platform CED includes a minimal PL payload with some basic peripherals mapped to the evaluation board: a PL UART, DIP switches, push buttons, and user LEDs via a pair of AXI GPIO controllers, and a simple BRAM controller. The PL payload firmware (PL PDI and PL device tree .dtbo) is loaded from user space after Linux boots.
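As an illustration of the deferred PL load flow described above, the following sketch shows how a firmware bundle might be loaded from the Linux user space via the kernel's configfs device tree overlay interface. The file names, paths, and overlay directory name are illustrative assumptions, and the commands are wrapped in a dry-run helper so they only print by default (actually loading the PL requires root and the target hardware).

```shell
# Hypothetical sketch of a deferred PL load from Linux user space.
# pl.bin / pl.dtbo are placeholder names for the firmware bundle contents.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# Stage the firmware where the kernel firmware loader looks for it.
run cp pl.bin pl.dtbo /lib/firmware/
# Create a device tree overlay entry; writing the .dtbo path triggers the
# kernel to apply the overlay, which in turn loads pl.bin via the FPGA
# manager framework.
run mkdir /sys/kernel/config/device-tree/overlays/pl
run sh -c 'echo pl.dtbo > /sys/kernel/config/device-tree/overlays/pl/path'
```

Set DRY_RUN=0 on a running target (as root) to execute the commands for real.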

AMD EDF 25.05 - AMD Vivado™ Design Suite 2025.2

  • ZCU104 - The base design does not include a default PL payload

    • This is scheduled to be updated in the next release

    • The VCU example (zcu104_pl_vcu_extensible) can be used as an example payload

A top-level Makefile is provided for easy build orchestration. Each of the above-mentioned designs can easily be built with a single make command. The make command builds a series of artifacts:

  1. The hardware design which typically ends with exporting a Vivado™ Design Suite XSA.

  2. The system device tree (SDT) artifacts directory, which is needed for handoff to the software (Yocto Project™) build flow

  3. A firmware (FW) bundle which is needed to dynamically load the PL designs from Linux

While the Makefile is convenient, we will also provide step-by-step instructions for running the respective commands by hand.

The following design examples are compatible with the AMD Vivado™ Design Suite 2025.2 tools version.

Vivado base design and base platform

The following tutorial shows how to generate the Vivado™ Design Suite base design XSA from a Tcl based project and how to generate the system device tree handoff artifacts that are needed to set up the Yocto build.

Refer to EDF Hardware Design Repository Overview for setting up the hardware design repo and for setting up the tools.

From the top-level directory, we use make to build the base design project from Tcl. This step takes some time because implementation is run to generate the routed DCP, which is used later by pr_verify to check design compatibility for segmented configuration when creating alternative compatible PL payloads.

Additionally, the build reads a NoC solution file from the Vivado install area for the CED to re-seed the NoC solution, and checks the unique ID of the pre-built Vivado CED boot.pdi against the newly generated boot.pdi to confirm whether the unique IDs match. This ensures compatibility between the pre-built boot firmware and custom designs created from the Vivado CED, so users can deploy the PL image directly to a system running the pre-built boot firmware and boot image.

Note: See the output directory for the .dcp file used to check design compatibility in the Segmented Configuration flow (pr_verify).
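As a sketch of how that compatibility check could be run by hand, the following writes a small Tcl script that invokes pr_verify on two routed checkpoints in Vivado batch mode. The checkpoint file names are placeholders, and the Vivado invocation is commented out since it requires a tools install; consult the output directory and your Vivado version's documentation for the actual files and options.

```shell
# Illustrative only: compare two routed checkpoints with pr_verify.
# base_routed.dcp / payload_routed.dcp are placeholder names.
cat > check_compat.tcl <<'EOF'
pr_verify -initial base_routed.dcp -additional payload_routed.dcp
EOF
# Requires a Vivado install on PATH; commented out here:
# vivado -mode batch -source check_compat.tcl
```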

The command below uses JOBS=8 to run 8 parallel compiles, with each compile spawning multiple threads; by default, the maximum number of threads is limited to the number of CPU cores detected. This requires significant RAM and processor resources.

For example, with JOBS=8 the number of processes = 8 x <number of cores>, with matching RAM requirements.

  • If you are running on a system with limited resources, it is recommended to set JOBS=1
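The resource math above can be sketched with a small helper (illustrative, not part of the repo) that reports the worst-case thread count for a given JOBS setting:

```shell
# Worst case: JOBS parallel compiles x one thread per detected CPU core.
jobs=${JOBS:-8}
cores=$(nproc 2>/dev/null || echo 1)
total_threads=$((jobs * cores))
echo "JOBS=$jobs on $cores cores -> up to $total_threads threads"
```

On an 8-core machine this reports up to 64 threads for JOBS=8, which is why JOBS=1 is safer on resource-limited systems.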

$ cd eval_board_examples/zcu104_pl_vcu_extensible/
$ make all JOBS=8

Note: The above example command is for a ZCU104 board. For other Zynq boards, use the appropriate board names.

The make command generates the following artifacts inside the base/ folder:

  1. Project directory hw_project - The folder contains the Vivado .xpr project file which can be used to open and edit the project in the Vivado Design Suite

  2. SDT artifacts directory hw_project_sdt - The SDT directory is used as handoff artifact to set up the Yocto Project™ disk image build

You can now open the generated Vivado Design Suite project to view the generated design:

  1. Open the generated xpr project file from the command line (See the Vivado Design Suite documentation for other ways to open the project)

$ cd hw_project/zcu104_pl_vcu_extensible/
$ vivado zcu104_pl_vcu_extensible.xpr &

 

Screenshot of the ZCU104 VCU Extensible platform block design

The output of the generated SDT artifacts directory looks as follows. You can see that the pl.bin file was generated for the deferred load flow:

  1. zcu104_pl_vcu_extensible.bit - This is the PL PDI used to load the PL. This file is one of the inputs to create the firmware bundle that loads the PL from the Linux user space or U-Boot, and it can be added to the rootfs build or copied directly to the running system. See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3250585728 for steps to create the firmware bundle.
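As background on how a .bit like this can be turned into a .bin loadable from Linux, the following sketch uses bootgen's -process_bitstream option with a minimal BIF file. The file names are placeholders and the bootgen invocation is commented out since it requires the AMD tools; check the Bootgen documentation for your tool version before relying on the exact flags.

```shell
# Minimal BIF describing the PL bitstream (names are placeholders).
cat > bitstream.bif <<'EOF'
all:
{
    [destination_device = pl] zcu104_pl_vcu_extensible.bit
}
EOF
# Convert .bit to .bin (requires the AMD tools on PATH):
# bootgen -arch zynqmp -image bitstream.bif -process_bitstream bin -w
```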

Note

https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3250585728/Getting+Started+-+Walkthrough+Examples#Custom-Hardware-Development expects:

  • SDTGen to have been run to generate a System Device Tree directory (containing the PL.PDI and hardware project derived Device Trees)

  • <system Device Tree Directory>/pl.dtsi (PL Device Tree) to have been manually updated to reflect PL content if needed

  • gen-machine-conf to have been run to populate the Yocto Project build directory
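Where the note above mentions manually updating pl.dtsi, such an edit typically adds or adjusts a PL peripheral node. The fragment below is a hypothetical example for an AXI GPIO instance, written out via a heredoc; the label, base address, and property values are assumptions, and the real values must be derived from your design's address map.

```shell
# Write a hypothetical PL device tree fragment for inspection.
# All node names, addresses, and widths below are illustrative only.
cat > pl-edit-example.dtsi <<'EOF'
&amba_pl {
    axi_gpio_0: gpio@a0000000 {
        compatible = "xlnx,axi-gpio-2.0", "xlnx,xps-gpio-1.00.a";
        reg = <0x0 0xa0000000 0x0 0x10000>;
        #gpio-cells = <2>;
        gpio-controller;
        xlnx,gpio-width = <8>;
    };
};
EOF
```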

See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3250586337

The Yocto Project build recipes (created by recipetool-create) then handle the file conversion, Device Tree Overlay and integration into the root fs as part of the Image build process

The firmware package could also be created manually, and/or copied directly to the running image for testing using SCP, tftp, or via SD Card.
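For quick testing, the SCP transfer mentioned above might look like the following. The target address, user name, and destination directory are illustrative assumptions; the command is echoed rather than executed so no board is needed to try the snippet.

```shell
# Placeholder target details for illustration only.
target_ip=192.168.0.10
target_user=petalinux
cmd="scp pl.bin pl.dtbo ${target_user}@${target_ip}:/lib/firmware/"
echo "$cmd"
```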

 

SDTGen is run as part of the build process in the Makefile. The contents of the System Device Tree Handoff Directory are shown below.

It contains a series of device tree files with system-top.dts being the top-level device tree.

Note: The folder structure below is for a ZCU104-based CED design. For other Zynq boards, the file names might vary.

$ tree .
.
├── extracted
│   └── zcu104_pl_vcu_extensible_wrapper
├── include
│   └── dt-bindings
│       ├── clock
│       │   ├── xlnx-versal-clk.h
│       │   ├── xlnx-versal-net-clk.h
│       │   └── xlnx-zynqmp-clk.h
│       ├── dma
│       │   └── xlnx-zynqmp-dpdma.h
│       ├── gpio
│       │   └── gpio.h
│       ├── input
│       │   ├── input.h
│       │   └── linux-event-codes.h
│       ├── interrupt-controller
│       │   ├── arm-gic.h
│       │   └── irq.h
│       ├── net
│       │   ├── mscc-phy-vsc8531.h
│       │   └── ti-dp83867.h
│       ├── phy
│       │   └── phy.h
│       ├── pinctrl
│       │   └── pinctrl-zynqmp.h
│       ├── power
│       │   ├── xlnx-versal-net-power.h
│       │   ├── xlnx-versal-power.h
│       │   ├── xlnx-versal-regnode.h
│       │   └── xlnx-zynqmp-power.h
│       └── reset
│           ├── xlnx-versal-net-resets.h
│           ├── xlnx-versal-resets.h
│           └── xlnx-zynqmp-resets.h
├── pcw.dtsi
├── pl.dtsi
├── psu_init.c
├── psu_init.h
├── psu_init.html
├── psu_init.tcl
├── psu_init_gpl.c
├── psu_init_gpl.h
├── system-top.dts
├── zcu104-reva.dtsi
├── zcu104_pl_vcu_extensible.bit
├── zcu104_pl_vcu_extensible.mmi
├── zynqmp-clk-ccf.dtsi
└── zynqmp.dtsi

Generating the SDT handoff directory by hand using the SDTGen tool

The following tutorial shows the steps for generating the SDT handoff directory by hand using the SDTGen tool:

bash-4.4$ sdtgen
sdtgen% set_dt_param -debug enable
sdtgen% set_dt_param -zocl enable
sdtgen% set_dt_param -dir ./hw_project_sdt2
sdtgen% set_dt_param -xsa hw_project/zcu104_pl_vcu_extensible/outputs/zcu104_pl_vcu_extensible.xsa
sdtgen% set_dt_param -board_dts zcu104-reva
sdtgen% generate_sdt

You can inspect the generated output in the hw_project_sdt2 directory which should match the output of the hw_project_sdt directory generated as part of the make command.
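One way to confirm that the two outputs match is a recursive diff. The snippet below demonstrates the check on two temporary directories so it is self-contained; in practice, substitute hw_project_sdt and hw_project_sdt2 for the stand-in directories.

```shell
# Self-contained demo of the directory comparison (temp dirs stand in for
# hw_project_sdt and hw_project_sdt2).
a=$(mktemp -d); b=$(mktemp -d)
echo demo > "$a/system-top.dts"
echo demo > "$b/system-top.dts"
if diff -qr "$a" "$b" >/dev/null; then result=match; else result=differ; fi
echo "SDT directories $result"
rm -rf "$a" "$b"
```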

Generating a new Yocto Project Machine Configuration based on the Vivado Design Suite Project (xsa)

This tutorial shows how to generate a new machine configuration based on a Vivado project, as would be done for a custom board, using the XSA, the SDT handoff directory, and the gen-machine-conf tool.

When importing the sdtgen output (System Device Tree) into gen-machine-conf, make sure to start a new shell and set up the Yocto/gen-machine-conf environment separately. Never mix environments between Vivado, Yocto, and PetaLinux; doing so can lead to unpredictable behavior.

Note

The previous tutorial (https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3268149250 ) generated a disk image using a prebuilt Yocto Project™ machine configuration based on cached SDT artifacts.

  1. Set up the build environment and Yocto Project™ prerequisites by following https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3268149250/Operating+System+Integration+and+Development+AMD+ZynqMP+device+portfolio#Yocto-Project%E2%84%A2-build-setup-instructions-for-EDF

  2. Create a new BSP layer for your custom XSA and add this new layer to the build as shown below.

    bitbake-layers-create-add-layer

    $ bitbake-layers create-layer <path-to-layer>/sources/meta-custom-bsp
    $ bitbake-layers add-layer <path-to-layer>/sources/meta-custom-bsp
  3. The content of the newly created custom BSP layer is shown below. You can remove the recipes-example directory, as these recipes are only for reference and are not used in EDF builds.

    meta-custom-bsp

    $ tree ../sources/meta-custom-bsp/
    ../sources/meta-custom-bsp/
    ├── conf
    │   └── layer.conf
    ├── COPYING.MIT
    ├── README
    └── recipes-example
        └── example
            └── example_0.1.bb
  4. Run the gen-machine-conf help commands for detailed usage - this step is optional. For more information, refer to gen-machine-conf/docs/options.rst at main · Xilinx/gen-machine-conf

    gen-machine-conf-help:

    $ gen-machine-conf -h
  5. Generate a Yocto Project™ machine configuration using the gen-machine-conf tool as shown below. The primary input to gen-machine-conf is the SDT handoff directory from the Vivado Design Suite base design; see Vivado base design and base platform for steps on how to generate the SDT directory. Internally, gen-machine-conf calls the lopper tool to prune the dts files.

  1. Make sure you run gen-machine-conf from inside the Yocto build/ directory, so that the generated configuration files end up in the right place. Alternatively, you can place the generated machine configuration files into the conf directory of an existing or new meta-layer.

  2. The <machine-name> should be unique. If you are using a custom XSA (a non-CED-based design) on an evaluation board, use the --template option with -O <pre-built eval board machine name> to obtain the evaluation board machine override. Make sure the load address values are adjusted to match the custom design.

  3. If you have previously executed gen-machine-conf with one set of SDT artifacts, it is recommended to remove the hw_description and outputs directories before running gen-machine-conf with a different set of SDT artifacts.

ZynqMP:

zynqmp-sdt-machine-file-example

# Using a custom xsa for eval boards. In this example we are using the ZCU104 eval board.

# Using the --template option
$ gen-machine-conf parse-sdt --template sources/meta-amd-adaptive-socs/meta-amd-adaptive-socs-bsp/conf/machineyaml/zynqmp-zcu104-sdt-full.yaml \
    --hw-description <path-to-sdt-handoff-dir> -O zynqmp-zcu104-sdt-full

(OR)

# Using the full command
$ gen-machine-conf parse-sdt --hw-description <path-to-sdt-handoff-dir> \
    -c <path-to-layer>/sources/meta-custom-bsp/conf -g full --machine-name <machine-name> \
    --add-config CONFIG_YOCTO_BBMC_CORTEXR5_1_FREERTOS=y \
    --add-config CONFIG_YOCTO_BBMC_CORTEXR5_0_BAREMETAL=y \
    --domain-file ../sources/meta-xilinx/meta-xilinx-standalone-sdt/conf/domainyaml/openamp-overlay-zynqmp.yaml \
    -O zynqmp-zcu104-sdt-full
  1. The newly generated machine configuration is placed in the build directory, and it supersedes the machine configuration available in existing Yocto Project meta layers. If you generate a new machine for a custom design, it is best practice to use a unique machine-name that does not conflict with any of the existing machines available in our meta layers.

  2. Build Linux and BOOT.bin images the same way as described in https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3268149250/Operating+System+Integration+and+Development+AMD+ZynqMP+device+portfolio#EDF-Linux%C2%AE-disk-image-build-using-a-prebuilt-Yocto-Project-machine-and-pre-built-Vivado-artifacts-(BIT)---SDCARD-boot


Trademarks

Yocto Project and all related marks and logos are trademarks of The Linux Foundation. This website is not, in any way, endorsed by the Yocto Project or The Linux Foundation.

Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.

© 2025 Advanced Micro Devices, Inc.