Frequently Asked Questions on AMD EDF


Does AMD EDF replace the PetaLinux flow and existing AMD Yocto Project™ flows?

Yes, AMD EDF is an evolution of AMD Yocto Project™ flows, and replaces the PetaLinux tooling and flows.

Which device families will be supported by AMD EDF?

All device families are supported, except AMD MicroBlaze™ processors (classic).

Will the hardware platform used for EDF BSPs be available?

Yes, the hardware platforms are available for download and are implemented as an AMD Vivado™ Design Suite Configurable Example Design with additional Tcl-scripted additions. See the Common Specifications section of the EDF documentation for more details.

What is the difference between EDF-based BSPs and PetaLinux BSPs; can the boot components generated using these be interchanged?

An EDF Linux® BSP provides similar functionality to a PetaLinux BSP – boot firmware, bootloaders, Linux OS, and Linux root file system. However, the kernel configuration and rootfs are richer. AMD recommends regenerating the full stack within EDF to ensure compatibility.

Can existing bootloader, kernel patches, and rootfs customizations from PetaLinux be re-used?

Existing patches can be re-used if they are compatible with the versions of the kernel and components. RootFS customizations can be migrated using the Yocto Project methods for extending the rootfs configuration. See the Development Flows section of the EDF documentation for more information.

How will FPGA integration (e.g., bitstream loading) be managed in Yocto?

Existing standard flows for FPGA bitstream loading are supported in EDF; the FPGA Util and DFX Manager utilities are included in the default EDF Linux® disk image.

How does EDF handle SDK generation for application development?

An application SDK is released with each EDF release; however, it is recommended to generate your own SDK from within the EDF Yocto Project environment to ensure the SDK is aligned with any user customizations. See the Operating System Integration sections of the EDF documentation for more information.
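For the self-generated SDK path, the standard Yocto Project mechanism is the populate_sdk task. A minimal sketch (the image recipe name is a placeholder — use whichever image your project builds):

```shell
# Generate a cross-development SDK that matches your image and its customizations
bitbake <your-image-recipe> -c populate_sdk

# The self-extracting SDK installer is deployed under:
#   ${TMPDIR}/deploy/sdk/
```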

How can I integrate Vivado Design Suite generated artifacts (bitstream, DTB) into EDF Yocto Project™ based builds?

Existing flows are supported, with build recipe templates and helper functions available for creating firmware bundles for integration into the EDF Yocto Project based images,  or can also be created manually. For more information, see the Hardware Application development, Application Deployment, and Segmented Configuration or Dynamic Function eXchange sections of the EDF documentation.

Which PetaLinux features are not available in the vanilla Yocto Project?

AMD device-specific functionality and drivers may not be available in vanilla Yocto and the upstream Linux Kernel, but are included in AMD EDF via custom layers (similar to PetaLinux). AMD is committed to upstreaming, but there is generally a delay between initial release of a feature and its availably in upstream repos.

How can I migrate PetaLinux rootfs customizations (e.g., init scripts) into my EDF build?

See the Editing the RootFS Configuration section in the EDF or Yocto Project documentation.
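As a minimal sketch of the standard Yocto Project rootfs-extension mechanism (the package names here are illustrative), packages and features can be appended from conf/local.conf:

```conf
# conf/local.conf -- add packages to every image built in this project
IMAGE_INSTALL:append = " my-startup-scripts htop"

# Enable extra image features (e.g., for development builds)
EXTRA_IMAGE_FEATURES += "debug-tweaks ssh-server-dropbear"
```

For per-image changes, the same variables can instead be set in a .bbappend for the image recipe.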

Are there Yocto Project layers equivalent to `meta-petalinux` for FPGA workflows?

Yes, meta-amd-edf and other custom meta-layers provide this functionality. These are all included in the AMD EDF Yocto Project environment:

$ repo init -u https://github.com/Xilinx/yocto-manifests -b refs/tags/<AMD-EDF release tag> -m default-edf.xml

See the Downloads and Release Notes, and Operating System Integration sections of the EDF documentation for more information.

What’s the recommended migration path for existing products and projects?

See the Migrating a Project from PetaLinux to EDF section of the EDF documentation for more information.

What is the recommended starting point for new users who want to use EDF? 

We recommend starting with the Getting Started – Walk-through Examples and Development Flows sections of the EDF documentation, with Common, Device, and board-specific information available in the relevant sections of the EDF documentation.

How is the lifecycle and release of EDF managed, and how does it align to Vivado Design Suite releases?

AMD EDF currently releases on the same timeline as Vivado Design Suite releases, but as a separate component. EDF is validated against the Vivado Design Suite release, with pre-compiled artifacts updated. EDF release naming uses a YY.MM.<update> scheme. For example, EDF 25.05 aligns with the Vivado Design Suite 2025.1 release, and EDF 25.05.1 aligns with the Vivado Design Suite 2025.1.1 release.

How do I include external CDO files in the final image using the Bootgen utility?

Overview

This document outlines the steps to generate a Programmable Device Image (PDI) and integrate external Configuration Data Object (CDO) files into the final boot image for Versal and Versal-2ve-2vm platforms using the Bootgen tool.
CDO files are typically used for configuring hardware or system settings during the boot process on Versal and Versal-2ve-2vm platforms.

Prerequisites

  • Bootgen utility installed (part of Vivado tools).

  • Install the Vivado tools and then source the settings script:

source <path-to-Vivado>/settings64.sh
  • Access to:

    • External CDO file(s)

ls hw_project/vek385_revb_base/vek385_revb_base.runs/impl_1/gen_files/
ai_engine_data.cdo             cdo_gen.tcl     mmi_config.cdo
bd_1416_ps11_0_0_fpd_data.cdo  fpd_data.cdo    plm.elf
bd_1416_ps11_0_0_lpd_data.cdo  lpd_data.cdo    pmc_data.cdo
bd_1416_ps11_0_0_pmc_data.cdo  metadata.cdo    versal_comn_platform_ai_engine_0_0_ai_engine_data.cdo
  • A Linux shell or Windows command prompt.

Input Files

The required input files are located in the /hw_project/vek385_revb_base/vek385_revb_base.runs/impl_1/gen_files/ directory:

  1. CDO Files:

    • ai_engine_data.cdo: Configuration data for the AI Engine.

    • fpd_data.cdo: Configuration data for the Full Power Domain (FPD).

    • lpd_data.cdo: Configuration data for the Low Power Domain (LPD).

    • pmc_data.cdo: Configuration data for the Platform Management Controller (PMC).

    • metadata.cdo: Metadata for the platform.

    • Other CDO Files: Additional configuration files for specific hardware components.

  2. Bootgen Configuration File:

    • /hw_project/vek385_revb_base/vek385_revb_base.runs/impl_1/versal_comn_platform_wrapper_boot.bif: The Bootgen Input File (BIF) that defines the structure of the PDI.

Bootgen Command

The following command is used to generate the PDI:

bootgen -image versal_comn_platform_wrapper_boot.bif -arch versal -w -o base_nov12_aiecdo.pdi

Command Breakdown:

  • -image BIF_File.bif: Specifies the BIF file that defines the components to include in the PDI.

  • -arch versal/versal_2ve_2v: Specifies the architecture of the target platform (versal, versal_2ve_2v).

  • -w: Allows the output file to be overwritten if it already exists.

  • -o Output_file.pdi: Specifies the output file name for the generated PDI.

Steps to Generate the PDI

1. Prepare the Input Files

  • Ensure all required files (CDO files, PLM ELF file, etc.) are present in the gen_files/ directory.

2. Verify the Bootgen Configuration File (boot.bif)

  • Open the versal_comn_platform_wrapper_boot.bif file and ensure it correctly references the input files.

Example structure:

new_bif:
{
  id_code = 0x04db0093
  extended_id_code = 0x01
  id = 0x2
  boot_config {smap_width=32}
  image
  {
    name = pmc_subsys
    id = 0x1c000001
    partition
    {
      id = 0x01
      type = bootloader
      file = gen_files/plm.elf
    }
    partition
    {
      id = 0x09
      type = pmcdata, load = 0xf2000000
      file = gen_files/pmc_data.cdo
    }
  }
  image
  {
    name = lpd
    id = 0x4210002
    partition
    {
      id = 0x0C
      type = cdo
      file = gen_files/lpd_data.cdo
    }
  }
  image
  {
    name = fpd
    id = 0x420c003
    partition
    {
      id = 0x08
      type = cdo
      file = gen_files/fpd_data.cdo
    }
  }
  image
  {
    name = pl_cfi
    id = 0x18700000
    partition
    {
      id = 0x05
      type = cdo
      file = versal_comn_platform_wrapper_boot_markers.rnpi
    }
  }
  image
  {
    name = asufw
    id = 0x1C000002
    partition
    {
      id = 0x0F
      type = cdo
      file = static_files/asu_data.cdo
    }
    partition
    {
      id = 0x0B
      core = asu
      file = static_files/asufw.elf
    }
  }
  image
  {
    name = aie2_subsys
    id = 0x421c028
    partition
    {
      id = 0x07
      type = cdo
      file = gen_files/ai_engine_data.cdo
    }
  }
}

3. Run the Bootgen Command

  • Execute the bootgen command to generate the PDI:

bootgen -image versal_comn_platform_wrapper_boot.bif -arch versal -w -o base_nov12_aiecdo.pdi

****** Bootgen v2025.2
  **** Build date : Nov 12 2025-13:58:15
    ** Copyright 1986-2022 Xilinx, Inc. All Rights Reserved.
    ** Copyright 2022-2025 Advanced Micro Devices, Inc. All Rights Reserved.

[INFO]   : Bootimage generated successfully

4. Verify the Output

  • Check the generated PDI file (base_nov12_aiecdo.pdi) to ensure it includes all the required components.
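One way to inspect what actually went into the PDI is to have Bootgen read back the image headers (assuming your Bootgen version supports the -read option; the output format varies by version):

```shell
# Dump the boot header and image/partition header tables from the generated PDI
bootgen -arch versal -read base_nov12_aiecdo.pdi
```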

5. Test the PDI

  • Flash the PDI onto the target Versal/Versal_2ve_2v platform and verify its functionality.

Recommendations

  • File Placement: Always place the input files in a directory accessible by the bootgen tool.

  • Version Compatibility: Ensure the input files (CDO, PLM ELF, etc.) are compatible with the hardware and software versions used in your project.

  • Error Handling: If bootgen fails, check the paths and syntax in the boot.bif file.

What tooling will replace `petalinux-build` workflows? 

BitBake + gen-machine-conf + SDT, via the Software Hardware Exchange Loop (SHEL) flow. See:

  • Xilinx/gen-machine-conf on GitHub: parses hardware design information to generate a project-specific machine configuration

  • system-device-tree-xlnx/README.md in the Xilinx/system-device-tree-xlnx repository on GitHub

There are some adjustments in 2025.2, with gen-machine-conf installing YAML configuration files for builds:

Use a dedicated Yocto layer stack (for Xilinx targets) such as:

  • meta-xilinx

  • meta-xilinx-tools

  • meta-amd-adaptive-socs and meta-xilinx-restricted (board-specific layers)

Replace PetaLinux’s hardware flow with:

  • SDT/XSA integration via Yocto layers and device-tree fragments

  • FPGA Manager integration through kernel config and DTB fragments

Boot sequence managed by U-Boot in Yocto (instead of PetaLinux boot scripts)

Are there features in PetaLinux that are hard to replicate in Yocto?  

Yes, there are some PetaLinux features that are notably hard to replicate directly in Yocto Project. The main differences arise from tightly integrated hardware/design handoff, boot flow automation, and vendor-optimized tooling that PetaLinux provides out of the box.

Integrated hardware design and flow

  • What’s Different: PetaLinux ships with a tightly coupled Vivado export (XSA/SDT) and automated DTB handling, enabling a near-seamless path from hardware design to software. This flow is replicated using the SHEL flow (SDTGen, Lopper, gen-machine-conf) and direct cloning of the Yocto Project resources (e.g., using the repo tool).

  • Software Hardware Exchange Loop (SHEL) Flow

GUI-driven tooling and workflow

How do we validate FPGA-specific features after migration?   

Advised Approach:

  • Confirm bitstream (or PDI/CDO data) loads reliably at boot or on demand.

  • Verify DTB bindings and FPGA peripherals are visible to the kernel.

  • Ensure FPGA-enabled applications run correctly and meet performance targets.

  • Establish repeatable pass/fail criteria and a rollback plan if validation fails.


This tends to be designer preference, but here is a suggested, high-level test plan:

  • Test: Boot-time bitstream load

  1. Steps: Boot image; loader runs; verify the bitstream-loaded flag in logs.

  2. Pass criteria: bitstream load reported as successful within N seconds; FPGA peripheral accessible.

  • Test: Bitstream integrity

  1. Steps: Load bitstream; read version/hash from a known register or file.

  2. Pass criteria: hash matches expected value; version matches the intended design.

  • Test: FPGA peripheral accessibility

  1. Steps: Run a small kernel/user-space test that exercises a DMA or accelerator.

  2. Pass criteria: data transfers complete with expected results; no crashes.

  • Test: DTB correctness

  1. Steps: Inspect kernel logs for FPGA/DTB bindings; enumerate devices via /dev or sysfs.

  2. Pass criteria: all expected devices appear and are functional.
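As a sketch of automating the boot-time load check above, the state exposed by the Linux FPGA manager framework can be polled from a test script. This is a minimal Python example assuming the standard /sys/class/fpga_manager sysfs interface; the device index and timeout values are illustrative:

```python
import time
from pathlib import Path

# Standard Linux FPGA manager sysfs node; adjust the index for your platform.
FPGA_STATE = Path("/sys/class/fpga_manager/fpga0/state")

def fpga_ready(state: str) -> bool:
    """The FPGA manager framework reports 'operating' once a full
    bitstream has been programmed successfully."""
    return state.strip() == "operating"

def wait_for_fpga(timeout_s: float = 10.0, poll_s: float = 0.5) -> bool:
    """Poll the FPGA manager state until it reports 'operating' or we time out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if FPGA_STATE.exists() and fpga_ready(FPGA_STATE.read_text()):
            return True
        time.sleep(poll_s)
    return False
```

A check like this can be wired into a CI boot test to provide the repeatable pass/fail criterion recommended above.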

How do we handle Vivado handoff files in Yocto builds?    

  • Export hardware from Vivado

  1. Complete your Vivado design and export the hardware package as an .xsa file.

  2. How: In Vivado, use File > Export > Export Hardware or run the write_hw_platform Tcl command.

  3. What you get: An .xsa that contains the bitstream, hardware description, and metadata needed for software integration.

  • Generate SDT artifacts from the .xsa

  1. Use the .xsa to generate the System Device Tree (SDT) artifacts directory.

  2. Tool: sdtgen (System Device Tree generator) creates the kernel/bootloader-ready device-tree pieces.

  3. Result: A directory with the device tree (and related files) tailored to your hardware.

  • Create a Yocto machine configuration from SDT

  1. Produce a Yocto machine config from the SDT artifacts.

  2. Tool: gen-machine-conf transforms the SDT outputs into a reusable Yocto machine setup.

  3. Result: A machine configuration that reflects your specific hardware (SoC, peripherals, and bus topology).

  • Integrate SDT into the Yocto build


  1. Add the generated machine configuration to your Yocto project.

  2. Actions:
     Include the new machine in bblayers.conf (as needed).
     Create or adjust layers/recipes to incorporate the SDT-derived device-tree and hardware specifics.

  3. Ensure the configuration is consistent with your kernel, bootloader, and FPGA flow.

Build the Yocto image

  • Build with BitBake using the new machine configuration.

  • Command: bitbake [your-machine-name]

  • Outcome: A kernel, bootloader, and root filesystem that integrate the Vivado hardware description from the .xsa.

For more information, see the Custom Hardware Development sections for the AMD Versal™ device portfolio and the AMD ZynqMP™ device portfolio.
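The handoff steps above can be sketched end to end as follows. This is indicative only: sdtgen runs inside the XSCT/Tcl environment, and the exact gen-machineconf arguments vary by release, so treat the option names as assumptions and consult the gen-machine-conf README for the current syntax:

```shell
# 1. In the Vivado Tcl console, export the hardware (XSA including bitstream):
#      write_hw_platform -fixed -include_bit my_design.xsa

# 2. In xsct, generate the System Device Tree artifacts (names illustrative):
#      sdtgen set_dt_param -xsa my_design.xsa -dir ./my_sdt
#      sdtgen generate_sdt

# 3. Generate a Yocto machine configuration from the SDT output
#    (option names are an assumption -- check the gen-machine-conf README):
gen-machineconf parse-sdt --hw-description ./my_sdt

# 4. Build an image for the generated machine:
MACHINE=<generated-machine-name> bitbake core-image-minimal
```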

Will we need to maintain custom Yocto layers for Xilinx-specific features?    

Yes. Maintaining custom Yocto layers for AMD AECG-specific features – especially for board-specific BSPs, FPGA integration, and boot flows – is expected, and the flexibility this gives users is a key benefit because Yocto Project workflows are based on open-source infrastructure and tooling.

Base and augmentation layers

  • Start from Xilinx-provided layers (e.g., meta-xilinx, meta-xilinx-tools, and board-specific layers like meta-amd-adaptive-socs or meta-xilinx-restricted) as the foundation.

  • Create a dedicated board-layer (e.g., meta-yourboard) that contains machine config, custom kernel/boot patches, DT fragments, and FPGA/loaders. This keeps hardware-specific changes isolated and reusable.

What to put in your custom layer

  • Machine configurations and distro-feature tweaks (can be automated with gen-machine-conf)

  • Patches or defconfig changes for the kernel and U-Boot

  • Device-tree fragments and any FPGA manager/bitstream handling

  • Bitstream/DTB packaging recipes and the FPGA loader (kernel module or systemd service)

  • Any startup scripts, userspace utilities, or SDK customizations tied to your hardware
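A board layer of the shape described above needs only a minimal conf/layer.conf to be picked up by BitBake. A sketch (layer name and priority are placeholders; LAYERSERIES_COMPAT must match the Yocto release series you build against):

```conf
# meta-yourboard/conf/layer.conf
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "yourboard"
BBFILE_PATTERN_yourboard = "^${LAYERDIR}/"
BBFILE_PRIORITY_yourboard = "6"

# Set to the release series you build against (placeholder shown)
LAYERSERIES_COMPAT_yourboard = "styhead"
```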

Maintenance approach

  • Align your custom layer with Yocto release cycles and the versions of the AMD AECG layers you depend on

  • Version your hardware changes (bitstreams, XSA/SDT, DT bindings) and tag them with the target MACHINE

  • Use CI to validate image builds across updates to kernel, bootloaders, and FPGA-related recipes

Risks and mitigations

  • Layer compatibility: pin to compatible versions of meta-xilinx and the kernel to avoid conflicts

  • Rebase effort: plan for periodic rebases when OEM layers update, and automate patch application

  • Licensing: track licenses for vendor layers and ensure redistribution compliance in your layer

Are there community or vendor-supported Yocto layers for Xilinx?     

Yes. There are both vendor-supported and community-supported Yocto layers for Xilinx targets.

Vendor-supported layers

  • meta-xilinx: Core Xilinx BSPs, kernels, bootloaders, and basic FPGA/DT work for Zynq/ZynqMP/Versal.

  • meta-xilinx-tools: Build tooling and utilities used with Xilinx layers.

  • Board-specific layers (e.g., meta-amd-adaptive-socs and meta-xilinx-restricted): Targeted support for particular boards and SoCs, often with aligned boot flows and DT changes.

  • These layers are typically maintained by Xilinx or official partners and come with vendor support options.

Community-supported layers

  • Often built on top of meta-xilinx and provide additional recipes, patches, or boards not covered by the official layers.

  • Maintained by the open-source community or third-party contributors; update cadence can vary.

For FPGA communication over PCIe, what would you recommend for a user-space driver?

The recommendation on the device driver largely depends on the system architecture and the kind of DMA (if any) that is being used on the FPGA side.

Assuming a system topology where the x86 is the host and the FPGA is the target (or “card” side), and assuming a higher performance data path where a DMA is required, the FPGA DMA type is typically either QDMA (hard or soft) or XDMA (“DMA/Bridge Subsystem for PCIe”).

Some further info and the associated host-side drivers for these DMA IPs:

QDMA IP (hard or soft)

  • Linux kernel driver (KMOD + user utils) for interrupt-driven, standard kernel flow, sysfs control, etc. Good general default. See QDMA drivers

  • DPDK Poll-Mode Driver (PMD) for user-space, polled, very-high-throughput/low-latency data paths that is compatible with the Data Plane Development Kit (DPDK); uses UIO/VFIO underneath. Choose this when you want kernel-bypass style dataplane performance. QDMA DPDK Driver Info

  • Notes: QDMA supports multi-queue, MSI-X, and SR-IOV (PF/VF queue ownership & mailbox). If you need virtualization/tenant isolation, confirm SR-IOV is enabled in the IP and match with appropriate PF/VF drivers. See SR-IOV Support in QDMA

XDMA IP (“DMA/Bridge Subsystem for PCIe”)

  • XDMA Linux kernel driver (character-device interface) from AMD’s dma_ip_drivers repo; historically the common choice, but XDMA is becoming less common, particularly on Versal (where many devices integrate a hardened QDMA, or designs choose soft QDMA for performance reasons). For XDMA, see AMD PG195.

  • Upstreamed Linux XDMA driver (dmaengine) is now in mainline kernels (no char-dev ABI; uses the dmaengine framework). Choose this if you want an upstream-friendly path and you’re on a new enough kernel. Linux Kernel Mailing List citation

In recent technologies like Versal, QDMA has notable advantages in areas like hardened support (e.g., via the CPM5/QDMA Versal hard block), robust streaming and memory-mapped support with good Linux driver stability (e.g., AXI-MM), recent performance characterization with DPDK and generally more complete documentation.

For simpler or lower-performance PCIe applications that might not require a DMA at all, or require only simple DMA usage that is not QDMA or XDMA, such as in control plane applications, there are some alternative approaches.

For control path, UIO is often the best choice for simple devices (e.g., memory-mapped I/O, polling-based communication).

Advanced features like DMA, interrupt handling, and complex synchronization are sometimes harder or impossible to implement in user space, and therefore a custom Linux kernel driver should be considered. One example where one might implement this is a PCIe-based driver for an I/O expansion FPGA, where multiple I2C or SPI devices need to be controlled on the target FPGA side over PCIe. In these cases, the FPGA-side Linux device drivers can often serve as a starting point for integrating the required functionality into a host-side PCIe driver.
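For the simple UIO control-path case above, register access from user space is typically just mmap() on the UIO device node plus word-sized reads and writes. A minimal Python sketch (the /dev/uio0 path and register offsets are illustrative; a real design would use the offsets from its own address map):

```python
import mmap
import os
import struct

MAP_SIZE = 0x1000  # one page of the device's register window

def open_uio(path="/dev/uio0", length=MAP_SIZE):
    """Map the first memory region exposed by a UIO device."""
    fd = os.open(path, os.O_RDWR | os.O_SYNC)
    return mmap.mmap(fd, length, mmap.MAP_SHARED,
                     mmap.PROT_READ | mmap.PROT_WRITE, offset=0)

def read_reg(mm, offset):
    """Read one little-endian 32-bit register from the mapped window."""
    (value,) = struct.unpack_from("<I", mm, offset)
    return value

def write_reg(mm, offset, value):
    """Write one little-endian 32-bit register in the mapped window."""
    struct.pack_into("<I", mm, offset, value)
```

These helpers are enough for polled command/status registers; for DMA or interrupt-driven flows, a kernel driver (or VFIO) remains the better fit, as noted above.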

Any licensing or support implications?  

  • When using Yocto, there are generally no licensing or support implications, as Yocto is an open-source project. It is non-proprietary, meaning it can be freely used without concerns about licensing restrictions.

  • For installation tooling, refer to the EDF release notes, which provide guidance on setting up the necessary tools. You can find more information and download resources for embedded design tools at the following link: Downloads

  • Additionally, AMD supports the automated generation of Software Bill of Materials (SBoM) and CVE (Common Vulnerabilities and Exposures) report creation. This is achieved using standard community tooling that integrates seamlessly with the Yocto Project.

What are the known migration myths?  

This is a big change from before. Here are the top three common myths and how to mitigate them:

Format:

  1. Myth: Shifting from menuconfig-driven PetaLinux workflows to command-line, recipe-driven Yocto workflows is harder.

  2. Mitigation: The Yocto Project is an industry standard with a wealth of information available – from the Yocto Mega-Manual on the Yocto Project documentation page to YouTube videos and many industry consultants. We recommend investing in Yocto training, creating starter templates that can be reused, and referencing the documentation mapping PetaLinux concepts to Yocto equivalents.

Debug:

  1. Myth: Tracing boot issues, bitstream loading, and kernel-device-tree interactions can be harder without PetaLinux tooling.

  2. Mitigation: Yocto Project provides a wealth of automation and debug tooling. Many debug or loading features that PetaLinux provided were actually wrappers around Yocto Project equivalents.

Build System Complexity:

  1. Myth: Learning and harmonizing BitBake terminology, layer management, and patch workflows can slow initial adoption.

  2. Mitigation: Native Yocto Project infrastructure uses the same basic concepts as PetaLinux (meta-layers, recipes, etc.) because PetaLinux was a vendor-specific implementation of core Yocto Project tooling. For users new to native Yocto Project, we recommend starting with a minimal, board-specific layer (meta-yourboard) on top of meta-xilinx, with clear recipes for bitstream, DTB, and loader, and gradually adding complexity. Additionally, we highly recommend using the resources available on the AMD wiki for translation between PetaLinux commands/concepts and the native Yocto Project equivalents.

How will FPGA integration (e.g., bitstream loading) be managed in Yocto?  

  • In Yocto Project, bitstream (or PDI/CDO content for Versal) data are included in the build area and can optionally be automatically included in the root filesystem.

  • Loading PL configuration data at runtime can be automated using the dfx-mgr tool. To use this tool, include it in your rootFS by setting the appropriate build configuration options. This tool automates loading and tracking PL payloads with associated device tree overlay data.

  • Managing PL content via Yocto Project keeps the bitstream lifecycle reproducible (via recipes, patches, and a dedicated service) and works across boards by adapting the loader to the target FPGA interface (e.g., Xilinx/Intel FPGA managers).

  • For segmented configurations:
    If users have a segmented configuration they wish to include in their rootfs, they must follow the instructions in meta-xilinx/docs/README.dfx.user.dts.md (on the rel-v2025.1 branch of Xilinx/meta-xilinx). This is a somewhat complex procedure, which is documented in detail in that README and the associated wiki pages.

How do we migrate PetaLinux rootfs customizations (e.g., init scripts)? 

systemd service recipe example (meta-yourboard/recipes-core/systemd/my-service.bb):

SUMMARY = "Startup task for my board"
DESCRIPTION = "Runs at boot to initialize custom services"
AUTHOR = "Your Team"
LICENSE = "CLOSED"

# The service unit must also be fetched, and the recipe must inherit systemd
SRC_URI = "file://my-startup-script.sh \
           file://my-service.service"

S = "${WORKDIR}"

inherit systemd

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${S}/my-startup-script.sh ${D}${bindir}/my-startup-script.sh
    install -d ${D}${systemd_unitdir}/system
    install -m 0644 ${S}/my-service.service ${D}${systemd_unitdir}/system/
}

SYSTEMD_SERVICE:${PN} = "my-service.service"
SYSTEMD_AUTO_ENABLE = "1"

Startup script example (meta-yourboard/recipes-core/systemd/files/my-startup-script.sh):

#!/bin/sh
echo "Initializing custom startup tasks..."
# put your init tasks here
exit 0

For more information, see: https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516/Petalinux+to+EDF+Migration+guide#Yocto-Example.3

Are there Yocto equivalents for PetaLinux’s `petalinux-config` options?  

Yes. In Yocto you achieve configuration control equivalent to PetaLinux’s petalinux-config by using Yocto’s project-level and target-specific configuration mechanisms.

  • Global and machine configuration
    MACHINE, DISTRO, and DISTRO_FEATURES in conf/machine/*.conf and conf/local.conf to select target hardware, features (e.g., systemd, sysvinit, initscripts), and image behavior.
    Example: MACHINE = "versal-yocto"; DISTRO_FEATURES:append = " systemd" to enable systemd.

  • Layer and feature management
    bblayers.conf to enable/disable layers (e.g., meta-xilinx, meta-versal, meta-yourboard).
    IMAGE_FEATURES to toggle features like ssh, package-management, debug-tweaks, etc.

  • Kernel and bootloader customization
    Use recipes-kernel to apply Kconfig options (bitbake -c menuconfig virtual/kernel) or to patch defconfig/kconfig fragments.
    U-Boot options via the same approach (bitbake -c menuconfig u-boot) or patching.
    The “make menuconfig” option inside software repositories that support ncurses-based configuration management (e.g., Linux and U-Boot)

  • Device tree and FPGA flow
    Device-tree fragments (.dtsi) added via a kernel recipe or a dedicated dtb-fragment/*.bbappend file.
    FPGA manager/bitstream integration via patches or dedicated recipes (bitstream.bb, loader.bb) in a custom layer.

For more information, see: https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516/Petalinux+to+EDF+Migration+guide#Configuring and https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516/Petalinux+to+EDF+Migration+guide#Customizing-Device-Trees

Any official migration guides or meta-xilinx documentation? 

See: https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516

Why is the boot.scr file no longer produced by default for Versal and Versal2 boards, and what key changes does EFI introduce to the boot process after the transition from Distroboot to EFI boot?

This document is relevant only to the 2025.2 release and later as the UEFI boot flow in 2025.1 was based on the Distroboot flow.

AMD Embedded Development Framework (EDF) is intended to be compliant with the Arm SystemReady specification, which mandates the use of UEFI boot and a UEFI-based Capsule Update mechanism. As u-boot supports UEFI boot (or a subset of it sufficient for these purposes), the u-boot configuration has been updated accordingly.

This document outlines the u-boot to Linux kernel boot process using UEFI.

Transition from Distro Boot (boot.scr) to EFI Boot Flow in EDF:

Historically, Xilinx/AMD embedded boards have booted using u-boot’s Distro Boot mechanism. This is implemented by means of a set of functions and variables in the u-boot environment which cycle through a number of boot options searching for a valid boot method. In practice, this usually means finding and executing a boot script (boot.scr) on a boot device. The boot script is then responsible for loading components and running the `booti …` command to hand control to the kernel.

With the move to UEFI boot, Distro Boot has been disabled – it is no longer the default boot command. In EDF images, boot scripts are no longer installed in the /boot partition and u-boot will not search for them automatically. The Distro Boot functionality (functions and variables) is still available in the environment but would need to be executed manually from the u-boot command line.

EFI System Partition (ESP) and UEFI environment:

Per the UEFI spec, u-boot expects bootable media to contain a dedicated EFI System Partition (ESP) which must be the first partition on the disk, must be formatted with fat32 (vfat), and must have a specific partition type in order to be detected correctly. Files stored in the ESP must also conform to a pre-defined directory structure and file naming convention. In practice, the ESP is used in much the same way the /boot partition was used on Linux systems and contains all binaries and configuration files required for booting, such as the kernel Image, EFI executables, Xen config files and executables, and second-stage bootloader (systemd-boot/GRUB) executables and config files.

u-boot maintains a UEFI-specific environment, separate from its own environment, to store UEFI-specific variables and values. You can print the UEFI environment from the u-boot console using print -e, or manipulate it with commands such as eficonfig and efidebug. The UEFI environment is not persisted to OSPI as with u-boot’s own environment but is instead saved as a file, named ubootefi.var, to the active ESP.

u-boot will use the first ESP it finds when enumerating devices, and there is no way to influence which device it will find first, so the behaviour is undefined when multiple devices are in use.

.
|-- EFI
|   `-- BOOT
|       `-- bootaa64.efi
|-- Image
|-- loader
|   |-- entries
|   |   |-- edf-linux.conf
|   |   `-- edf-xen.conf
|   |-- loader.conf
|   `-- random-seed
|-- ubootefi.var
|-- xen.cfg
`-- xen.efi

u-boot boot device enumeration and bootmenu

As defined in the UEFI spec, u-boot will enumerate all available boot media and dynamically generate UEFI boot entries to match these devices. These entries are labeled with a specific naming convention; mmc 0, mmc 1, usb 0, usb 1, etc.

Versal> efidebug boot dump
Boot0000:
  attributes: A-- (0x00000001)
  label: mmc 0
  file_path: /VenHw(e61d73b9-a384-4acc-aeab-82e828f3628b,0000000000000000)/VenHw(e61d73b9-a384-4acc-aeab-82e828f3628b,6d00000000000000)/SD(0)/SD(0)
  data:
    00000000: 4e ac 08 81 11 9f 59 4d 85 0e e2 1a 52 2c 59 b2  N.....YM....R,Y.

The enumerated devices are listed in the UEFI variable BootOrder in the order in which they were found. If the bootefi bootmgr command were used, u-boot would attempt to boot from each of the devices in order by attempting to execute an EFI binary of a specific name (<ESP>/EFI/BOOT/BOOTAA64.EFI) if it is present. This file could be a kernel, bootloader, shim or any other valid EFI binary; u-boot will simply execute it.

In EDF, the default u-boot boot command is set to bootmenu -e (for Versal or newer), which presents the user with a list of the auto-generated boot entries to choose from and has a short timeout. Depending on the number of devices detected and the specific hardware configuration of the board, the list shown will differ.

*** U-Boot Boot Menu ***

     mmc 0
     usb 0
     Exit

Press UP/DOWN to move, ENTER to select, ESC to quit

The assignment of devices to names should be deterministic so the same devices should be assigned the same names on subsequent boots but as this process is done dynamically on each boot, the mapping may change if devices are added or removed. In other words, plugging in a second bootable USB key is not guaranteed to be assigned usb 1 as it might be found first by u-boot.

systemd-boot

systemd-boot (formerly gummiboot) is a simple UEFI boot manager which has been subsumed into the systemd project and provides additional functionality which is not available to us in u-boot. The default EFI binary installed in the EDF ESP (discussed in the previous section) is systemd-boot, along with the required configuration files.

EDF Xen
EDF Linux
─────────────────────
Boot in 5 s.

systemd-boot conforms to a bootloader configuration file standard which requires a loader directory in the ESP, a top-level loader.conf, and a single config file per boot option, for example edf-linux.conf and edf-xen.conf.
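For illustration, entries of this form typically look like the following (the kernel path, titles, and the PARTUUID placeholder are illustrative, not values taken from a real EDF image):

```conf
# <ESP>/loader/loader.conf
default edf-linux.conf
timeout 5

# <ESP>/loader/entries/edf-linux.conf
title   EDF Linux
linux   /Image
options root=PARTUUID=<uuid-generated-at-build> rootwait
```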

When the selected option is chosen (or the timeout expires) the boot manager will execute the boot configuration in the selected configuration file.

The command-line arguments passed to the kernel or Xen use the partition UUID to identify the root filesystem to be booted. This UUID is randomly generated during the build and matches both the partition information on the disk and the bootloader configuration files. In other words, each image has a different UUID, allowing the bootloader to identify exactly which root filesystem it should use to boot, especially in the scenario where multiple devices are connected concurrently.
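The UUIDs recorded on the disk can be cross-checked against what the kernel actually received; the commands below are a sketch (device numbers assumed) using standard u-boot and Linux tooling:

```text
# From u-boot, list the partitions and their UUIDs:
versal> part list mmc 0

# From the booted Linux system, compare the partition UUIDs
# with the root=PARTUUID=... value the kernel was given:
amd-edf:~$ lsblk -o NAME,PARTUUID
amd-edf:~$ cat /proc/cmdline
```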

Flashing exactly the same image to multiple devices will likely result in undefined behaviour as the bootloader will boot whichever partition it finds first with the matching UUID.

As shown in the menu above, systemd-boot has configuration files for EDF Linux (default) which is a Yocto Linux distribution and EDF Xen which boots the Xen binary and configuration in dom0-less mode before booting the kernel. In practice, the root filesystem is common to both of these boot options with the required Xen tools and configurations installed to be used when booting Xen.

Capsule Update

The capsule is first installed into the ESP from the running Linux system, and is then applied by u-boot on the next boot:

amd-edf:~$ FWUPD_UEFI_ESP_PATH=/efi sudo -E fwupdtool install uefi-capsule-versal-vek280-sdt-seg-bootfw-firmware.cab

versal> efidebug boot add -b 6 "Capsule Update" mmc 0:1 EFI/UpdateCapsule
versal> efidebug boot next 6
versal> efidebug capsule disk-update

 



UEFI environment and ubootefi.var

The following is retained for informational value, but is largely only relevant to the 2025.1 release or when manual changes are being made to the u-boot UEFI environment.

The ubootefi.var file is read from the root of the EFI System Partition (ESP) and is used to persist changes to the UEFI environment. This file can contain a list of boot options and a boot order preference. u-boot will iterate through these boot settings, attempting to boot each in order until one succeeds. These boot options are usually named BootXXXX, where XXXX is a hexadecimal number, and the BootOrder setting defines the order in which they are attempted.

In the 2025.1 release, the Yocto build system pre-generated the ubootefi.var file and installed it in the ESP. As this is very difficult to maintain and update, and because of issues relating to how u-boot deals (or doesn't deal) with systems with multiple ESPs, this method of booting the system was removed in favour of systemd-boot (outlined above). However, the recipe used to build the ubootefi.var file is still available in the code base if required.

Setting U-Boot Arguments

a. Boot Script Method:

Automating the same steps using a boot.scr script:

Step 1: Prepare script source (boot.cmd)

Create a boot.cmd file using the attached content:
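If the attachment is not to hand, a generic distro-boot-style boot.cmd has the following shape; the image names, device numbers, and kernel arguments here are assumptions and must be adapted to the target:

```text
# boot.cmd - hypothetical sketch, not the referenced attachment
setenv bootargs root=/dev/mmcblk0p2 rw rootwait earlycon
load mmc 0:1 ${kernel_addr_r} Image
load mmc 0:1 ${fdt_addr_r} system.dtb
booti ${kernel_addr_r} - ${fdt_addr_r}
```

The booti command boots an uncompressed arm64 kernel Image, with the "-" indicating no initramfs is passed.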

Step 2: Generate boot.scr

mkimage -c none -A arm64 -T script -d boot.cmd boot.scr

Step 3: Transfer boot.scr via TFTP

Set the TFTP path in the systest terminal, then load boot.scr to address 0x40000000 via TFTP:

versal2> tftpb 0x40000000 boot.scr

Step 4: Execute the script

versal2> source 0x40000000


b. EFI Boot Entry via efidebug

Stop at the u-boot prompt:

Syntax:

versal> efidebug boot add -b <boot_id> "<boot_label>" <device_type> <devnum:part> <kernel_path> -s '<kernel_cmdline>'
versal> efidebug boot next <boot_id>
versal> boot
  • <boot_id> – any unused number

  • <boot_label> – descriptive text

  • mmc / usb – depending on your device

  • 0:1 – correct device:partition where ESP/root files live

  • /Image – actual file you want to execute

  • root=PARTUUID=... and other kernel arguments – matching your root filesystem and memory settings.

Example:

versal> efidebug boot add -b 10 "Direct Kernel" mmc 0:1 /Image -s "root=PARTUUID=45923539-e493-4c18-9a29-5638c8bcb456 ro rootwait earlycon uio_pdrv_genirq.of_id=generic-uio cma=1024M"
versal> efidebug boot order 10
versal> boot

From logs:

[ 0.000000] Kernel command line: root=PARTUUID=45923539-e493-4c18-9a29-5638c8bcb456 ro rootwait earlycon uio_pdrv_genirq.of_id=generic-uio cma=1024M


 

How do we configure EDF 2025.2 to use U-Boot distro boot (boot.scr) instead of UEFI boot?

Overview

By default, EDF 2025.2 uses UEFI boot with systemd-boot as the boot manager. However, you can configure it to use traditional U-Boot distro boot mechanism with boot.scr script. This FAQ provides step-by-step instructions for this migration.

Key Differences

Feature            | UEFI Boot (Default in 2025.2)            | Distro Boot
-------------------|------------------------------------------|----------------
Boot Manager       | systemd-boot                             | U-Boot direct
Boot Configuration | EFI boot entries in /efi/loader/entries/ | boot.scr script

© 2025 Advanced Micro Devices, Inc.