Frequently Asked Questions on AMD EDF
Frequently asked questions and answers
- 1 Does AMD EDF replace the PetaLinux flow and existing AMD Yocto Project™ flows?
- 2 Which device families will be supported by AMD EDF?
- 3 Will the hardware platform used for EDF BSPs be available?
- 4 What is the difference between EDF-based BSPs and PetaLinux BSPs; can the boot components generated using these be interchanged?
- 5 Can existing bootloader, kernel patches, and rootfs customizations from PetaLinux be re-used?
- 6 How will FPGA integration (e.g., bitstream loading) be managed in Yocto?
- 7 How does EDF handle SDK generation for application development?
- 8 How can I integrate Vivado Design Suite generated artifacts (bitstream, DTB) into EDF Yocto Project™ based builds?
- 9 Which PetaLinux features are not available in the vanilla Yocto Project?
- 10 How can I migrate PetaLinux rootfs customizations (e.g., init scripts) into my EDF build?
- 11 Are there Yocto Project layers equivalent to `meta-petalinux` for FPGA workflows?
- 12 What’s the recommended migration path for existing products and projects?
- 13 What is the recommended starting point for new users who want to use EDF?
- 14 How is the lifecycle and release of EDF managed, and how does it align to Vivado Design Suite releases?
- 15 How do I include external CDO files in the final image using the Bootgen utility?
- 16 What tooling will replace `petalinux-build` workflows?
- 17 Are there features in PetaLinux that are hard to replicate in Yocto?
- 18 How do we validate FPGA-specific features after migration?
- 19 How do we handle Vivado handoff files in Yocto builds?
- 20 Will we need to maintain custom Yocto layers for Xilinx-specific features?
- 21 Are there community or vendor-supported Yocto layers for Xilinx?
- 22 FPGA communication over PCIe, what would you recommend for user space driver?
- 23 Any licensing or support implications?
- 24 What are the known migration myths?
- 25 How will FPGA integration (e.g., bitstream loading) be managed in Yocto?
- 26 How do we migrate PetaLinux rootfs customizations (e.g., init scripts)?
- 27 Are there Yocto equivalents for PetaLinux’s `petalinux-config` options?
- 28 Any official migration guides or meta-xilinx documentation?
- 29 Why is the boot.scr file no longer produced by default for Versal and Versal2 boards, and what key changes does EFI introduce to the boot process after the transition from Distroboot to EFI boot?
- 30 How do we configure EDF 2025.2 to use U-Boot distro boot (boot.scr) instead of UEFI boot ?
- 31 When Should -g full or -g dfx Be Used for FPGA Device Tree Configuration?
- 32 When Is It Recommended to Use the --template Option for a Custom Hardware Machine?
- 33 Why do EDF prebuilt images fail to boot on some VCK190/VMK180 boards, which board variant (old or new SD level shifter) should be selected to boot EDF prebuilt images, and how should boards with the old SD level shifter be handled when using the new-level-shifter BSP?
- 34 What is the purpose, applicability, usage, customization, and R52 impact of the U‑Boot PREBOOT CMN workaround?
Does AMD EDF replace the PetaLinux flow and existing AMD Yocto Project™ flows?
Yes, AMD EDF is an evolution of AMD Yocto Project™ flows, and replaces the PetaLinux tooling and flows.
Which device families will be supported by AMD EDF?
All device families are supported, except AMD MicroBlaze™ processors (classic).
Will the hardware platform used for EDF BSPs be available?
Yes, the hardware platforms are available from the EDF downloads area and are implemented as an AMD Vivado™ Design Suite Configurable Example Design with additional Tcl-scripted additions. See the Common Specifications section of the EDF documentation for more details.
What is the difference between EDF-based BSPs and PetaLinux BSPs; can the boot components generated using these be interchanged?
EDF Linux® BSPs provide similar functionality to PetaLinux BSPs: boot firmware, bootloaders, the Linux OS, and a Linux root file system. However, the kernel configuration and rootfs are richer. AMD recommends regenerating the full stack within EDF to ensure compatibility.
Can existing bootloader, kernel patches, and rootfs customizations from PetaLinux be re-used?
Existing patches can be re-used if they are compatible with the version of the kernel and components in use. RootFS customizations can be migrated using the Yocto Project methods for extending the rootfs configuration. See the Development Flows section of the EDF documentation for more information.
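For example, a kernel patch carried over from a PetaLinux project can be applied through a bbappend in your own layer. This is a sketch: the layer, file, and patch names are illustrative, and `linux-xlnx` is assumed as the kernel recipe name:

```
# meta-yourboard/recipes-kernel/linux/linux-xlnx_%.bbappend
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://0001-my-driver-fix.patch"
```

The patch file itself goes in a `files/` directory next to the bbappend, much as it was stored in the PetaLinux project's `project-spec/meta-user` layer.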
How will FPGA integration (e.g., bitstream loading) be managed in Yocto?
Existing standard flows for FPGA bitstream loading are supported in EDF; the FPGA Util and DFX Manager utilities are included in the default EDF Linux® disk image.
How does EDF handle SDK generation for application development?
An application SDK is released with each EDF release, however, it is recommended to generate your own SDK from within the EDF Yocto Project environment to ensure the SDK is in line with any user customizations. See the Operating System Integration sections of the EDF Documentation for more information.
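As a sketch, generating a matching SDK from your own build environment uses the standard Yocto `populate_sdk` task (the image name here is illustrative; use the image you actually deploy):

```
bitbake core-image-minimal -c populate_sdk
# The self-extracting SDK installer is written under tmp/deploy/sdk/
```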
How can I integrate Vivado Design Suite generated artifacts (bitstream, DTB) into EDF Yocto Project™ based builds?
Existing flows are supported, with build recipe templates and helper functions available for creating firmware bundles for integration into the EDF Yocto Project based images, or can also be created manually. For more information, see the Hardware Application development, Application Deployment, and Segmented Configuration or Dynamic Function eXchange sections of the EDF documentation.
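A minimal firmware-bundle recipe might look like the following sketch, which stages a bitstream and device tree overlay under /lib/firmware, where runtime loaders such as fpgautil or dfx-mgr look for them. All file and recipe names are placeholders:

```
# meta-yourboard/recipes-bsp/fpga-firmware/fpga-firmware.bb
SUMMARY = "PL firmware bundle: bitstream plus device tree overlay"
LICENSE = "CLOSED"

SRC_URI = "file://design_1.bit \
           file://design_1.dtbo"

S = "${WORKDIR}"

do_install() {
    install -d ${D}${nonarch_base_libdir}/firmware/xilinx/design_1
    install -m 0644 ${S}/design_1.bit  ${D}${nonarch_base_libdir}/firmware/xilinx/design_1/
    install -m 0644 ${S}/design_1.dtbo ${D}${nonarch_base_libdir}/firmware/xilinx/design_1/
}

FILES:${PN} += "${nonarch_base_libdir}/firmware"
```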
Which PetaLinux features are not available in the vanilla Yocto Project?
AMD device-specific functionality and drivers may not be available in vanilla Yocto and the upstream Linux kernel, but are included in AMD EDF via custom layers (similar to PetaLinux). AMD is committed to upstreaming, but there is generally a delay between the initial release of a feature and its availability in upstream repos.
How can I migrate PetaLinux rootfs customizations (e.g., init scripts) into my EDF build?
See the Editing the RootFS Configuration section in the EDF or Yocto Project documentation.
Are there Yocto Project layers equivalent to `meta-petalinux` for FPGA workflows?
Yes, meta-amd-edf and other custom meta-layers provide this functionality. These are all included in the AMD EDF Yocto Project environment:

```
$ repo init -u https://github.com/Xilinx/yocto-manifests -b refs/tags/<AMD-EDF release tag> -m default-edf.xml
```

See the Downloads and Release Notes, and Operating System Integration sections of the EDF documentation for more information.
What’s the recommended migration path for existing products and projects?
See the Migrating a Project from PetaLinux to EDF section of the EDF documentation for more information.
What is the recommended starting point for new users who want to use EDF?
We recommend the Getting Started walk-through examples and Development Flows sections of the EDF documentation; common, device-specific, and board-specific material is available in the relevant sections of the EDF documentation.
How is the lifecycle and release of EDF managed, and how does it align to Vivado Design Suite releases?
AMD EDF currently releases on the same timeline as Vivado Design Suite releases, but as a separate component. EDF is validated against the Vivado Design Suite release, with pre-compiled artifacts updated. EDF release naming uses a YY.MM.<update> scheme. For example, EDF 25.05 aligns with the Vivado Design Suite 2025.1 release, and EDF 25.05.1 aligns with the Vivado Design Suite 2025.1.1 release.
How to include External CDO Files in Final Image Using Bootgen utility?
What tooling will replace `petalinux-build` workflows?
BitBake together with gen-machine-conf and the System Device Tree (SDT) flow, also known as the Software Hardware Exchange Loop (SHEL) flow. See system-device-tree-xlnx/README.md in the Xilinx/system-device-tree-xlnx repository.
There are some adjustments in 2025.2, with gen-machine-conf installing YAML configuration files for builds.
- Use a dedicated Yocto layer stack (for Xilinx targets) such as:
  - meta-xilinx
  - meta-xilinx-tools
  - meta-amd-adaptive-socs and meta-xilinx-restricted (board-specific layers)
- Replace PetaLinux's hardware flow with:
  - SDT/XSA integration via Yocto layers and device-tree fragments
  - FPGA Manager integration through kernel config and DTB fragments
  - Boot sequence managed by U-Boot in Yocto (instead of PetaLinux boot scripts)
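Put together, a build cycle that previously used petalinux-create/petalinux-build looks roughly like this sketch. The release tag and image name are placeholders, and `setupsdk` is assumed to be the environment script shipped in the Xilinx yocto-manifests checkout:

```
repo init -u https://github.com/Xilinx/yocto-manifests \
          -b refs/tags/<AMD-EDF release tag> -m default-edf.xml
repo sync
source setupsdk            # initializes the BitBake build environment
bitbake <your-image>       # replaces `petalinux-build`
```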
Are there features in PetaLinux that are hard to replicate in Yocto?
Yes, there are some PetaLinux features that are notably hard to replicate directly in Yocto Project. The main differences arise from tightly integrated hardware/design handoff, boot flow automation, and vendor-optimized tooling that PetaLinux provides out of the box.
Integrated hardware design and flow
What’s Different: PetaLinux ships with a tightly coupled Vivado export (XSA/SDT) and automated DTB handling, enabling a near-seamless path from hardware design to software. This flow is replicated using the SHEL flow (SDTGen, Lopper, gen-machine-conf) and direct cloning of the Yocto Project resources (e.g., using the repo tool).
GUI-driven tooling and workflow
What’s Different: PetaLinux offers an ncurses-based console GUI for project creation, configuration, and workflow orchestration. Yocto is primarily CLI-driven, with configuration managed via text-based configuration files. Some individual components (e.g., Linux or U-Boot) can still be managed with the same ncurses-based command-line GUI by using the “make menuconfig” command inside the source tree.
How do we validate FPGA-specific features after migration?
Advised Approach:
Confirm bitstream (or PDI/CDO data) loads reliably at boot or on demand.
Verify DTB bindings and FPGA peripherals are visible to the kernel.
Ensure FPGA-enabled applications run correctly and meet performance targets.
Establish repeatable pass/fail criteria and a rollback plan if validation fails.
This tends to be designer preference, but here is a suggested, high-level test plan:
Test: Boot-time bitstream load
Steps: Boot image; loader runs; verify bitstream loaded flag in logs.
Pass criteria: bitstream load reported as successful within N seconds; FPGA peripheral accessible.
Test: Bitstream integrity
Steps: Load bitstream; read version/hash from a known register or file.
Pass criteria: hash matches expected value; version matches the intended design.
Test: FPGA peripheral accessibility
Steps: Run a small kernel/user-space test that exercises a DMA or accelerator.
Pass criteria: data transfers complete with expected results; no crashes.
Test: DTB correctness
Steps: Inspect kernel logs for FPGA/DTB bindings; enumerate devices via /dev or sysfs.
Pass criteria: all expected devices appear and are functional.
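The boot-time bitstream load check above can be automated with a small helper. This sketch assumes the mainline Linux FPGA manager, which reports the string `operating` in its sysfs `state` attribute once programming has completed; the helper takes an optional path so it can be exercised off-target:

```shell
#!/bin/sh
# Returns success (0) when the FPGA manager reports a programmed device.
fpga_state_ok() {
  # $1: path to an fpga_manager "state" attribute; defaults to fpga0.
  state=$(cat "${1:-/sys/class/fpga_manager/fpga0/state}" 2>/dev/null)
  [ "$state" = "operating" ]
}

# Example gate in a validation script (commented out here):
# fpga_state_ok || { echo "FAIL: FPGA not programmed" >&2; exit 1; }
```

Adjust the fpga0 index for platforms exposing multiple managers.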
How do we handle Vivado handoff files in Yocto builds?
Export hardware from Vivado
Complete your Vivado design and export the hardware package as an .xsa file.
How: In Vivado, use File > Export > Export Hardware or run the write_hw_platform Tcl command.
What you get: An .xsa that contains the bitstream, hardware description, and metadata needed for software integration.
Generate SDT artifacts from the .xsa
Use the .xsa to generate the System Device Tree (SDT) artifacts directory.
Tool: sdtgen (System Device Tree generator) creates the kernel/bootloader-ready device-tree pieces.
Result: A directory with the device tree (and related files) tailored to your hardware.
Create a Yocto machine configuration from SDT
Produce a Yocto machine config from the SDT artifacts.
Tool: gen-machine-conf transforms the SDT outputs into a reusable Yocto machine setup.
Result: A machine configuration that reflects your specific hardware (SoC, peripherals, and bus topology).
Integrate SDT into the Yocto build
Add the generated machine configuration to your Yocto project.
Actions:
- Include the new machine in bblayers.conf (as needed).
- Create or adjust layers/recipes to incorporate the SDT-derived device tree and hardware specifics.
- Ensure the configuration is consistent with your kernel, bootloader, and FPGA flow.
Build the Yocto image
Build with BitBake using the new machine configuration.
Command: bitbake [your-machine-name]
Outcome: A kernel, bootloader, and root filesystem that integrate the Vivado hardware description from the .xsa.
For more information, see the Custom Hardware Development AMD Versal™ device portfolio and Custom Hardware Development AMD ZynqMP™ device portfolio sections of the EDF documentation.
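The steps above can be condensed into a command sketch. The Vivado Tcl command is standard; the gen-machine-conf option names follow its README at the time of writing and should be verified against your release, and all file and machine names are placeholders:

```
# 1) In the Vivado Tcl console, export the hardware platform:
#      write_hw_platform -fixed -include_bit design_1.xsa
# 2) Generate SDT artifacts from the XSA with sdtgen, producing e.g. ./sdt_out/
# 3) Generate a Yocto machine configuration from the SDT output:
gen-machineconf parse-sdt --hw-description ./sdt_out --machine-name my-machine
# 4) Build with the new machine:
MACHINE=my-machine bitbake core-image-minimal
```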
Will we need to maintain custom Yocto layers for Xilinx-specific features?
Yes. The ability to maintain custom Yocto layers for AMD AECG-specific features (especially board-specific BSPs, FPGA integration, and boot flows) is a key benefit for users, because Yocto Project workflows are based on open-source infrastructure and tooling.
Base and augmentation layers
Start from Xilinx-provided layers (e.g., meta-xilinx, meta-xilinx-tools, and board-specific layers like meta-amd-adaptive-socs or meta-xilinx-restricted) as the foundation.
Create a dedicated board layer (e.g., meta-yourboard) that contains the machine config, custom kernel/boot patches, DT fragments, and FPGA loaders. This keeps hardware-specific changes isolated and reusable.
What to put in your custom layer
Machine configurations and distro-feature tweaks (can be automated with gen-machine-conf)
Patches or defconfig changes for the kernel and U-Boot
Device-tree fragments and any FPGA manager/bitstream handling
Bitstream/DTB packaging recipes and the FPGA loader (kernel module or systemd service)
Any startup scripts, userspace utilities, or SDK customizations tied to your hardware
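As a sketch, such a layer typically ends up with a structure like this (all names are illustrative):

```
meta-yourboard/
  conf/layer.conf
  conf/machine/my-machine.conf                 # generated by gen-machine-conf
  recipes-kernel/linux/linux-xlnx_%.bbappend   # kernel patches/config fragments
  recipes-bsp/u-boot/u-boot-xlnx_%.bbappend    # U-Boot patches/config
  recipes-bsp/fpga-firmware/fpga-firmware.bb   # bitstream/DTB packaging
  recipes-core/systemd/my-service.bb           # startup scripts/services
```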
Maintenance approach
Align your custom layer with Yocto release cycles and the versions of the AMD AECG layers you depend on.
Version your hardware changes (bitstreams, XSA/SDT, DT bindings) and tag them with the target MACHINE.
Use CI to validate image builds across updates to kernel, bootloaders, and FPGA-related recipes
Risks and mitigations
Layer compatibility: pin to compatible versions of meta-xilinx and the kernel to avoid conflicts
Rebase effort: plan for periodic rebases when OEM layers update, and automate patch application
Licensing: track licenses for vendor layers and ensure redistribution compliance in your layer
Are there community or vendor-supported Yocto layers for Xilinx?
Yes. There are both vendor-supported and community-supported Yocto layers for Xilinx targets.
Vendor-supported layers
meta-xilinx: Core Xilinx BSPs, kernels, bootloaders, and basic FPGA/DT work for Zynq/ZynqMP/Versal.
meta-xilinx-tools: Build tooling and utilities used with Xilinx layers.
Board-specific layers (e.g., meta-amd-adaptive-socs and meta-xilinx-restricted): Targeted support for particular boards and SoCs, often with aligned boot flows and DT changes.
These layers are typically maintained by Xilinx or official partners and come with vendor support options.
Community-supported layers
Often built on top of meta-xilinx and provide additional recipes, patches, or boards not covered by the official layers.
Maintained by the open-source community or third-party contributors; update cadence can vary.
FPGA communication over PCIe, what would you recommend for user space driver?
The recommendation on the device driver largely depends on the system architecture and the kind of DMA (if any) that is being used on the FPGA side.
Assuming a system topology where the x86 is the host and the FPGA is the target (or “card” side), and assuming a higher performance data path where a DMA is required, the FPGA DMA type is typically either QDMA (hard or soft) or XDMA (“DMA/Bridge Subsystem for PCIe”).
Some further info and the associated host-side drivers for these DMA IPs:
QDMA IP (hard or soft)
Linux kernel driver (KMOD + user utils) for interrupt-driven, standard kernel flow, sysfs control, etc. Good general default. See QDMA drivers
DPDK Poll-Mode Driver (PMD) for user-space, polled, very-high-throughput/low-latency data paths that is compatible with the Data Plane Development Kit (DPDK); uses UIO/VFIO underneath. Choose this when you want kernel-bypass style dataplane performance. QDMA DPDK Driver Info
Notes: QDMA supports multi-queue, MSI-X, and SR-IOV (PF/VF queue ownership & mailbox). If you need virtualization/tenant isolation, confirm SR-IOV is enabled in the IP and match with appropriate PF/VF drivers. See SR-IOV Support in QDMA
XDMA IP (“DMA/Bridge Subsystem for PCIe”)
XDMA Linux kernel driver (character-device interface) from AMD’s dma_ip_drivers repo; historically the common choice, but XDMA is becoming less common, particularly in Versal (for which many devices integrated a hardened QDMA, or choose soft QDMA for performance reasons). For XDMA, see: See AMD PG195
Upstreamed Linux XDMA driver (dmaengine) is now in mainline kernels (no char-dev ABI; uses the dmaengine framework). Choose this if you want an upstream-friendly path and you’re on a new enough kernel. Linux Kernel Mailing List citation
In recent technologies like Versal, QDMA has notable advantages in areas like hardened support (e.g., via the CPM5/QDMA Versal hard block), robust streaming and memory-mapped support with good Linux driver stability (e.g., AXI-MM), recent performance characterization with DPDK and generally more complete documentation.
For simpler or lower-performance PCIe applications that might not require a DMA at all, or require only simple DMA usage that is not QDMA or XDMA, such as in control plane applications, there are some alternative approaches.
For control path, UIO is often the best choice for simple devices (e.g., memory-mapped I/O, polling-based communication).
Advanced features like DMA, interrupt handling, and complex synchronization are sometimes harder or impossible to implement in user space, and therefore a custom Linux kernel driver should be considered. One example where one might implement this is a PCIe-based driver for an I/O expansion FPGA where multiple I2C or SPI devices need to be controlled on the target FPGA side over PCIe. In these cases, the FPGA-side Linux device drivers can often serve as a starting point for integrating the required functionality into a host-side PCIe driver.
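For the UIO control-path case mentioned above, a quick way to confirm what the kernel has exposed is to walk the UIO sysfs entries. This sketch takes an optional sysfs root so it can be exercised off-target; the default path is the standard UIO sysfs location:

```shell
#!/bin/sh
# Print each UIO device's name and the size of its first mapped region.
list_uio() {
  root="${1:-/sys/class/uio}"
  for d in "$root"/uio*; do
    [ -d "$d" ] || continue
    printf '%s: %s size=%s\n' \
      "$(basename "$d")" "$(cat "$d/name")" "$(cat "$d/maps/map0/size")"
  done
}
```

On target, calling `list_uio` with no argument lists the devices a UIO-based FPGA driver has registered; the reported region can then be mmap()'d from the matching /dev/uioN node.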
Any licensing or support implications?
When using Yocto, there are generally no new licensing or support implications: the Yocto Project itself is open source and can be freely used. The licenses of the individual packages you include in your image still apply, as with any Linux distribution.
For installation tooling, refer to the EDF release notes, which provide guidance on setting up the necessary tools. You can find more information and download resources for embedded design tools at the following link:
Downloads
Additionally, AMD supports the automated generation of Software Bill of Materials (SBoM) and CVE (Common Vulnerabilities and Exposures) report creation. This is achieved using standard community tooling that integrates seamlessly with the Yocto Project.
What are the known migration myths?
This is a big change from before. Here are some common myths and how to mitigate them.
Top 3:
Format:
Myth: Shifting from menuconfig-driven PetaLinux workflows to command-line, recipe-driven Yocto workflows is harder.
Mitigation: The Yocto Project is an industry standard with a wealth of information available, from the Yocto Mega-Manual on the Yocto Project documentation page to YouTube videos and many industry consultants. We recommend investing in Yocto training, creating reusable starter templates, and referencing the documentation that maps PetaLinux concepts to Yocto equivalents.
Debug:
Myth: Tracing boot issues, bitstream loading, and kernel-device-tree interactions can be harder without PetaLinux tooling.
Mitigation: Yocto Project provides a wealth of automation and debug tooling. Many debug or loading features that PetaLinux provided were actually wrappers around Yocto Project equivalents.
Build System Complexity:
Myth: Learning and harmonizing BitBake terminology, layer management, and patch workflows can slow initial adoption.
Mitigation: Native Yocto Project infrastructure uses the same basic concepts as PetaLinux (meta-layers, recipes, etc.) because PetaLinux was a vendor-specific implementation of core Yocto Project tooling. For users new to native Yocto Project, we recommend starting with a minimal, board-specific layer (meta-yourboard) on top of meta-xilinx, with clear recipes for the bitstream, DTB, and loader, and gradually adding complexity. Additionally, we highly recommend using the resources available on the AMD wiki for translation between PetaLinux commands/concepts and the native Yocto Project equivalents.
How will FPGA integration (e.g., bitstream loading) be managed in Yocto?
In the Yocto Project, bitstream data (or PDI/CDO content for Versal) is included in the build area and can optionally be included automatically in the root filesystem.
Loading PL configuration data at runtime can be automated using the dfx-mgr tool. To use this tool, include it in your rootFS by setting the appropriate build configuration options. This tool automates loading and tracking PL payloads with associated device tree overlay data.
Managing PL content via Yocto Project keeps the bitstream lifecycle reproducible (via recipes, patches, and a dedicated service) and works across boards by adapting the loader to the target FPGA interface (e.g., Xilinx/Intel FPGA managers).
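As a sketch, pulling dfx-mgr into the image is a one-line configuration change (the recipe name matches meta-xilinx at the time of writing; confirm against your release):

```
# conf/local.conf (or an image .bbappend)
IMAGE_INSTALL:append = " dfx-mgr"
```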
For segmented configurations:
If users have a segmented configuration they wish to include in their rootfs, they must follow the instructions in meta-xilinx/docs/README.dfx.user.dts.md (rel-v2025.1 branch of Xilinx/meta-xilinx). This is a somewhat complex procedure, which is documented in detail in the wiki pages referenced there.
How do we migrate PetaLinux rootfs customizations (e.g., init scripts)?
systemd service recipe example (meta-yourboard/recipes-core/systemd/my-service.bb):

```
SUMMARY = "Startup task for my board"
DESCRIPTION = "Runs at boot to initialize custom services"
AUTHOR = "Your Team"
LICENSE = "CLOSED"

SRC_URI = "file://my-startup-script.sh \
           file://my-service.service"

S = "${WORKDIR}"

inherit systemd

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${S}/my-startup-script.sh ${D}${bindir}/my-startup-script.sh
    install -d ${D}${systemd_unitdir}/system
    install -m 0644 ${S}/my-service.service ${D}${systemd_unitdir}/system/
}

SYSTEMD_SERVICE:${PN} = "my-service.service"
SYSTEMD_AUTO_ENABLE = "1"
```

Startup script example (meta-yourboard/recipes-core/systemd/files/my-startup-script.sh):

```
#!/bin/sh
echo "Initializing custom startup tasks..."
# put your init tasks here
exit 0
```

Are there Yocto equivalents for PetaLinux’s `petalinux-config` options?
Yes. In Yocto, you achieve configuration control equivalent to PetaLinux’s petalinux-config by using Yocto’s project-level and target-specific configuration mechanisms.
Global and machine configuration
MACHINE, DISTRO, and DISTRO_FEATURES in conf/machine/*.conf and conf/local.conf to select target hardware, features (e.g., systemd, sysvinit, initscripts), and image behavior.
Example: MACHINE = "versal-yocto"; DISTRO_FEATURES:append = " systemd" to enable systemd.
Layer and feature management
bblayers.conf to enable/disable layers (e.g., meta-xilinx, meta-versal, meta-yourboard).
IMAGE_FEATURES to toggle features like ssh, package-management, debug-tweaks, etc.
Kernel and bootloader customization
Use recipes-kernel to apply Kconfig options (bitbake -c menuconfig virtual/kernel) or to patch defconfig/kconfig fragments.
U-Boot options via the same approach (bitbake -c menuconfig u-boot) or patching.
The “make menuconfig” option inside software repositories that support ncurses-based configuration management (e.g., Linux and U-Boot)
Device tree and FPGA flow
Device-tree fragments (.dtsi) added via a kernel recipe or a dedicated dtb-fragment/*.bbappend file.
FPGA manager/bitstream integration via patches or dedicated recipes (bitstream.bb, loader.bb) in a custom layer.
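A minimal local.conf combining these mechanisms might look like the following sketch (machine and feature names are illustrative; INIT_MANAGER is the modern Yocto shorthand for selecting the init system):

```
# conf/local.conf
MACHINE = "versal-generic"          # target machine from meta-xilinx or your own layer
INIT_MANAGER = "systemd"            # selects systemd as the init system
IMAGE_FEATURES += "ssh-server-openssh debug-tweaks"
```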
For more information, see https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516/Petalinux+to+EDF+Migration+guide#Configuring and https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516/Petalinux+to+EDF+Migration+guide#Customizing-Device-Trees
Any official migration guides or meta-xilinx documentation?
See https://amd.atlassian.net/wiki/spaces/PSSANONECC/pages/1162851516
Why is the boot.scr file no longer produced by default for Versal and Versal2 boards, and what key changes does EFI introduce to the boot process after the transition from Distroboot to EFI boot?
This document is relevant only to the 2025.2 release and later as the UEFI boot flow in 2025.1 was based on the Distroboot flow.
How do we configure EDF 2025.2 to use U-Boot distro boot (boot.scr) instead of UEFI boot?
© 2025 Advanced Micro Devices, Inc.