Operating System Integration and Development AMD Versal™ device portfolio
This page is part of a getting started guide providing walkthrough-style examples using the AMD Embedded Development Framework (EDF). This section covers Operating System Integration and Development - the full software build flow. For other personas and tasks, see the parent pages.
Some of the prompts on this page use exact steps for specific boards. If your board is not mentioned, consult the following pages to verify that it is supported for the flow described here and to learn how to adapt the examples to your board: Common Specifications | EDF prebuilt Yocto machine definitions
Table of Contents
- 1 Operating System Integration and Development
- 1.1 Yocto Project build setup instructions for EDF
- 1.2 EDF Linux® disk image build using a prebuilt Yocto Project machine and pre-built Vivado artifacts (PDI) - Multi-stage boot (QSPI / OSPI → SD / UFS)
- 1.3 EDF Linux® disk image build using a prebuilt Yocto Project machine and pre-built Vivado artifacts (PDI) - SDCARD boot
- 1.4 Building Images for QEMU
- 1.5 Build and Deploy Zephyr RTOS DomU Guest OS Image
- 1.6 Building an SDK for application cross-development
- 2 Related Links
- 3 Trademarks
Operating System Integration and Development
The operating system (OS) developer creates custom OS images based on application or system requirements. In the simplest case this is a plain Linux® OS image; more complex scenarios can involve hypervisors, containers, and multiple OSes across processing domains (for example, RTOSes and bare-metal components).
It also entails lower-level boot components such as the PMU, PLM, and PSM firmware (AMD-specific), U-Boot, Arm TF-A, OP-TEE, and so on. The Yocto Project™ provides a build environment that allows users to create custom, complex, heterogeneous boot and OS images.
Note: at this stage, basic to advanced Yocto knowledge is required depending on the task you want to perform.
Yocto Project build setup instructions for EDF
This section describes how to get your build host ready to work with EDF meta layers and the required upstream layers.
Pre-requisites: See Preparing Build Host documentation.
Configure the git settings before you run the repo commands.
$ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
Make sure the build host shell is bash, not csh or dash.
A basic understanding of the Yocto Project build system is assumed; consult the documentation for further information: https://docs.yoctoproject.org/
Download and install the Repo tool (if it was not installed in a previous step). Note: If you installed repo through a package manager, remove it first; that copy is likely out of date and will cause issues. Note: curl is used to install repo and may need to be installed on your system via your operating system's package feed (apt-get, yum, dnf, etc.).
$ curl https://storage.googleapis.com/git-repo-downloads/repo > repo
$ chmod a+x repo
# Create a user-specific ~/bin directory if one does not exist
$ mkdir -p ~/bin
$ mv repo ~/bin/
$ PATH=~/bin:$PATH
$ repo --help
Initialize a Repo client.
Create the edf project directory.
$ mkdir -p yocto/edf
$ cd yocto/edf
Clone the Yocto meta layer source using the yocto manifest as shown below. A successful initialization ends with a message stating that Repo is initialized in your working directory. Your directory should now contain a .repo directory where repo control files such as the manifest are stored; you should not need to touch this directory. To learn more about Repo, see
Repo command reference | Android Open Source Project
$ repo init -u https://github.com/Xilinx/yocto-manifests.git -b rel-v2025.2 -m default-edf.xml
Use the latest available version of EDF when calling repo init
See Downloads and Release Notes for detail on released versions.
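Before running repo init, you can list the release branches published in the manifest repository and pick the newest one. The snippet below is an illustrative sketch: the MANIFEST_URL variable and the rel-v* pattern are assumptions based on the repository and branch naming shown above.

```shell
# List rel-v* release branches of the manifest repo so the newest one can
# be chosen for `repo init -b`. MANIFEST_URL can be overridden if needed.
MANIFEST_URL="${MANIFEST_URL:-https://github.com/Xilinx/yocto-manifests.git}"
git ls-remote --heads "$MANIFEST_URL" 'refs/heads/rel-v*' \
  | sed 's#.*refs/heads/##' \
  | sort -V
```

The last line of the (version-sorted) output is the newest release branch.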
Fetch all of the repositories.
$ repo sync
Start a branch for development starting from the revision specified in the manifest. This is an optional step.
$ repo start <branch_name> --all
Initialize a build environment using the edf-init-build-env script. Once the environment is initialized, bblayers.conf and local.conf are set from the meta-amd-edf layer default templates.
$ source edf-init-build-env
If you are building without network access, follow the steps below (otherwise skip them):
Extract the downloads and sstate tarball files to the ${TOPDIR}/build directory:
$ tar -xf downloads.tar.gz -C <path-to>/yocto/edf/build
$ tar -xf sstate-cache.tar.gz -C <path-to>/yocto/edf/build
Adjust the download (SOURCE_MIRROR_URL) and sstate (SSTATE_MIRRORS) mirror variables by editing build/conf/local.conf to point to local paths as shown below.
Note: INHERIT += "own-mirrors" is already included in the local.conf file.
SOURCE_MIRROR_URL = "file:///<path-to>/yocto/edf/build/downloads"
SSTATE_MIRRORS = "\
    file://.* file:///<path-to>/yocto/edf/build/sstate-cache/PATH \n \
    "
Set the BB_NO_NETWORK variable in build/conf/local.conf:
BB_NO_NETWORK = "1"
PREMIRRORS:append = ""
If the build host storage is NFS-mounted, modify build/conf/local.conf and set TMPDIR to a path on local storage as shown below. (On local storage, $TMPDIR defaults to build/tmp.)
TMPDIR = "/tmp/<user-name>/yocto/release_version/build"
Depending on the build configuration, several tens of GB of storage space might be required in TMPDIR; make sure you have plenty of storage space available. Alternatively, to save disk space, you can add the line below to your build/conf/local.conf. This option removes the work directory after each recipe builds and keeps only the logs.
INHERIT += "rm_work"
The following sections use the build environment set up in this section.
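Before starting a large build, it can be worth checking how much space is free under the build area. The sketch below is illustrative: the 200 GB threshold and the BUILD_DIR default are assumptions, not official requirements; adjust them to your configuration.

```shell
# Warn if the filesystem backing the build area has less than REQUIRED_GB free.
# REQUIRED_GB and the BUILD_DIR default are illustrative assumptions.
REQUIRED_GB=200
BUILD_DIR="${BUILD_DIR:-.}"
# df -Pk gives POSIX output in KiB; column 4 is available space.
AVAIL_GB=$(df -Pk "$BUILD_DIR" | awk 'NR==2 { print int($4 / 1024 / 1024) }')
if [ "$AVAIL_GB" -lt "$REQUIRED_GB" ]; then
    echo "WARNING: only ${AVAIL_GB} GB free under ${BUILD_DIR}"
else
    echo "OK: ${AVAIL_GB} GB free under ${BUILD_DIR}"
fi
```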
EDF Linux® disk image build using a prebuilt Yocto Project machine and pre-built Vivado artifacts (PDI) - Multi-stage boot (QSPI / OSPI → SD / UFS)
Boot Architecture Common Specifications | Boot Architecture for AMD Evaluation boards
Evaluation boards and default boot modes AMD Embedded Development Framework (EDF) | Packaged BSP for AMD evaluation boards
Build Yocto Project pre-requisites by following Yocto Project build setup instructions for EDF
Build the EDF boot firmware (OSPI image)
$ MACHINE=versal-2ve-2vm-vek385-sdt-seg bitbake edf-ospi
Build the EDF Linux disk image (wic) containing general purpose (GP) Linux
$ MACHINE=amd-cortexa78-mali-common bitbake edf-linux-disk-image
On a successful build, the images for the target machine will be available in the output directory ${TMPDIR}/deploy/images/${MACHINE}/
$ ls -la tmp/deploy/images/versal-2ve-2vm-vek385-sdt-seg/
$ ls -la tmp/deploy/images/amd-cortexa78-mali-common/
Flash the OSPI image file ${TMPDIR}/deploy/images/versal-2ve-2vm-vek385-sdt-seg/edf-ospi-versal-2ve-2vm-vek385-sdt-seg-<timestamp>.bin by following Development Flows - How it all works | Creating a image for multi stage boot (Primary Boot Device → Secondary Boot dev...
Flash the ${TMPDIR}/deploy/images/amd-cortexa78-mali-common/edf-linux-disk-image-amd-cortexa78-mali-common.rootfs-<timestamp>.wic.xz to SD card by following Development Flows - How it all works | Creating a bootable SD CARD image (Single stage boot : SD / eMMC / USB / UFS) ...
Boot the VEK385 board from OSPI boot mode to boot GP Linux
Upon first boot, Linux requires a password to be set for the default user account.
Default username 'amd-edf'
EDF Linux® disk image build using a prebuilt Yocto Project machine and pre-built Vivado artifacts (PDI) - SDCARD boot
Boot Architecture Common Specifications | Boot Architecture for AMD Evaluation boards
Evaluation boards and default boot modes AMD Embedded Development Framework (EDF) | Packaged BSP for AMD evaluation boards
Build Yocto pre-requisites by following Operating System Integration and Development AMD Versal™ device portfolio | Yocto Project build setup instructions for EDF
Build the Boot.bin image for the board.
See Common Specifications | EDF prebuilt Yocto machine definitions for the list of pre-built machines that support generating a boot.bin file.
xilinx-bootbin-recipe
# Syntax:
$ MACHINE=<machine-name> bitbake xilinx-bootbin

# Example usage for VEK280:
$ MACHINE=versal-vek280-sdt-seg bitbake xilinx-bootbin

# Example usage for VCK190:
$ MACHINE=versal-vck190-sdt-seg bitbake xilinx-bootbin
Build EDF Common disk images containing EDF Linux® OS (wic).
See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3250586143/Common+Specifications#EDF-Common-disk-images-containing-EDF-Linux%C2%AE-OS----Yocto-machine-definitions-and-supported-recipes for Yocto machine definitions and supported recipes
An example is shown below for amd-cortexa72-common (Versal family)
Note: Versal-Net family is not supported in 2025.2.
linux-image-recipe
# Syntax:
$ MACHINE=<machine-name> bitbake edf-linux-disk-image

# Example usage for versal-gp:
$ MACHINE=amd-cortexa72-common bitbake edf-linux-disk-image
Combine the xilinx-bootbin and EDF Common disk images to make a single SD card image (boot.bin, kernel, rootfs, boot script) by adding the boot.bin to the EFI partition of the WIC image (partition 0 = index 1 in the WIC)
The WIC tools are available from within the Yocto build environment, but you may need to install additional packages (not included in the Yocto setup)
wic uses mcopy, which is provided by the mtools package; on Canonical Ubuntu: sudo apt install mtools
#help for wic
$ wic help
#list the partition map of the .wic file
$ wic ls tmp/deploy/images/amd-cortexa72-common/edf-linux-disk-image-amd-cortexa72-common.rootfs.wic
Num Start End Size Fstype
1 32768 1073774591 1073741824 fat32
2 1073774592 2147516415 1073741824 fat32
3 2147516416 8589967359 6442450944 ext4
#show the content of the first partition
$ wic ls tmp/deploy/images/amd-cortexa72-common/edf-linux-disk-image-amd-cortexa72-common.rootfs.wic:1
Volume in drive : is esp
Volume Serial Number is 8573-CFCD
Directory for ::/
efi <DIR> 2011-04-05 23:00
image 32381440 2011-04-05 23:00 Image
loader <DIR> 2011-04-05 23:00
xen cfg 254 2011-04-05 23:00
xen efi 1180456 2011-04-05 23:00
5 files 33 562 150 bytes
1 037 914 112 bytes free
# Copy the boot.bin into the efi partition of the wic image
$ wic cp tmp/deploy/images/versal-vek280-sdt-seg/boot.bin tmp/deploy/images/amd-cortexa72-common/edf-linux-disk-image-amd-cortexa72-common.rootfs.wic:1
#Check the content of the wic
$ wic ls tmp/deploy/images/amd-cortexa72-common/edf-linux-disk-image-amd-cortexa72-common.rootfs.wic:1
Volume in drive : is esp
Volume Serial Number is 8573-CFCD
Directory for ::/
efi <DIR> 2011-04-05 23:00
image 32381440 2011-04-05 23:00 Image
loader <DIR> 2011-04-05 23:00
xen cfg 254 2011-04-05 23:00
xen efi 1180456 2011-04-05 23:00
boot bin 3031696 2025-11-20 15:25
6 files 36 593 846 bytes
1 034 878 976 bytes free
Flashing the SD card
The generated image now needs to be flashed to the SD card.
Follow the instructions from here: https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3258155011/Discovery+and+Evaluation+AMD+Versal+Device+Portfolio#Writing-the-EDF-Linux%C2%AE-disk-image-(wic)-to-the-secondary-boot-media-%3A-SD-card
Once the flashing completes, safely remove the SD card and insert it into the SD card slot of the board.
Boot the board as described in https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3258155011/Discovery+and+Evaluation+AMD+Versal+Device+Portfolio#How-to-boot-a-board-using-the-pre-built-Images-%3A-Single-stage-boot-SD-mode---Setup
WARNING: Booting multiple images with the same root file system partition UUID may result in boot issues. In EDF, the Linux kernel locates the correct root filesystem via the root=PARTUUID=XXXX parameter passed on the kernel command line.
In EDF 2025.05 the PARTUUID parameter is hard-coded, which means it is not possible to have multiple images connected to a board, for example, an SD card and a USB key with the same image. This may result in undetermined behavior or boot errors, because U-Boot and the Linux kernel may enumerate the devices differently, and the kernel will mount the first partition it finds with a matching PARTUUID.
In EDF 2025.11 the PARTUUID is randomly generated for each image (build), so it is possible to have different images on different media and have the system boot correctly. It is still not possible to have the same image on multiple media.
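If in doubt, you can compare the partition UUIDs recorded in two disk images from the build host before flashing. This is a sketch using util-linux's sfdisk; the image file names are placeholders, and it simply compares the first partition UUID found in each image.

```shell
# Compare the first partition UUID recorded in two disk images (file names
# are placeholders). Identical UUIDs on two attached media can confuse the
# root=PARTUUID= lookup described above.
uuid_of() {
    # sfdisk --dump prints one line per partition, including "uuid=...".
    sfdisk --dump "$1" | sed -n 's/.*uuid=\([^,]*\).*/\1/p' | head -n 1
}
if [ "$(uuid_of image-a.wic)" = "$(uuid_of image-b.wic)" ]; then
    echo "Same PARTUUID: do not attach both media to one board"
else
    echo "Distinct PARTUUIDs"
fi
```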
Building Images for QEMU
Set up the build environment and Yocto Project™ pre-requisites by following Yocto Project build setup instructions for EDF
Build the Boot.bin image.
See Common Specifications | EDF prebuilt Yocto machine definitions for the list of pre-built machines that support generating a boot.bin file.
xilinx-bootbin-recipe
# Syntax:
$ MACHINE=<machine-name> bitbake edf-ospi

# Example usage for VEK385:
$ MACHINE=versal-2ve-2vm-vek385-sdt-seg bitbake edf-ospi
Build the qemu-helper-native package to set up QEMU network tap devices
qemu-helper-native (for VEK385):
# Syntax:
$ MACHINE=<machine-name> bitbake qemu-helper-native
# Example Usage for VEK385:
$ MACHINE=amd-cortexa78-mali-common bitbake qemu-helper-native
a. If you do not have sudo permissions on the build host, you can skip steps 2 and 3.
b. Use the slirp option if you do not have sudo permissions and tap devices are not enabled on your build host.
Manually configure tap interfaces for your build system. As root, run <path-to>/sources/poky/scripts/runqemu-gen-tapdevs, which generates a set of tap devices. Once the tap interfaces are created, you can see them all by running the ifconfig or ip a command.
$ sudo ./<path-to-layer>/poky/scripts/runqemu-gen-tapdevs $(id -u $USER) $(id -g $USER) 4
Combine the xilinx-bootbin and qemu-helper-native images to make a single wic.qemu-sd image (boot.bin, kernel, rootfs, boot script) by adding the boot.bin to the EFI partition of the WIC image (partition 0 = index 1 in the WIC)
The WIC tools are available from within the Yocto build environment, but you may need to install additional packages (not included in the Yocto setup)
wic uses mcopy, which is provided by the mtools package; on Canonical Ubuntu: sudo apt install mtools
#help for wic
$ wic help
#list the partition map of the .wic file
$ wic ls tmp/deploy/images/amd-cortexa78-mali-common/edf-linux-disk-image-amd-cortexa78-mali-common.rootfs.wic.qemu-sd
Num Start End Size Fstype
1 32768 1073774591 1073741824 fat32
2 1073774592 2147516415 1073741824 fat32
3 2147516416 8589967359 6442450944 ext4
#show the content of the first partition
$ wic ls tmp/deploy/images/amd-cortexa78-mali-common/edf-linux-disk-image-amd-cortexa78-mali-common.rootfs.wic.qemu-sd:1
Volume in drive : is esp
Volume Serial Number is 54BB-997E
Directory for ::/
efi <DIR> 2011-04-05 23:00
image 32379392 2011-04-05 23:00 Image
loader <DIR> 2011-04-05 23:00
xen cfg 254 2011-04-05 23:00
xen efi 1180456 2011-04-05 23:00
5 files 33 560 102 bytes
1 037 914 112 bytes free
# Copy the boot.bin into the efi partition of the wic image
$ wic cp \
tmp/deploy/images/versal-2ve-2vm-vek385-sdt-seg/boot.bin \
tmp/deploy/images/amd-cortexa78-mali-common/edf-linux-disk-image-amd-cortexa78-mali-common.rootfs.wic.qemu-sd:1
#Check the content of the wic
$ wic ls tmp/deploy/images/amd-cortexa78-mali-common/edf-linux-disk-image-amd-cortexa78-mali-common.rootfs.wic.qemu-sd:1
Volume in drive : is esp
Volume Serial Number is 54BB-997E
Directory for ::/
efi <DIR> 2011-04-05 23:00
image 32379392 2011-04-05 23:00 Image
loader <DIR> 2011-04-05 23:00
xen cfg 254 2011-04-05 23:00
xen efi 1180456 2011-04-05 23:00
boot bin 5021072 2025-11-20 15:34
6 files 38 581 174 bytes
1 032 892 416 bytes free
Now you can simulate the image using the QEMU emulator.
Note: To terminate QEMU, the usual key sequence is <Ctrl+A> followed by <X>
Create a directory under ${DEPLOY_DIR}/images (tmp/deploy/images) and copy the versal-2ve-2vm-vek385-sdt-seg board and amd-cortexa78-mali-common artifacts to this new directory.
$ mkdir -p tmp/deploy/images/vek385-qemu
$ cp -r tmp/deploy/images/versal-2ve-2vm-vek385-sdt-seg/* tmp/deploy/images/vek385-qemu
$ cp -r tmp/deploy/images/amd-cortexa78-mali-common/* tmp/deploy/images/vek385-qemu
Merge the versal-2ve-2vm-vek385-sdt-seg and amd-cortexa78-mali-common qemuboot files and generate a new combined vek385-combined.qemuboot.conf file using qemuboot-tool.
$ ./<path-to>/yocto/edf/sources/meta-xilinx/meta-xilinx-core/scripts/qemuboot-tool \
    load tmp/deploy/images/vek385-qemu/BOOT-versal-2ve-2vm-vek385-sdt-seg.qemuboot.conf \
    remove image_link_name \
    remove image_name \
    merge tmp/deploy/images/vek385-qemu/edf-linux-disk-image-amd-cortexa78-mali-common.rootfs.qemuboot.conf \
    > tmp/deploy/images/vek385-qemu/vek385-combined.qemuboot.conf
On successfully creating the combined vek385-combined.qemuboot.conf file, the contents should look similar to the following:
vek385-combined.qemuboot.conf:
$ cat tmp/deploy/images/vek385-qemu/vek385-combined.qemuboot.conf
[config_bsp]
deploy_dir_image = .
image_link_name = edf-linux-disk-image-amd-cortexa78-mali-common.rootfs
image_name = edf-linux-disk-image-amd-cortexa78-mali-common.rootfs-20251120145551
kernel_imagetype = Image
machine = versal-2ve-2vm-vek385-sdt-seg
qb_cmdline_ip_slirp = ip=dhcp
qb_cmdline_ip_tap = ip=192.168.7.@CLIENT@::192.168.7.@GATEWAY@:255.255.255.0::eth0:off:8.8.8.8 net.ifnames=0
qb_default_fstype = wic.ufs
qb_default_kernel = BOOT-versal-2ve-2vm-vek385-sdt-seg.bin
qb_drive_type = /dev/sd
qb_dtb = qemu-hw-devicetrees/multiarch/board-versal2-psxc-vek385.dtb
qb_graphics =
qb_kernel_cmdline = none
qb_kernel_cmdline_append = swiotlb=0
qb_machine = -machine arm-generic-fdt
qb_mem = -m 8G
qb_network_device = -net nic,netdev=net0,macaddr=@MAC@ -net nic
qb_nfsrootfs_extra_opt =
qb_no_pni = net.ifnames=0
qb_opt_append = -serial null -serial null -serial null -serial mon:stdio -nodefaults -boot mode=8 -bootbin versal_2ve_2vm:@DEPLOY_DIR_IMAGE@/qemu-ospi-versal-2ve-2vm-vek385-sdt-seg.bin -plm-args ' -M microblaze-fdt -device loader,addr=0xf0000000,data=0xba020004,data-len=4 -device loader,addr=0xf0000004,data=0xb800fffc,data-len=4 -device loader,addr=0xF1110624,data=0x0,data-len=4 -device loader,addr=0xF1110620,data=0x1,data-len=4 -hw-dtb @DEPLOY_DIR_IMAGE@/qemu-hw-devicetrees/multiarch/board-versal2-pmxc-virt.dtb -display none ' -asu-args ' -M riscv-fdt -hw-dtb @DEPLOY_DIR_IMAGE@/qemu-hw-devicetrees/multiarch/board-versal2-asu-virt.dtb -display none ' -drive file=@DEPLOY_DIR_IMAGE@/qemu-ospi-versal-2ve-2vm-vek385-sdt-seg.bin,if=mtd,format=raw,index=0
qb_rng =
qb_rootfs =
qb_rootfs_extra_opt =
qb_rootfs_opt = -device scsi-hd,drive=d1,bus=scsi.0,channel=0,scsi-id=0,lun=0,logical_block_size=4096,physical_block_size=4096 -drive file=@ROOTFS@,if=none,id=d1,format=raw
qb_serial_opt =
qb_smp =
qb_system_name = qemu-system-aarch64-multiarch
serial_consoles = 115200;ttyAMA0
staging_bindir_native = ../../../work/x86_64-linux/qemu-helper-native/1.0/recipe-sysroot-native/usr/bin
staging_dir_host = ../../../work/versal_2ve_2vm_vek385_sdt_seg-amd-linux/xilinx-bootbin/1.0/recipe-sysroot
staging_dir_native = ../../../work/versal_2ve_2vm_vek385_sdt_seg-amd-linux/xilinx-bootbin/1.0/recipe-sysroot-native
tune_arch = aarch64
uninative_loader = ../../../sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2
Launch the QEMU boot instance by executing the runqemu script.
Booting an EDF machine in QEMU takes several minutes, and at several points the boot will appear to stall for long periods. On a relatively well-resourced host machine, the entire boot process to reach the Linux user-space command line takes over 6 minutes; reaching QEMU's APU line takes around 3 minutes, after which it pauses for about 2 minutes before continuing.
$ runqemu tmp/deploy/images/vek385-qemu/vek385-combined.qemuboot.conf nographic slirp
QEMU Boot Log:
Upon first boot, Linux requires a password to be set for the default user account.
AMD EDF 25.05 (VEK385 EA) - AMD Vivado Design Suite 2025.1 and later
Default username 'amd-edf'
Build and Deploy Zephyr RTOS DomU Guest OS Image
A pre-built Zephyr RTOS Xen DomU guest OS image is provided; this guest OS uses the zephyr-synchronization kernel image. See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3258155011#Versal%E2%84%A2--Xen-UEFI-Zephyr-RTOS-DomU-Guest-OS. The following tutorial shows how to build and deploy a Zephyr RTOS guest OS image.
Set up the build environment and Yocto Project pre-requisites by following Yocto Project build setup instructions for EDF
Use the xenvmgicv3 machine to build zephyr-synchronization or another Zephyr kernel available in meta-zephyr/meta-zephyr-core/recipes-kernel/zephyr-kernel at rel-v2025.2 · Xilinx/meta-zephyr. See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/edit-v2/3450568721#XENVMGICV3%3A
Create a recipe xen-custom-domu-image:
$ mkdir -p <meta-custom-layer>/recipes-extended/xen-custom-domu-image
$ mkdir -p <meta-custom-layer>/recipes-extended/xen-custom-domu-image/xen-custom-domu-image
$ touch <meta-custom-layer>/recipes-extended/xen-custom-domu-image/xen-custom-domu-image.bb
To run the Zephyr RTOS DomU guest OS, first create a Xen DomU configuration file.
$ touch <meta-custom-layer>/recipes-extended/xen-custom-domu-image/xen-custom-domu-image/zephyr-helloworld-xenvmgicv3.cfg
The content of <meta-custom-layer>/recipes-extended/xen-custom-domu-image/xen-custom-domu-image/zephyr-helloworld-xenvmgicv3.cfg should look like the following:
# Guest name
name = "zephyr-helloworld-xenvmgicv3"
# Kernel image to boot
kernel = "/boot/zephyr-helloworld-xenvmgicv3.bin"
# Allocate 32MB of memory
memory = 32
# Number of VCPUS
vcpus = 1
Copy the zephyr-helloworld-xenvmgicv3.bin file from the ${TMPDIR}/deploy/images/xenvmgicv3 directory to <meta-custom-layer>/recipes-extended/xen-custom-domu-image/xen-custom-domu-image/, or host the binary on a web server similar to Index of /sswreleases/rel-v2025.2/edf_files/2025.2/11100846/external/zephyr-synchronization
$ cp -r ${TMPDIR}/deploy/images/xenvmgicv3/zephyr-helloworld-xenvmgicv3-{TIMESTAMP}.bin <meta-custom-layer>/recipes-extended/xen-custom-domu-image/xen-custom-domu-image/zephyr-helloworld-xenvmgicv3.bin
Now add these files to the recipe and deploy them to the rootfs as shown.
SUMMARY = "Xen Zephyr Hello World DomU Guest OS image recipe"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "\
    file://zephyr-helloworld-xenvmgicv3.bin \
    file://zephyr-helloworld-xenvmgicv3.cfg \
    "

COMPATIBLE_MACHINE = "^$"
COMPATIBLE_MACHINE:versal = "${MACHINE}"
COMPATIBLE_MACHINE:versal-2ve-2vm = "${MACHINE}"

do_configure() {
    :
}

do_compile() {
    :
}

do_install () {
    install -d ${D}/boot
    install -m 0644 ${WORKDIR}/zephyr-helloworld-xenvmgicv3.bin ${D}/boot
    install -d -m 0755 ${D}${sysconfdir}/xen
    install -m 0644 ${WORKDIR}/zephyr-helloworld-xenvmgicv3.cfg ${D}${sysconfdir}/xen/zephyr-helloworld-xenvmgicv3.cfg
}

FILES:${PN} += " \
    /boot/* \
    ${sysconfdir}/xen/zephyr-helloworld-xenvmgicv3.cfg \
    "
Add the xen-custom-domu-image recipe to the IMAGE_INSTALL variable in the build/conf/local.conf file (note the leading space in the :append value):
IMAGE_INSTALL:append = " xen-custom-domu-image"
Build the edf-linux-disk-image.
Versal Gen 1
$ MACHINE=amd-cortexa72-common bitbake edf-linux-disk-image
Versal Gen 2
$ MACHINE=amd-cortexa78-mali-common bitbake edf-linux-disk-image
See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3258155011/Discovery+and+Evaluation+AMD+Versal+Device+Portfolio#Versal%E2%84%A2--Xen-UEFI-Zephyr-RTOS-DomU-Guest-OS for how to run the Xen DomU image.
Building an SDK for application cross-development
A pre-built SDK for Linux application cross-development is provided. See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3258089893/Software+Application+Development#Software-Application-Development-using-SDK for instructions on how to use it. An SDK is not extensible; if you want to enable additional development tools or packages in the SDK, a new SDK has to be built and installed. The following tutorial shows how to modify and build a new EDF Linux SDK.
Set up the build environment and Yocto Project pre-requisites by following Yocto Project build setup instructions for EDF
The SDK recipe can be found in sources/meta-amd-edf/recipes-core/meta/meta-edf-app-sdk.bb
This recipe can be modified to include additional recipes as required; append them to the TOOLCHAIN_TARGET_TASK list.
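As a sketch of what such a change could look like, a bbappend (or a direct edit to the recipe) along these lines should work; the package names below are illustrative assumptions, not part of the default recipe, and the leading space in the :append value is required by BitBake.

```conf
# meta-edf-app-sdk.bbappend (sketch): pull extra target packages into the SDK.
# "htop" and "libgpiod-dev" are example package names; note the leading space.
TOOLCHAIN_TARGET_TASK:append = " htop libgpiod-dev"
```

After adding the fragment, rebuild the SDK installer with bitbake as shown in the next step.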
Build the SDK for application development.
sdk-build:
$ MACHINE=amd-cortexa78-mali-common bitbake meta-edf-app-sdk
$ ls tmp/deploy/sdk/*.sh
tmp/deploy/sdk/amd-edf-glibc-x86_64-meta-edf-app-sdk-cortexa72-cortex-amd-cortexa78-mali-common-toolchain-25.05+release-e984eec4e65cbc06d089cfe812760f5307a95f7e.sh
See https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/3258089893/Software+Application+Development#SDK-installation for installing the generated SDK and for cross-building an example hello world application.
Related Links
Trademarks
Yocto Project and all related marks and logos are trademarks of The Linux Foundation. This website is not, in any way, endorsed by the Yocto Project or The Linux Foundation.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
© 2025 Advanced Micro Devices, Inc. Privacy Policy