This article describes a flow to build a CentOS 8 system for Zynq UltraScale+ which includes building UEFI firmware, building the CentOS 8 kernel and installing these images on a prebuilt CentOS 8 OpenStack cloud image.
Embedded Linux systems are typically lightweight and cross compiled from build frameworks such as Yocto, PetaLinux or Buildroot. However, as embedded systems become more powerful and look more like embedded servers, ARM64 machines are capable of running full-featured Linux distributions. Distros such as CentOS provide attractive features: SELinux, a security enhancement that adds mandatory access control (MAC); regular security patches; package managers such as Yum or DNF for installing software from trusted repositories; a stable ABI for maintaining compatible software and modules; and a well-known compute environment. These distros maintain their own kernel repositories by pulling from mainline and applying patches. Xilinx is committed to upstreaming drivers, so drivers accepted into mainline are automatically merged into the distro kernel source when the distro pulls a corresponding release. Distros also provide a build system, generally based on the package manager, to authenticate source, sign binaries and automate the build (fetch, patch, configure, build, package).
Linux distros such as CentOS generally expect to be built natively, but in this article we will cross compile on an x86_64 CentOS host machine. This article walks through a process to build and run CentOS 8 on a Zynq UltraScale+ device. The system built here is a server build, so it is a headless system with a serial console. The intent is to provide a workflow for building and deploying CentOS 8 on the ZCU102.
In this article you will:
Build a UEFI firmware
Build a bootstrap kernel
Build the kernel RPM packages
Install the UEFI and kernel on a prebuilt raw disk image
Building a CentOS 8 system for Zynq UltraScale+ is currently a proof-of-concept (PoC). You should not assume this builds a Xilinx supported, production-ready CentOS 8 system. Please do your due diligence before deploying CentOS 8 and regression test against your system requirements.
CentOS 8 VM or bare-metal host machine
Minimum hard disk space
25 GB without UEFI build
50 GB with UEFI build
ZCU102 evaluation board
SD card (8 GB minimum/16 GB recommended)
Porting Philosophy and Strategy
The philosophy was to make the ZCU102 look as much like a standard Linux PC or server as possible and rely on CentOS for most of the heavy lifting; essentially we want to do as little work as possible. The strategy was to isolate the components: Xilinx tools build the low level, architecture dependent components (UEFI firmware and DTB), while the distro provides the kernel source, bootloader components and the root filesystem. The focus is on the kernel, which needs to be configured and built to support Xilinx platforms. Fortunately Xilinx has committed to upstreaming drivers, so many of the drivers required to boot the system are already present in the CentOS 8 kernel tree. That said, getting these commits accepted upstream can take time, so there may be a significant lag before they land in a release. As a result, an upstream kernel version may not have driver support equivalent to a similar kernel version in the Xilinx GitHub repository. If you need patches or drivers that are not upstreamed, you will need to backport them yourself or build a kernel from the Xilinx GitHub repository.
CentOS 8 Host Machine
Install Host Package Dependencies
At this point I assume you have set up a base CentOS 8 machine with sudo privileges, a terminal emulator and internet access. I recommend allocating 25 GB or more of free space; if you are building the UEFI firmware, allocate a minimum of 50 GB. First, let's install some host dependencies.
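The article builds several components, so the host needs the usual build tools plus a few image utilities. The list below is a starting point rather than an exhaustive set; the package names are assumptions for the CentOS 8 repositories (virt-customize comes from libguestfs-tools-c, sgdisk from gdisk), so adjust as needed.

```shell
# Starting set of host packages; names assumed for CentOS 8, adjust as needed.
PKGS="wget git make gcc bc bison flex openssl-devel ncurses-devel \
elfutils-libelf-devel rpm-build qemu-img libguestfs-tools-c kpartx \
dosfstools gdisk"
sudo dnf install -y $PKGS
```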
As of this writing, the CentOS 8 repositories do not include an aarch64 cross compiler, so you may need to install one manually. If you are able to install a cross compiler from your repositories, you may skip to the next section.
CentOS 8 supports GCC 8.2.1, so I recommend sticking with the same version to guarantee ABI compatibility. You may install GCC from any provider such as ARM, Linaro or Xilinx Vitis. The link below will pull the aarch64 cross compiler 8.2.1 binaries from ARM.
$ wget https://developer.arm.com/-/media/Files/downloads/gnu-a/8.2-2019.01/gcc-arm-8.2-2019.01-x86_64-aarch64-linux-gnu.tar.xz
$ tar xf gcc-arm-8.2-2019.01-x86_64-aarch64-linux-gnu.tar.xz
Once the cross compiler is downloaded, extract it to a location of your choice and add the bin directory to your PATH. For example, I installed it in $HOME/bin and prepended PATH in my “.bashrc” file.
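For example, assuming the toolchain was extracted under $HOME/bin (an assumption; use whatever location you chose), the following lines added to ".bashrc" put the cross tools on PATH. The directory name comes from the tarball above.

```shell
# Toolchain location is an assumption; adjust to where you extracted the tarball.
TOOLCHAIN="$HOME/bin/gcc-arm-8.2-2019.01-x86_64-aarch64-linux-gnu"
export PATH="$TOOLCHAIN/bin:$PATH"
# Sanity check; this prints the cross gcc version if PATH is correct:
aarch64-linux-gnu-gcc --version
```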
In order to boot a Zynq UltraScale+ device, the boot ROM expects a "boot.bin" binary image. The "boot.bin" is composed of partitions that hold images for various portions of the boot process and run-time. In a Linux system this image generally holds the FSBL, PMU firmware, PL bitstream and u-boot. For the purposes of this article, I am going to ignore the PL bitstream. These images can be built and packaged in several ways, but I'll show how to build it through Xilinx Yocto.
UEFI is a standardized interface for low-level firmware, which is an evolution from the legacy PC BIOS. Fortunately, UEFI support has been ported to u-boot, so we will be able to take advantage of this when building our system. The UEFI firmware (u-boot) will be able to directly boot the GRUB bootloader, which is provided by the installer without making any software changes. An essential part of this process is the u-boot distro boot mechanism, which will search a list of media for an EFI image, bootaa64.efi, from an EFI disk partition, load it and pass control to it.
I will loosely refer to boot.bin as UEFI firmware.
At this point I assume you understand the low-level boot components required to boot Linux on a ZCU102, so I will not go into the details here. Instead I will provide a summary of the steps to build the UEFI firmware with Xilinx Yocto. If you need more details on the low level boot components or the Xilinx Yocto flow, please see these references.
As of this writing, Yocto does not officially support CentOS 8. To build the UEFI for ZCU102, you will need to install CROPS/Docker or build on a Yocto supported host.
After installing Yocto host package dependencies and cloning the git repos, you just need to source the setupsdk script to initialize the environment. Then bitbake the xilinx-bootbin recipe which will build the UEFI firmware components and package them into a “boot.bin” image.
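In outline, the build looks like this; the workspace path and MACHINE value are assumptions, so use the Xilinx Yocto release and machine name matching your tools version.

```shell
# Sketch only: the workspace path and MACHINE value are assumptions;
# use the Xilinx Yocto release matching your tools version.
cd yocto-workspace                 # your Xilinx Yocto checkout
source setupsdk                    # initialize the bitbake environment
export MACHINE=zcu102-zynqmp
bitbake xilinx-bootbin             # builds the firmware components and packages boot.bin
```

The packaged "boot.bin" lands under the build's deploy directory for the selected machine.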
This section walks through two builds of the kernel. The first is a bootstrap kernel, which is a straight forward make build. The bootstrap kernel does not rely on any kernel modules to mount the final root filesystem. The second kernel is built with the CentOS RPM build system. This build system pulls directly from the CentOS kernel repository which also includes patches, kernel configurations and build scripts.
Clone the CentOS 8 Kernel Source
Here we are going to clone “centos-git-common” which includes build scripts and the “kernel” repository which includes source and spec files for the RPM build. The “get_sources.sh” script will fetch the kernel source tarball.
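Assuming the public git.centos.org layout and a c8 branch of the kernel repository (branch names change between releases, so list them after cloning to confirm), the steps look like:

```shell
# Repository URLs follow the public git.centos.org layout; the c8 branch
# name is an assumption, list the branches after cloning to confirm.
CENTOS_GIT=https://git.centos.org
cd $COS_BUILD
git clone $CENTOS_GIT/centos-git-common.git
git clone $CENTOS_GIT/rpms/kernel.git
cd kernel
git checkout c8
../centos-git-common/get_sources.sh   # fetches the kernel source tarball into SOURCES/
```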
Because the installed CentOS kernel image may not include some configurations critical for Zynq UltraScale+, you may need to build a bootstrap kernel. Once we boot the bootstrap kernel, we can formally install the kernel RPMs, which installs the newly configured kernel and updates the initrd and GRUB configuration.
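As a sketch, building the bootstrap kernel from the extracted source tree might look like the following. The directory name and the exact configuration options to enable (ttyPS serial, SD/MMC, ZynqMP clock and pinctrl, all built in rather than as modules) are assumptions to verify against your tree; the output name matches the "Image-zcu102-bootstrap.gz" image used later in the GRUB menu.

```shell
# Sketch, not a tested recipe. Build with the ZynqMP essentials compiled in
# so the bootstrap kernel can mount the root filesystem without an initramfs.
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
cd linux-*                 # the tree extracted from the source tarball
make defconfig
make menuconfig            # enable ttyPS serial, SD/MMC, ZynqMP clock/pinctrl as built-in
make -j$(nproc) Image.gz dtbs
cp arch/arm64/boot/Image.gz $COS_BUILD/images/Image-zcu102-bootstrap.gz
```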
$ cd $COS_BUILD/images
$ wget https://cloud.centos.org/centos/8-stream/aarch64/images/CentOS-Stream-GenericCloud-8-20200113.0.aarch64.qcow2
Use "virt-customize" to change the root password to "zynqmp" or a password of your choice.
$ virt-customize -a CentOS-Stream-GenericCloud-8-20200113.0.aarch64.qcow2 --root-password password:zynqmp
[ 0.0] Examining the guest ...
[ 33.4] Setting a random seed
[ 33.7] Setting the machine ID in /etc/machine-id
[ 33.8] Setting passwords
[ 51.3] Finishing off
The downloaded file is a QCOW2 (QEMU copy-on-write) virtual disk, so we need to convert it to a raw disk image.
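Assuming qemu-img is installed on the host, the conversion is a single command:

```shell
# Convert the QCOW2 virtual disk to a raw disk image with qemu-img.
IN=CentOS-Stream-GenericCloud-8-20200113.0.aarch64.qcow2
OUT=${IN%.qcow2}.raw
qemu-img convert -f qcow2 -O raw "$IN" "$OUT"
```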
Now let's examine the partitions in this disk image. You will see there are two partitions. The first is the EFI System Partition (ESP), a FAT formatted partition. The second is a Linux formatted root partition.
The first thing to note is that "Disklabel type" is set to "gpt". However, Zynq UltraScale+ only supports MBR partitioned disks, so we need to convert from GPT to MBR.
$ sgdisk -m 1:2 CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
We also need to mark the first partition as bootable.
$ sfdisk -A CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw 1
The bootable flag on partition 1 is enabled now.
The partition table has been altered.
Now we can examine the partitions again and note that the “Disklabel type” is now set to “dos” (MBR) and the boot flag is set on the first partition.
$ fdisk -l CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
Disk CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw1 * 2048 1230847 1228800 600M ef EFI
CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw2 1230848 17614847 16384000 7.8G 83 Linux
We need to set up the mapper devices for the raw disk image we prepared earlier, then mount the two partitions on "efi" and "rootfs" respectively.
If you are installing on a raw disk image created from an ISO, then there may be three partitions, EFI, boot and rootfs. In this case you will want to mount the second partition on “boot” and copy the kernel RPM packages here.
$ cd $COS_BUILD/images
$ sudo kpartx -va CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
add map loop0p1 (253:3): 0 1228800 linear 7:0 2048
add map loop0p2 (253:4): 0 16384000 linear 7:0 1230848
$ sudo mount /dev/mapper/loop0p1 efi
$ sudo mount /dev/mapper/loop0p2 rootfs
The Zynq UltraScale+ boot ROM will search for "boot.bin" on the first partition. The boot ROM loads the FSBL and PMU firmware, and the FSBL in turn loads u-boot. U-boot, acting as the UEFI firmware, will search the current partition for the corresponding board DTB file and pass it to the EFI bootloader through the EFI configuration table. If it doesn't find one, it will pass its own DTB to the EFI bootloader. However, this DTB may not be compatible with your kernel version, so I recommend using the DTB built with the kernel. So let's install the UEFI firmware (boot.bin) and the ZCU102 DTB from the deploy directory as shown below.
$ sudo cp -r ../deploy/efi/* efi/
Since we are not using UEFI secure boot, we will boot GRUB directly. So we need to replace shim (bootaa64.efi) with GRUB (grubaa64.efi).
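The exact paths depend on how the image laid out the ESP; the paths below assume the standard CentOS layout, where EFI/BOOT holds the removable-media boot application and EFI/centos holds the distro files. Verify with "ls efi/EFI" before copying.

```shell
# ESP layout is an assumption (EFI/BOOT = removable-media path,
# EFI/centos = distro files); verify with "ls efi/EFI" first.
ESP=efi
sudo cp $ESP/EFI/centos/grubaa64.efi $ESP/EFI/BOOT/bootaa64.efi
```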
You may use “dmesg” to find the device node corresponding to your SD card.
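Once the device node is known, unmount the loop mounts, remove the device mappings and write the raw image to the card. "/dev/sdX" below is a placeholder, not a real device name; substitute your actual SD card device.

```shell
# /dev/sdX is a placeholder: identify your SD card with dmesg or lsblk first,
# dd will overwrite the target device entirely.
SD=/dev/sdX
sudo umount efi rootfs
sudo kpartx -d CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
sudo dd if=CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw of=$SD bs=4M status=progress conv=fsync
```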
Open a terminal emulator of your choice on your host machine and connect to the serial port on the ZCU102. Insert the SD card on the ZCU102, make sure the dip switches are set to SD boot and power on the board. If everything goes well, you should see an FSBL banner, PMU firmware version and U-boot messages.
U-boot should find our EFI binary, bootaa64.efi (GRUB), and then start the EFI application.
If you needed to build the bootstrap kernel, stop the boot process by pressing “e” in the GRUB boot menu. Add “Image-zcu102-bootstrap.gz earlycon console=ttyPS0,115200 noefi” to the “linux” line. Continue the boot by pressing “Ctrl-x”. If you didn’t need the bootstrap kernel, you may let the boot continue.
The GRUB boot menu delay is short, so you need to be ready and quick to halt the boot. Otherwise GRUB will automatically load the default kernel which may not be configured for zynqmp.
After a short delay, you should see EFI stub messages followed by standard kernel messages.
Login as “root” and you should now be at a Linux prompt.
Before we install the kernel, we need to ensure that the SD/MMC modules are included in the initramfs by adding them to a dracut configuration file. If you need any additional kernel modules for boot that are not included by default, you may add them there as well.
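A minimal sketch of such a dracut drop-in, run as root on the target. The module name is an assumption: on ZynqMP the SD/MMC controller is typically driven by sdhci-of-arasan, but confirm which driver your kernel configuration uses.

```shell
# Module name is an assumption; confirm which driver provides your
# SD/MMC controller before relying on this.
CONF=/etc/dracut.conf.d/zynqmp.conf
cat <<'EOF' > $CONF
add_drivers+=" sdhci_of_arasan "
EOF
```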
Now we can install the kernel RPM package we built. We are going to actually update (-U) the kernel which implies removing the old kernel. That makes sense here since the default kernel will not boot on ZCU102.
In general you will want to use -ivh rather than -Uvh to install a new kernel which will keep the old kernels.
# rpm -Uvh /boot/kernel-*.rpm
Optional: Make any changes required to the GRUB configuration. Since the u-boot UEFI provides only minimal run-time services, you can disable them with the "noefi" kernel command line parameter.
# vi /etc/default/grub
## add "noefi" and any additional kernel options to GRUB_CMDLINE_LINUX then save
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
Finally reboot the system.
# systemctl reboot
Once you get to the GRUB menu, you should see that the first option is your updated kernel.