Build a CentOS 8 System for Zynq UltraScale+ on an OpenStack Cloud Image

This article describes a flow to build a CentOS 8 system for Zynq UltraScale+ which includes building UEFI firmware, building the CentOS 8 kernel and installing these images on a prebuilt CentOS 8 OpenStack cloud image.

Introduction

Embedded Linux systems are typically lightweight and cross compiled with build frameworks such as Yocto, PetaLinux or Buildroot. However, as embedded systems become more powerful and look more like embedded servers, ARM64 machines are capable of running full-featured Linux distributions. Distros such as CentOS provide attractive features: SELinux security enhancements including mandatory access control (MAC), regular security patches, package managers such as Yum or DNF for installing software from trusted remote repositories, a stable ABI for maintaining compatible software and modules, and a well-known compute environment. These distros maintain their own kernel repositories by pulling from mainline and applying patches. Xilinx is committed to upstreaming drivers, so drivers accepted into mainline are automatically merged into the distro kernel source when the distro pulls a corresponding release. Distros also provide a build system, generally based on the package manager, to authenticate source, sign binaries and automate the build (fetch, patch, configure, build, package).

Linux distros such as CentOS generally expect to be built natively, but in this article we will cross compile on an x86_64 CentOS host machine. This article walks through a process to build and run CentOS 8 on a Zynq UltraScale+ device. The system built here is a server build, so it is a headless system with a serial console. The intent is to provide a workflow for building and deploying CentOS 8 on the ZCU102.

In this article you will:

  1. Build a UEFI firmware

  2. Build a bootstrap kernel

  3. Build the kernel RPM packages

  4. Install the UEFI and kernel on a prebuilt raw disk image

Building a CentOS 8 system for Zynq UltraScale+ is currently a proof-of-concept (PoC). You should not assume this builds a Xilinx supported, production-ready CentOS 8 system. Please do your due diligence before deploying CentOS 8 and regression test against your system requirements.

Requirements

  • CentOS 8 VM or bare-metal host machine

    • Minimum hard disk space

      • 25 GB without UEFI build

      • 50 GB with UEFI build

    • Sudo privileges

    • Terminal emulator

    • Internet access

  • ZCU102 evaluation board

    • SD card (8 GB minimum/16 GB recommended)

Porting Philosophy and Strategy

The philosophy was to make the ZCU102 look as much like a standard Linux PC or server as possible and rely on CentOS for most of the heavy lifting. Essentially we want to do as little work as possible. The strategy was to isolate the components: Xilinx tools build the low-level, architecture-dependent components, the UEFI firmware and the DTB, while the distro provides the kernel source, bootloader components and the root filesystem. The focus is on the kernel, which needs to be configured and built to support Xilinx platforms. Fortunately Xilinx has committed to upstreaming drivers, so many of the drivers required to boot the system are already present in the CentOS 8 kernel tree. That said, getting commits accepted upstream can take a long time, so upstream kernel versions may not have the equivalent driver support of a similar kernel version in the Xilinx GitHub repository. If you need patches or drivers that are not upstreamed, you will need to backport them yourself or build a kernel from the Xilinx GitHub repository.

CentOS 8 Host Machine

Install Host Package Dependencies

At this point I assume you have set up a base CentOS 8 machine with sudo privileges, a terminal emulator and internet access. I recommend allocating 25 GB or more of free space. If you are building the UEFI firmware, then you should allocate a minimum of 50 GB. First, let's install some host dependencies.

$ sudo dnf group install "Development Tools"
$ sudo dnf install audit-libs-devel binutils-devel elfutils-devel java-devel \
  kabi-dw ncurses-devel newt-devel numactl-devel openssl-devel pciutils-devel \
  python3-devel python3-docutils xmlto xz-devel libguestfs-tools qemu-img
$ sudo dnf install epel-release

You will also need to install an aarch64 cross compiler.

$ sudo dnf install gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu

As of this writing, the CentOS 8 repos do not include an aarch64 cross compiler, so you may need to manually install one. If the install succeeds, then you may skip to the next section.

CentOS 8 ships GCC 8.2.1, so I recommend sticking with the same version to guarantee ABI compatibility. You may install GCC from any provider such as ARM, Linaro or Xilinx Vitis. The commands below pull the aarch64 cross compiler 8.2.1 binaries from ARM.

$ wget https://developer.arm.com/-/media/Files/downloads/gnu-a/8.2-2019.01/gcc-arm-8.2-2019.01-x86_64-aarch64-linux-gnu.tar.xz
$ tar xf gcc-arm-8.2-2019.01-x86_64-aarch64-linux-gnu.tar.xz

Once the cross compiler is downloaded, extract it to a location of your choice and add the bin directory to your PATH. For example, I installed it in $HOME/bin and prepended PATH in my “.bashrc” file.

PATH="$HOME/bin:$HOME/bin/gcc-arm-8.2-2019.01-x86_64-aarch64-linux-gnu/bin:$PATH"
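Before moving on, it is worth confirming the toolchain is actually reachable; a minimal sanity check (the "not found" message text is my own):

```shell
# Confirm the cross compiler is on PATH before starting any builds.
if command -v aarch64-linux-gnu-gcc >/dev/null 2>&1; then
  # Print the toolchain banner; for the ARM 8.2-2019.01 release it reports GCC 8.2.1.
  aarch64-linux-gnu-gcc --version | head -n1
else
  echo "aarch64-linux-gnu-gcc not found -- check your PATH"
fi
```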

Prepare Workspace

Here we are going to simply create a basic top level directory structure from which we will build our system.

$ export COS_BUILD=$HOME/centos8-dev
$ mkdir -p $COS_BUILD/rpmbuild $COS_BUILD/images/{efi,boot,rootfs} $COS_BUILD/deploy/{efi/dtb/xilinx,boot}

Build the UEFI Firmware

In order to boot a Zynq UltraScale+ device, the boot ROM expects a “boot.bin” binary image. The “boot.bin” is composed of partitions that hold images for various portions of the boot process and run-time. In a Linux system this image generally holds the FSBL, PMU firmware, PL bitstream and u-boot. For the purposes of this article, I am going to ignore the PL bitstream. These images can be built and packaged in several ways, but I’ll show how to build it with Xilinx Yocto.

UEFI is a standardized interface for low-level firmware, which is an evolution from the legacy PC BIOS. Fortunately, UEFI support has been ported to u-boot, so we will be able to take advantage of this when building our system. The UEFI firmware (u-boot) will be able to directly boot the GRUB bootloader, which is provided by the installer without making any software changes. An essential part of this process is the u-boot distro boot mechanism, which will search a list of media for an EFI image, bootaa64.efi, from an EFI disk partition, load it and pass control to it.

I will loosely refer to boot.bin as UEFI firmware.

At this point I assume you understand the low-level boot components required to boot Linux on a ZCU102, so I will not go into the details here. Instead I will provide a summary of the steps to build the UEFI firmware with Xilinx Yocto. If you need more details on the low level boot components or the Xilinx Yocto flow, please see these references.

As of this writing, Yocto does not officially support CentOS 8. To build the UEFI for ZCU102, you will need to install CROPS/Docker or build on a Yocto supported host.

After installing Yocto host package dependencies and cloning the git repos, you just need to source the setupsdk script to initialize the environment. Then bitbake the xilinx-bootbin recipe which will build the UEFI firmware components and package them into a “boot.bin” image.

$ source setupsdk
$ MACHINE=zcu102-zynqmp bitbake xilinx-bootbin

Copy the generated “boot.bin” into the deploy directory we created earlier.

$ cp tmp/deploy/images/zcu102-zynqmp/BOOT-zcu102-zynqmp.bin $COS_BUILD/deploy/efi/boot.bin

Build the Kernel

This section walks through two builds of the kernel. The first is a bootstrap kernel, which is a straightforward make build. The bootstrap kernel does not rely on any kernel modules to mount the final root filesystem. The second kernel is built with the CentOS RPM build system. This build system pulls directly from the CentOS kernel repository, which also includes patches, kernel configurations and build scripts.

Clone the CentOS 8 Kernel Source

Here we are going to clone “centos-git-common” which includes build scripts and the “kernel” repository which includes source and spec files for the RPM build. The “get_sources.sh” script will fetch the kernel source tarball.

$ cd $COS_BUILD/rpmbuild
$ git clone https://git.centos.org/centos-git-common.git
$ git clone https://git.centos.org/rpms/kernel.git
$ cd kernel
$ git checkout -b kernel-4.18.0-147.3.1.el8_1.xlnx imports/c8/kernel-4.18.0-147.3.1.el8_1
$ ../centos-git-common/get_sources.sh

Build a Bootstrap Kernel

Because the installed CentOS kernel image may not include some critical configurations required for Zynq UltraScale+, you may need to build a bootstrap kernel. Once we boot the bootstrap kernel, we can formally install the kernel RPMs, which installs the newly configured kernel and updates the initrd and GRUB configuration. First, check the default CentOS kernel configuration for the required options.

$ grep 'SERIAL_XILINX\|ARCH_ZYNQMP\|SDHCI_OF_ARASAN' SOURCES/kernel-aarch64.config

These are the minimal kernel configs required to boot the ZCU102 from an SD card with serial console.

CONFIG_ARCH_ZYNQMP=y
CONFIG_SERIAL_XILINX_PS_UART=y
CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
CONFIG_MMC_SDHCI_OF_ARASAN=m

If all of these options are enabled, then the installed kernel should boot on the ZCU102 and you can skip to “Build the Kernel RPM Packages.”
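The check above can also be scripted. Here is a minimal sketch; for illustration it writes a demo config file, but in practice you would point CONFIG_FILE at SOURCES/kernel-aarch64.config instead:

```shell
# Check a kernel config for the minimal options needed to boot a ZCU102 from SD.
# For this demo, write the required options to a temporary file; in practice
# set CONFIG_FILE to SOURCES/kernel-aarch64.config.
CONFIG_FILE=$(mktemp)
cat << EOF > "$CONFIG_FILE"
CONFIG_ARCH_ZYNQMP=y
CONFIG_SERIAL_XILINX_PS_UART=y
CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
CONFIG_MMC_SDHCI_OF_ARASAN=m
EOF
missing=0
for opt in CONFIG_ARCH_ZYNQMP CONFIG_SERIAL_XILINX_PS_UART \
           CONFIG_SERIAL_XILINX_PS_UART_CONSOLE CONFIG_MMC_SDHCI_OF_ARASAN; do
  # Accept built-in (=y) or module (=m).
  grep -q "^${opt}=[ym]$" "$CONFIG_FILE" || { echo "missing or disabled: $opt"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required options enabled"
```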

First we need to run the “%prep” stage in our SPEC file which will unpack and patch the kernel source.

$ rpmbuild --define "%_topdir `pwd`" -bp SPECS/kernel.spec

Next we will configure the kernel for bootstrapping.

$ cd $COS_BUILD/rpmbuild/kernel/BUILD/kernel-4.18.0-147.3.1.el8_1/linux-4.18.0-147.3.1.el8.xlnx.aarch64
$ cp ../../../SOURCES/kernel-aarch64.config .config
$ make ARCH=arm64 olddefconfig
$ make ARCH=arm64 menuconfig

Make the configuration changes shown below.

CONFIG_ARCH_ZYNQMP=y
CONFIG_SERIAL_XILINX_PS_UART=y
CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
CONFIG_MMC=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_OF_ARASAN=y
CONFIG_EXT4_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_BLK_DEV_RAM=y
CONFIG_MMC_BLOCK=y
CONFIG_BLK_DEV_DM=y
CONFIG_XFS_FS=y

Then build the kernel.

$ make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- EXTRAVERSION=-zynqmp.bootstrap

Save the compressed image and the configuration of the bootstrap kernel to the deploy directory.

$ cp arch/arm64/boot/Image.gz $COS_BUILD/deploy/boot/Image-zcu102-bootstrap.gz
$ cp .config $COS_BUILD/deploy/boot/config_zcu102_bootstrap

Finally, let’s be diligent and clean the bootstrap build in preparation for the RPM kernel build.

$ make distclean

Build the Kernel RPM Packages

Create a Kernel Configuration Fragment

Create the configuration fragment file “kernel-aarch64-zynqmp.cfg” in the SOURCES directory with the kernel configuration as shown below.

This is a minimal set of options for Zynq UltraScale+. You may add additional configuration options here.

$ cd $COS_BUILD/rpmbuild/kernel/SOURCES
$ cat << EOF > kernel-aarch64-zynqmp.cfg
CONFIG_ARCH_ZYNQMP=y
CONFIG_SERIAL_XILINX_PS_UART=y
CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
CONFIG_MMC_SDHCI_OF_ARASAN=m
CONFIG_I2C_CADENCE=m
CONFIG_GPIO_ZYNQ=m
CONFIG_NET_CADENCE=y
CONFIG_MACB=m
CONFIG_RTC_DRV_ZYNQMP=m
CONFIG_CADENCE_WATCHDOG=m
CONFIG_USB_DWC3=m
CONFIG_XILINX_ZYNQMP_DMA=m
CONFIG_SPI_ZYNQMP_GQSPI=m
EOF

Create a Custom Kernel SPEC File

The SPEC file is the script which controls the build of an RPM package. We will make a few edits to a copy of the default aarch64 SPEC file to support Zynq UltraScale+.

$ cd $COS_BUILD/rpmbuild/kernel/SPECS
$ cp kernel.spec kernel-xlnx.spec
$ vi kernel-xlnx.spec

Locate where “buildid” is commented out and define it as “.xlnx”.

# define buildid .local
%define buildid .xlnx
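For reference, the buildid is appended after the dist tag in the final package name; a small sketch of the resulting string, using the version and release values from this article's build:

```shell
# Sketch: how buildid (.xlnx) appears in the built kernel package name.
# The dist tag (.el8) comes from the build macros; buildid is appended after it.
version=4.18.0
release=147.3.1
dist=.el8
buildid=.xlnx
arch=aarch64
pkg="kernel-${version}-${release}${dist}${buildid}.${arch}"
echo "$pkg"   # kernel-4.18.0-147.3.1.el8.xlnx.aarch64
```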

Add the source for the configuration fragment as shown below. You can add it in sequence with the other Source entries.

Source1000: kernel-aarch64-zynqmp.cfg

Inside the “BuildKernel” function, add the “merge_config.sh” line shown below directly after the existing “cp configs/$Config .config” line. This merges our configuration fragment with the default kernel configuration.

cp configs/$Config .config
scripts/kconfig/merge_config.sh -m -r .config %{SOURCE1000}
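The key property of the merge is that fragment options win on conflict. A simplified illustration of that precedence (this is not the real merge_config.sh, just a sketch of its effect):

```shell
# Illustrate merge precedence: options in the fragment override the same
# options in the base config; the last definition of each symbol wins.
base='CONFIG_MMC_SDHCI_OF_ARASAN=y
CONFIG_ARCH_ZYNQMP=y'
frag='CONFIG_MMC_SDHCI_OF_ARASAN=m'
merged=$(printf '%s\n%s\n' "$base" "$frag" \
  | awk -F= '{v[$1]=$0} END {for (k in v) print v[k]}' | sort)
echo "$merged"
```

With this input the fragment flips CONFIG_MMC_SDHCI_OF_ARASAN from built-in (=y) to module (=m), mirroring what our fragment does to the default aarch64 config.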

Commit the changes you made including the SPEC file and configuration fragment.

$ git add SPECS/kernel-xlnx.spec SOURCES/kernel-aarch64-zynqmp.cfg
$ git commit -m'added zynqmp machine' -a

Build the Kernel RPM Packages

$ cd $COS_BUILD/rpmbuild/kernel
$ rpmbuild --define "%_topdir `pwd`" -bb SPECS/kernel-xlnx.spec --with cross --target aarch64 --with baseonly --without debuginfo \
  --without tools --without perf --without bpftool --without selftests --nodeps --noclean

If the build succeeds, the binary packages are located in the RPMS directory. The last thing we need to do is deploy the DTB and RPM packages we just built.

$ cp BUILDROOT/kernel-4.18.0-147.3.1.el8.xlnx.aarch64/boot/dtb-4.18.0-147.3.1.el8.xlnx.aarch64/xilinx/zynqmp-zcu102-rev1.0.dtb $COS_BUILD/deploy/efi/dtb/xilinx/
$ cp RPMS/aarch64/*.rpm $COS_BUILD/deploy/boot/

Install UEFI and Kernel on a Preinstalled CentOS Disk Image

We will install the bootloader and kernel on a prebuilt CentOS 8 Stream OpenStack cloud disk image.

Installing CentOS 8 from an ISO is beyond the scope of this article. Please see https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/235471261 to install from an ISO.

$ cd $COS_BUILD/images
$ wget https://cloud.centos.org/centos/8-stream/aarch64/images/CentOS-Stream-GenericCloud-8-20200113.0.aarch64.qcow2

Use “virt-customize” to change the root password to “zynqmp” or a password of your choice.

$ virt-customize -a CentOS-Stream-GenericCloud-8-20200113.0.aarch64.qcow2 --root-password password:zynqmp
[   0.0] Examining the guest ...
[  33.4] Setting a random seed
[  33.7] Setting the machine ID in /etc/machine-id
[  33.8] Setting passwords
[  51.3] Finishing off

The downloaded file is a QCOW2 (QEMU copy-on-write) virtual disk, so we need to convert it to a raw disk image.

$ qemu-img convert CentOS-Stream-GenericCloud-8-20200113.0.aarch64.qcow2 CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw

Now let's examine the partitions in this disk image. You will see there are two partitions. The first is the EFI System Partition (ESP), which is a FAT formatted partition. The second is a Linux formatted root partition.

$ fdisk -l CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
Disk CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 079AB561-A1E1-4340-A744-C041B93E2E67

Device                                                 Start      End  Sectors  Size Type
CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw1    2048  1230847  1228800  600M EFI System
CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw2 1230848 17614847 16384000  7.8G Linux filesystem

The first thing we note is that “Disklabel type” is set to “gpt”. However, Zynq UltraScale+ only supports MBR partitioned disks, so we need to convert from GPT to MBR.

$ sgdisk -m 1:2 CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw

Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.

We also need to mark the first partition as bootable.

$ sfdisk -A CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw 1

The bootable flag on partition 1 is enabled now.

The partition table has been altered.
Syncing disks.

Now we can examine the partitions again and note that the “Disklabel type” is now set to “dos” (MBR) and the boot flag is set on the first partition.

$ fdisk -l CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
Disk CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device                                               Boot   Start      End  Sectors  Size Id Type
CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw1 *       2048  1230847  1228800  600M ef EFI
CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw2      1230848 17614847 16384000  7.8G 83 Linux

We need to set up device mappings for the raw disk image we prepared earlier, then mount the two partitions on “efi” and “rootfs” respectively.

If you are installing on a raw disk image created from an ISO, then there may be three partitions: EFI, boot and rootfs. In this case you will want to mount the second partition on “boot” and copy the kernel RPM packages there.

$ cd $COS_BUILD/images
$ sudo kpartx -va CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
add map loop0p1 (253:3): 0 1228800 linear 7:0 2048
add map loop0p2 (253:4): 0 16384000 linear 7:0 1230848
$ sudo mount /dev/mapper/loop0p1 efi
$ sudo mount /dev/mapper/loop0p2 rootfs
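As an aside, if kpartx is not available you can mount a partition directly by byte offset into the raw image. A sketch of the offset arithmetic, using the start sectors from the fdisk output (512-byte logical sectors):

```shell
# Compute byte offsets for "mount -o loop,offset=..." as an alternative to kpartx.
# Start sectors are taken from the fdisk listing of this image.
SECTOR_SIZE=512
EFI_START=2048
ROOTFS_START=1230848
EFI_OFFSET=$((EFI_START * SECTOR_SIZE))        # 1048576
ROOTFS_OFFSET=$((ROOTFS_START * SECTOR_SIZE))  # 630194176
echo "$EFI_OFFSET $ROOTFS_OFFSET"
# The mount command would then look like:
#   sudo mount -o loop,offset=$EFI_OFFSET CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw efi
```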

The Zynq UltraScale+ boot ROM will search for “boot.bin” on the first partition. The boot ROM will load the FSBL and PMU firmware, and the FSBL will in turn load u-boot. U-boot, acting as the UEFI firmware, will search the current partition for the corresponding board DTB file and pass it to the EFI bootloader through the EFI configuration table. If it doesn’t find one, it will pass its own DTB to the EFI bootloader. However, this DTB may not be compatible with your kernel version, so I recommend using the DTB built with the kernel. So let's install the UEFI firmware (boot.bin) and the ZCU102 DTB from the deploy directory as shown below.

$ sudo cp -r ../deploy/efi/* efi/

Since we are not using UEFI secure boot, we will boot GRUB directly. So we need to replace shim (bootaa64.efi) with GRUB (grubaa64.efi).

$ sudo cp efi/EFI/centos/grubaa64.efi efi/EFI/BOOT/bootaa64.efi

The UEFI firmware will search for an image named bootaa64.efi, which is why we needed to rename the EFI image.

Next, let's install the bootstrap kernel and kernel RPM packages into the “boot” directory.

$ sudo cp ../deploy/boot/* rootfs/boot/

Since we are installing on a cloud image, we need to disable cloud-init to make the system look more like a standard distro install.

$ sudo touch rootfs/etc/cloud/cloud-init.disabled

Skip this step if you are installing on a disk image generated from an ISO or want to keep the OpenStack initialization.

Finally, let’s unmount the partitions and cleanup the device mappings.

$ sudo umount efi
$ sudo umount rootfs
$ sudo kpartx -d CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw
loop deleted : /dev/loop0

Now the raw disk image is ready to write to the SD card. With the SD card plugged into your CentOS machine, note the device node of your card (/dev/sdX) and install the image as shown below.

$ sudo dd if=CentOS-Stream-GenericCloud-8-20200113.0.aarch64.raw of=/dev/sdX bs=4M iflag=fullblock oflag=direct status=progress; sync

You may use “dmesg” to find the device node corresponding to your SD card.

Booting ZCU102

Open a terminal emulator of your choice on your host machine and connect to the serial port on the ZCU102. Insert the SD card into the ZCU102, make sure the DIP switches are set to SD boot and power on the board. If everything goes well, you should see an FSBL banner, the PMU firmware version and U-boot messages.

U-boot should find our EFI binary, bootaa64.efi (GRUB), and then start the EFI application.

If you needed to build the bootstrap kernel, stop the boot process by pressing “e” in the GRUB boot menu. Add “Image-zcu102-bootstrap.gz earlycon console=ttyPS0,115200 noefi” to the “linux” line. Continue the boot by pressing “Ctrl-x”. If you didn’t need the bootstrap kernel, you may let the boot continue.

The GRUB boot menu delay is short, so you need to be ready and quick to halt the boot. Otherwise GRUB will automatically load the default kernel which may not be configured for zynqmp.

After a short delay, you should see EFI stub messages followed by standard kernel messages.

Log in as “root” and you should now be at a Linux prompt.

Before we install the kernel, we need to ensure that the SD/MMC modules are included in the initramfs by adding them to a dracut configuration file as shown below. If you need any additional kernel modules for boot that are not included by default, you may add them here as well.

# echo "add_drivers+=\"mmc_block sdhci-of-arasan\"" > /etc/dracut.conf.d/zcu102.conf

Now we can install the kernel RPM packages we built. We are actually going to upgrade (-U) the kernel, which removes the old kernel. That makes sense here since the default kernel will not boot on the ZCU102.

In general you will want to use -ivh rather than -Uvh to install a new kernel, which keeps the old kernels.

# rpm -Uvh /boot/kernel-*.rpm

Optional: Make any changes required to the GRUB configuration. Since the u-boot UEFI has minimal run-time services, you can disable them with the “noefi” kernel command line parameter.

# vi /etc/default/grub
## add "noefi" and any additional kernel options to GRUB_CMDLINE_LINUX then save
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

Finally reboot the system.

# systemctl reboot

Once you get to the GRUB menu, you should see that the first option is your updated kernel.

References

CentOS Sources

UEFI on Top of U-Boot