Building Xen Hypervisor with PetaLinux 2022.2

This page includes information on how to build and deploy Xen on Xilinx boards, including both Zynq UltraScale+ MPSoC and Versal boards.


Overview

The guide below shows you how to build Xen, boot it, and then run some example configurations. The steps use PetaLinux and assume you have some working knowledge of it. Before starting you need to create a PetaLinux project; these instructions assume that a default PetaLinux reference design is used unchanged. The default PetaLinux configuration ships images that are ready to boot Xen (the pre-built images). You can use those, or you can manually edit the recipes and build Xen yourself. The pre-built images can be found inside a PetaLinux project in the directory pre-built/linux/xen.

You can either use the pre-builts or follow the next section to configure and build Xen yourself. If you are using the pre-builts you can skip to the booting Xen section for your release version.

Refer to the PetaLinux documentation for more information on creating a project.

Configuring and Building XEN from Source using PetaLinux

First let's create a PetaLinux project:

petalinux-create -t project --template zynqMP -n xilinx-zcu
cd xilinx-zcu/

Then let's enable Xen to be built by default:

$ petalinux-config -c rootfs

Now let's enable Xen and Xen tools:

Petalinux Package Groups --->
    packagegroup-petalinux-xen --->
        [*] packagegroup-petalinux-xen

Filesystem Packages --->
    console --->
        tools --->
            xen --->
                [*] xen-tools

Now we need to change the rootFS to be an INITRD:
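In petalinux-config the relevant option is typically found under Image Packaging Configuration (menu names may vary slightly between PetaLinux releases):

    $ petalinux-config
        Image Packaging Configuration --->
            Root filesystem type --->
                (X) INITRD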

And change:
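The second change is typically the INITRAMFS/INITRD image name, so that the full root filesystem is packaged as a separate rootfs.cpio.gz. This is illustrative; confirm the exact option name in your PetaLinux release:

        Image Packaging Configuration --->
            (petalinux-image-minimal) INITRAMFS/INITRD Image name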

Any images built will NOT have the rootFS included in the Image that PetaLinux produces, so you will need to edit any scripts or configuration that expect the rootFS to be built in. This includes the Xen configs mentioned later.

You can still use the prebuilt Image file, which does still include the rootFS, to boot DomU.

We also want to edit the device tree to build in the extra Xen-related configuration.


Edit this file:
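In a default PetaLinux project this is typically the user device tree include (the path may differ for your project):

    project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi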

and add this line: /include/ "xen.dtsi".
It should look like this for hardware:
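A sketch, assuming the default system-user.dtsi contents of a zynqMP template project:

    /include/ "system-conf.dtsi"
    /include/ "xen.dtsi"
    / {
    };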

or like this for QEMU:

There is a bug on QEMU where the CPUs running in SMP sometimes cause hangs. To avoid this we only tell Xen about a single CPU.
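A sketch of one way to expose only a single CPU to Xen for QEMU. The BSP-provided dtsi may achieve this differently (for example via a QEMU-specific include), so treat the node deletions below as illustrative only:

    /include/ "system-conf.dtsi"
    /include/ "xen.dtsi"

    /* Work around the QEMU SMP hang by describing only one CPU */
    /delete-node/ &cpu1;
    /delete-node/ &cpu2;
    /delete-node/ &cpu3;

    / {
    };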

Also edit this file:

If the project was created from a BSP, add this line to it: file://xen.dtsi. If the project was not created from a BSP, use file://zynqmp-xen.dtsi or file://versal-xen.dtsi as appropriate for ZU+ or Versal.

The first few lines of the file should look like the following:
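A sketch of how the bbappend typically begins, assuming the standard project-spec/meta-user/recipes-bsp/device-tree/device-tree.bbappend location, the 2022.x override syntax, and a BSP-based project (for a non-BSP project, substitute zynqmp-xen.dtsi or versal-xen.dtsi as noted above):

    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

    SRC_URI:append = " file://system-user.dtsi"
    SRC_URI:append = " file://xen.dtsi"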

Then run petalinux-build:
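    $ petalinux-build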

TFTP Booting Xen and Dom0

Run Xen dom0 on QEMU

To use the prebuilt Xen, run the following:
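For example, to boot to the U-Boot prompt under QEMU so that Xen can then be loaded over TFTP (exact invocation may differ for your setup):

    $ petalinux-boot --qemu --prebuilt 2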

To use the Xen you built yourself, run the following:
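For example, booting the U-Boot you built yourself under QEMU:

    $ petalinux-boot --qemu --u-boot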

Root login and auto-login are disabled by default; the default user is petalinux, and its password expires on first login.

Run Xen dom0 on Hardware

To use the prebuilt Xen on hardware, run the following:
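For example, loading the prebuilt U-Boot onto the board over JTAG (exact invocation may differ for your setup):

    $ petalinux-boot --jtag --prebuilt 2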

To use the Xen you built yourself, run the following:
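For example, using the images you built yourself:

    $ petalinux-boot --jtag --u-boot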

On the serial terminal, you should eventually see something similar to the following. When you do, press any key to stop the autoboot:
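The exact banner depends on the U-Boot build, but the autoboot countdown looks similar to this:

    Hit any key to stop autoboot:  0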

You will need to manually set the serverip (it is typically 10.0.2.2 for QEMU, adjust it according to your network setup):
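At the U-Boot prompt, for example:

    setenv serverip 10.0.2.2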

To download and boot Xen, follow the directions in the next section.

If running on QEMU, use xen-qemu.dtb; otherwise, use xen.dtb.

TFTPing Xen using ImageBuilder

ImageBuilder is a set of open source community scripts to automatically configure a Xen system with Dom0 and multiple Dom0-less VMs for booting. ImageBuilder can generate a U-Boot script that loads all of the binaries automatically and boots the full system quickly. Clone the repository:
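For example (the repository is hosted on the Xen Project GitLab; check the Xen Project wiki page below for the current location):

    $ git clone https://gitlab.com/xen-project/imagebuilder.git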

The main script is scripts/uboot-script-gen and its usage is described in detail on the Xen Project wiki page as well as in the project's README.md.

PetaLinux prebuilt binaries can be used in a config file as follows for uboot-script-gen:
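A sketch of such a config file, assuming the prebuilt file names from pre-built/linux/xen and a 2 GB memory map; adjust MEMORY_END and the command lines for your board:

    MEMORY_START="0x0"
    MEMORY_END="0x80000000"

    DEVICE_TREE="xen.dtb"
    XEN="xen"
    DOM0_KERNEL="xen-Image"
    DOM0_RAMDISK="xen-rootfs.cpio.gz"
    NUM_DOMUS=0

    UBOOT_SOURCE="boot.source"
    UBOOT_SCRIPT="boot.scr"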

Now uboot-script-gen can be used to generate boot.scr:
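For example, with the config file above saved as "config" and the binaries in the current directory:

    $ bash <path_to>/imagebuilder/scripts/uboot-script-gen -c config -d . -t tftpb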

Or, to use Xen you built yourself:
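A sketch: the flow is the same, but the config points at the self-built artifacts from images/linux/ instead of the prebuilts, for example:

    DEVICE_TREE="system.dtb"
    DOM0_KERNEL="Image"
    DOM0_RAMDISK="rootfs.cpio.gz"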

Boot the system with the following uboot commands (assuming the tftp serverip is 10.0.2.2 and ipaddr is 10.0.2.15, which are typically the values for QEMU):
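A sketch of the U-Boot sequence, loading boot.scr at the address it was packaged for (0xC00000):

    setenv serverip 10.0.2.2
    setenv ipaddr 10.0.2.15
    tftpb 0xC00000 boot.scr
    source 0xC00000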

The Xen and Dom0 command lines are generated by uboot-script-gen. 

If you would like to change anything, for instance to increase the dom0 memory allocation, you can always do so by editing boot.source.

Simply do the following:

  • edit boot.source and change dom0_mem to dom0_mem=2G

  • regenerate boot.scr with the following command:
    mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot.source boot.scr

SD Booting Xen and Dom0

To boot Xen from an SD card you need to copy the following files to the boot partition of the SD card:

  1. boot.scr

  2. xen-Image (or Image if using self-built)

  3. The compiled device tree file renamed to system.dtb (xen.dtb or xen-qemu.dtb for QEMU from the pre-built images, system.dtb from a self-built PetaLinux build)

  4. xen

  5. xen-rootfs.cpio.gz (or rootfs.cpio.gz if using self-built)

When using the pre-built images from the BSP, copy these files from <project-dir>/pre-built/linux/xen.

Booting with ImageBuilder

ImageBuilder's script uboot-script-gen can be used to generate a U-Boot script that loads all of the binaries automatically from the MMC. Call uboot-script-gen with the following command, assuming that $sdbootdev is 0 and $partid is 1:
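For example (illustrative; see the ImageBuilder README for the exact -t syntax in your version):

    $ bash <path_to>/imagebuilder/scripts/uboot-script-gen -c config -d . -t "load mmc 0:1"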

Copy the generated boot.scr onto the boot partition of the SD card, in addition to the files mentioned in the list above. Boot the system with the following U-Boot command:
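For example:

    load mmc 0:1 0xC00000 boot.scr
    source 0xC00000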

In case of QEMU, one needs to generate the sd.img using the following command:
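The original command is not reproduced here; one illustrative way to build a FAT-formatted sd.img with mtools is:

    $ dd if=/dev/zero of=sd.img bs=1M count=512
    $ mkfs.vfat sd.img
    $ mcopy -i sd.img boot.scr xen xen-Image xen.dtb xen-rootfs.cpio.gz ::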

And then boot with the following command:
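A sketch, assuming the image is passed through to QEMU as an SD card via petalinux-boot; the drive index may need adjusting for your platform:

    $ petalinux-boot --qemu --prebuilt 2 --qemu-args "-drive file=sd.img,if=sd,format=raw,index=1"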

Starting Simple Additional Guests

If running on QEMU, we will need to set up a port mapping for port 22 (SSH) in our VM.
In this example, we forward the host's port 2222 to the VM's port 22:
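A sketch, assuming QEMU user-mode networking; the port forward is passed through petalinux-boot, and you may need to merge it with the -net options your QEMU invocation already uses:

    $ petalinux-boot --qemu --prebuilt 2 --qemu-args "-net nic -net user,hostfwd=tcp::2222-:22"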

 

Once you hit the U-Boot prompt, follow the steps in the earlier section on how to run Xen dom0.
When dom0 has finished booting, we will need to copy a guest Image into dom0's filesystem.
We will use the base prebuilt PetaLinux Image as our domU guest.

 

If running on QEMU, we use scp's -P option to connect to the host's port 2222, where QEMU will forward the connection to the guest's port 22.
To target QEMU, run the following on the host:
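For example, copying the prebuilt base Image into dom0 (it can then be moved to /boot inside dom0 with sudo):

    $ scp -P 2222 pre-built/linux/images/Image petalinux@localhost:/home/petalinux/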


If running on hardware, run the following on the host:
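For example, where <board-ip> is the address dom0 obtained on your network:

    $ scp pre-built/linux/images/Image petalinux@<board-ip>:/home/petalinux/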

If you would prefer to load DomU's kernel to the guest via SD card, you can follow the instructions in the "Starting Linux guests with Pass-through networking" section.

The xen-image-minimal rootFS includes some prepared configurations that you can use. These are located in '/etc/xen/':
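You can list them from the dom0 prompt; the examples used in this guide are example-simple.cfg, example-pvnet.cfg and example-passnet.cfg:

    ls /etc/xen/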

To start a simple guest, copy the xen-rootfs.cpio.gz to /boot by running the following command on the host:
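For example (QEMU case; copy to the petalinux user's home directory and then move the file to /boot inside dom0 with sudo):

    $ scp -P 2222 pre-built/linux/xen/xen-rootfs.cpio.gz petalinux@localhost:/home/petalinux/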

On the target (dom0), edit /etc/xen/example-simple.cfg so that it contains the following:
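A sketch of a minimal configuration, assuming the guest kernel and ramdisk were placed in /boot as described above; the name and sizes are illustrative:

    name = "guest0"
    kernel = "/boot/Image"
    ramdisk = "/boot/xen-rootfs.cpio.gz"
    extra = "console=hvc0 rdinit=/sbin/init"
    memory = 1024
    vcpus = 1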

Run the following from the dom0 prompt:
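    xl create -c /etc/xen/example-simple.cfg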

You will see another instance of Linux booting up.
At any time you can leave the console of the guest and get back to dom0 by pressing ctrl+].
Once at the dom0 prompt you can list the guests from dom0:
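    xl list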

To get back to the guest's console:
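For example, assuming the guest is named guest0 as in the config above:

    xl console guest0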

You can create further guests by running commands similar to the following:
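For example, by copying the config and giving each guest a unique name:

    cp /etc/xen/example-simple.cfg /etc/xen/guest1.cfg
    # edit guest1.cfg and change the name = "..." line, then:
    xl create -c /etc/xen/guest1.cfg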

To destroy a guest:
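    xl destroy guest0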

CPU Pinning

The following will only work on QEMU with multi-core enabled or on real hardware.
When running multiple guests with multiple Virtual CPUs, Xen will schedule the various vCPUs onto real physical CPUs.
The rules and considerations taken in scheduling decisions depend on the chosen scheduler and the configuration.
To avoid having multiple vCPUs share a single pCPU, it is possible to pin a vCPU onto a pCPU and to give it exclusive access.

To create a simple guest with one Virtual CPU pinned to Physical CPU #3, you can do the following:
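A sketch: add a hard affinity to the guest config, or pin an already-running vCPU with xl vcpu-pin:

    # in the xl config file:
    vcpus = 1
    cpus = "3"

    # or, for a running guest, pin vCPU 0 to pCPU 3:
    xl vcpu-pin guest0 0 3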

 

Another way to pin virtual CPUs onto Physical CPUs is to create dedicated cpu-pools.
This has the advantage of isolating the scheduling instances.

By default a single cpu-pool named Pool-0 exists. It contains all of the physical CPUs.
We will now create our pool named rt using the credit2 scheduler.
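A sketch of the xl commands, assuming pCPU 3 is moved out of Pool-0 and into the new pool:

    xl cpupool-cpu-remove Pool-0 3
    xl cpupool-create name=\"rt\" sched=\"credit2\"
    xl cpupool-cpu-add rt 3
    xl cpupool-list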

Now we are ready to create a guest with a single VCPU pinned to physical CPU #3.
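For example, add the following to the guest config before creating it:

    pool = "rt"
    vcpus = 1
    cpus = "3"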

Starting Linux Guests with Para-Virtual Networking (PV Network)

This time we will run QEMU slightly differently. We will create two port mappings, one for dom0's SSH port and another for the Para-Virtual domU.
The default IP addresses assigned by QEMU's builtin DHCP server start from 10.0.2.15 and count upwards.
Dom0 will be assigned 10.0.2.15, the next guest 10.0.2.16 and so on.
Below is the command line that maps host port 2222 to dom0's port 22 and 2322 to the domU's port 22.
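A sketch, assuming QEMU user-mode networking with the default 10.0.2.x addressing; merge with any -net options your QEMU invocation already uses:

    $ petalinux-boot --qemu --prebuilt 2 --qemu-args \
        "-net nic -net user,hostfwd=tcp::2222-10.0.2.15:22,hostfwd=tcp::2322-10.0.2.16:22"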

Now, follow the instructions from section 1 on how to boot Xen dom0.


Once you are at the dom0 prompt and have copied a domU image (see earlier steps) we will need to set up the networking.
In this example, we will configure the guests to directly join the external network by means of a bridge.

First of all, we need to de-configure the default setup.


Kill the dhcp client for eth0:
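Assuming the BusyBox udhcpc client used by the default rootfs:

    killall udhcpc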

List and remove existing addresses from eth0:
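    ip addr show dev eth0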

In our example the address is 10.0.2.15/24:
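    ip addr del 10.0.2.15/24 dev eth0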

Then, create the bridge and start DHCP on it for dom0:
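A sketch using iproute2 and udhcpc; the bridge name xenbr0 is the conventional Xen bridge name and matches the vif line used below:

    ip link add xenbr0 type bridge
    ip link set eth0 master xenbr0
    ip link set xenbr0 up
    udhcpc -i xenbr0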

You should see something like the following:

As before, we will use the pre-defined examples in /etc/xen/.

Edit example-pvnet.cfg to look like this:
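A sketch, extending the simple config with a PV network interface attached to the bridge created above:

    name = "guest0"
    kernel = "/boot/Image"
    ramdisk = "/boot/xen-rootfs.cpio.gz"
    extra = "console=hvc0 rdinit=/sbin/init"
    memory = 1024
    vcpus = 1
    vif = [ 'bridge=xenbr0' ]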

Then start the DomU:
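    xl create -c /etc/xen/example-pvnet.cfg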

You should see a new Linux instance boot up.
Now we'll SSH from the host into the domU over its Para-Virtual network interface:
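For example (QEMU case; on hardware, SSH directly to the address the domU obtained via DHCP):

    $ ssh -p 2322 petalinux@localhost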

Starting Linux Guests with Pass-through Networking

It is possible to directly assign the network peripheral to a domU on both Zynq UltraScale+ and Versal. The following example is for Zynq UltraScale+.

Turn xen.dtb into xen.dts:
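    dtc -I dtb -O dts -o xen.dts xen.dtb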

Then, edit xen.dts by adding xen,passthrough; under the node of the device to assign, in this case ethernet@ff0e0000:
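For example (only the added property is shown; the rest of the node is unchanged):

    ethernet@ff0e0000 {
            ...
            xen,passthrough;
    };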

Convert xen.dts back into xen.dtb:
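    dtc -I dts -O dtb -o xen.dtb xen.dts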

Create a custom rootfs for dom0 that contains /boot/xen-Image and /boot/xen-rootfs.cpio.gz by following these steps:
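A sketch using cpio; the output name xen-rootfs-dom0.cpio.gz is just an example:

    mkdir rootfs-dom0 && cd rootfs-dom0
    gunzip -c ../xen-rootfs.cpio.gz | cpio -idm
    cp ../xen-Image ../xen-rootfs.cpio.gz boot/
    find . | cpio -o -H newc | gzip -9 > ../xen-rootfs-dom0.cpio.gz
    cd ..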

The following is the config file to be used with imagebuilder/scripts/uboot-script-gen:
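A sketch, reusing the TFTP config from earlier but pointing at the modified xen.dtb and the custom dom0 ramdisk:

    MEMORY_START="0x0"
    MEMORY_END="0x80000000"

    DEVICE_TREE="xen.dtb"                     # edited above with xen,passthrough
    XEN="xen"
    DOM0_KERNEL="xen-Image"
    DOM0_RAMDISK="xen-rootfs-dom0.cpio.gz"    # custom rootfs built above
    NUM_DOMUS=0

    UBOOT_SOURCE="boot.source"
    UBOOT_SCRIPT="boot.scr"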

An alternative method that does not require modifying xen.dtb is presented below. We instruct ImageBuilder that /axi/ethernet@ff0e0000 will be used for passthrough.
This way it will automatically add the xen,passthrough property to the corresponding node in the host dtb:
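A sketch of the relevant additions; check the ImageBuilder README for the exact variable names in your version (DOMU_PASSTHROUGH_PATHS is the option that marks the node in the host dtb):

    NUM_DOMUS=1
    DOMU_PASSTHROUGH_PATHS[0]="/axi/ethernet@ff0e0000"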

When dom0 boots up, you can confirm that Ethernet is not up:
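For example; with the GEM assigned to the guest, dom0 should no longer see the device:

    ip link show eth0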

Edit /etc/xen/example-passnet.cfg as follows:
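A sketch; the partial device tree path, IRQ and MMIO values below are illustrative for GEM3 (ethernet@ff0e0000) on Zynq UltraScale+ and must be verified against your device tree:

    name = "guest0"
    kernel = "/boot/Image"
    ramdisk = "/boot/xen-rootfs.cpio.gz"
    extra = "console=hvc0 rdinit=/sbin/init"
    memory = 1024
    vcpus = 1
    device_tree = "/etc/xen/passthrough-example-net.dtb"
    dtdev = [ "/axi/ethernet@ff0e0000" ]
    irqs = [ 95 ]
    iomem = [ "ff0e0,1" ]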

Then, boot the guest0:
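    xl create -c /etc/xen/example-passnet.cfg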

And confirm that guest0 can access Ethernet:
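For example, from the guest console:

    udhcpc -i eth0
    ping -c 3 10.0.2.2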

Starting a Guest with a Passthrough SD Card

It is possible to directly assign the MMC controller to a domU on both Zynq UltraScale+ and Versal. The following example is for Zynq UltraScale+ only. It allows a Xen DomU full unmediated access to an SD card plugged into the board.

Add the following lines to the xl config file for the VM:
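A sketch; the IRQ and MMIO values are illustrative for mmc@ff170000 (SD1) on Zynq UltraScale+ and must be verified against your device tree:

    device_tree = "/etc/xen/mmc@ff170000.dtb"
    dtdev = [ "/axi/mmc@ff170000" ]
    irqs = [ 81 ]
    iomem = [ "ff170,1" ]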

The partial device tree is here https://github.com/Xilinx/xen-passthrough-device-trees/blob/master/device-trees-2021.2/zcu102/mmc%40ff170000.dts. Download and compile the partial device tree:
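For example, after saving the .dts locally:

    dtc -I dts -O dtb -o "mmc@ff170000.dtb" "mmc@ff170000.dts"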

Also, you will need to add xen,passthrough; under the mmc node in the host device tree (system.dtb or xen.dtb) so that it does not get automatically assigned to dom0:
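For example (only the added property is shown):

    mmc@ff170000 {
            ...
            xen,passthrough;
    };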

or you can use the following config file for the ImageBuilder without having to modify the host dtb:
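A sketch of the relevant additions (see the ImageBuilder README for the exact variable names):

    NUM_DOMUS=1
    DOMU_PASSTHROUGH_PATHS[0]="/axi/mmc@ff170000"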

Finally, make sure to run the following commands at boot from xsdb to configure the system so that normal-world MMC DMA goes via the SMMU:

Dom0less

For MMC assignment to a dom0less guest, the following are the contents of the ImageBuilder config:
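A sketch of a dom0less config with the MMC assigned to DomU 0; file names and variable names are illustrative and should be checked against the ImageBuilder README:

    MEMORY_START="0x0"
    MEMORY_END="0x80000000"

    DEVICE_TREE="system.dtb"
    XEN="xen"
    DOM0_KERNEL="xen-Image"
    DOM0_RAMDISK="xen-rootfs.cpio.gz"

    NUM_DOMUS=1
    DOMU_KERNEL[0]="Image"
    DOMU_RAMDISK[0]="rootfs.cpio.gz"
    DOMU_PASSTHROUGH_PATHS[0]="/axi/mmc@ff170000"
    DOMU_PASSTHROUGH_DTB[0]="mmc@ff170000.dtb"

    UBOOT_SOURCE="boot.source"
    UBOOT_SCRIPT="boot.scr"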

Regenerate the boot.scr and boot.source scripts as usual with ImageBuilder’s uboot-script-gen.

Remember to issue the three special writes at boot time to configure the system so that normal-world MMC DMA goes via the SMMU.

Starting a Guest with a Passthrough SATA Disk

It is possible to directly assign the SATA controller to a domU on both Zynq UltraScale+ and Versal. The following example is for Zynq UltraScale+ only. It allows a Xen DomU full unmediated access to any SATA disks connected to it.

Add the following lines to the xl config file for the VM:
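A sketch; the IRQ and MMIO values are illustrative for ahci@fd0c0000 on Zynq UltraScale+ and must be verified against your device tree:

    device_tree = "/etc/xen/passthrough-example-sata.dtb"
    dtdev = [ "/axi/ahci@fd0c0000" ]
    irqs = [ 165 ]
    iomem = [ "fd0c0,2" ]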

Where passthrough-example-sata.dts is https://github.com/Xilinx/xen-passthrough-device-trees/blob/master/device-trees-2021.2/zcu102/ahci%40fd0c0000.dts  

Then, you need to add xen,passthrough; under the ahci@fd0c0000 node in the host device tree so that it does not get automatically assigned to dom0, or append the following to the ImageBuilder config file:
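A sketch of the relevant additions (see the ImageBuilder README for the exact variable names):

    NUM_DOMUS=1
    DOMU_PASSTHROUGH_PATHS[0]="/axi/ahci@fd0c0000"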

Finally, make sure to run the following commands at boot from xsdb to configure the system so that normal-world SATA DMA goes via the SMMU:

Starting a Guest with Passthrough UART1 Serial

It is possible to assign serial@ff010000 to a domU.

To do so, use the following ImageBuilder config file (called config_serial1_xen):
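The original config is not reproduced here; a sketch based on the TFTP config used earlier, with serial@ff010000 marked for passthrough (check the ImageBuilder README for the exact variable names):

    MEMORY_START="0x0"
    MEMORY_END="0x80000000"

    DEVICE_TREE="system.dtb"
    XEN="xen"
    DOM0_KERNEL="xen-Image"
    DOM0_RAMDISK="xen-rootfs.cpio.gz"

    NUM_DOMUS=1
    DOMU_PASSTHROUGH_PATHS[0]="/axi/serial@ff010000"

    UBOOT_SOURCE="boot.source"
    UBOOT_SCRIPT="boot.scr"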

Run  "bash <path_to>/imagebuilder/scripts/uboot-script-gen -c config_serial1_xen -d . -t tftpb"

Add the following lines to the domU config file:
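A sketch; the IRQ and MMIO values are illustrative for serial@ff010000 (UART1) and must be verified against your device tree:

    device_tree = "/etc/xen/serial@ff010000.dtb"
    dtdev = [ "/axi/serial@ff010000" ]
    irqs = [ 54 ]
    iomem = [ "ff010,1" ]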

Where serial@ff010000.dtb is the compiled partial device tree for UART1 from the xen-passthrough-device-trees repository linked above.

If your domU is Linux, make sure to add console=ttyPS0,115200 to its kernel command line. In case of dom0less DomUs, you will need to edit the boot.source script to add the command line option, then recreate boot.scr with mkimage.

Starting a Guest with Passthrough UART0 Serial

It is possible to assign serial@ff000000 to a domU.

For this, you need to assign serial1 (i.e. serial@ff010000) to Xen and Dom0.

Use the following ImageBuilder config file (called config_serial0_xen):
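A sketch, moving the Xen console to UART1 (serial1) and marking serial@ff000000 for passthrough; the variable names and the Xen command line are illustrative:

    MEMORY_START="0x0"
    MEMORY_END="0x80000000"

    DEVICE_TREE="system.dtb"
    XEN="xen"
    XEN_CMD="console=dtuart dtuart=serial1"
    DOM0_KERNEL="xen-Image"
    DOM0_RAMDISK="xen-rootfs.cpio.gz"

    NUM_DOMUS=1
    DOMU_PASSTHROUGH_PATHS[0]="/axi/serial@ff000000"

    UBOOT_SOURCE="boot.source"
    UBOOT_SCRIPT="boot.scr"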

Run  "bash <path_to>/imagebuilder/scripts/uboot-script-gen -c config_serial0_xen -d . -t tftpb"

Add the following lines to the domU config file:
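A sketch; the IRQ and MMIO values are illustrative for serial@ff000000 (UART0) and must be verified against your device tree:

    device_tree = "/etc/xen/serial@ff000000.dtb"
    dtdev = [ "/axi/serial@ff000000" ]
    irqs = [ 53 ]
    iomem = [ "ff000,1" ]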

Where serial@ff000000.dtb is the compiled partial device tree for UART0 from the xen-passthrough-device-trees repository linked above.

Using vTPM with Xen

Run the project:

At the U-Boot prompt, run the following to start the Xen and Linux boot:
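The original commands are not shown here; a sketch using the same TFTP/boot.scr flow as in the earlier sections:

    setenv serverip 10.0.2.2
    tftpb 0xC00000 boot.scr
    source 0xC00000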

Once dom0 is done booting, create a vTPM socket using the following:
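A sketch using swtpm's socket mode; the state directory and socket path are examples:

    mkdir -p /tmp/vtpm
    swtpm socket --tpm2 --tpmstate dir=/tmp/vtpm \
        --ctrl type=unixio,path=/tmp/vtpm/swtpm-sock --log level=20 &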

cd to /etc/xen. SCP xen-Image and xen-rootfs.cpio.gz over if they are not already present in dom0's rootfs.

Next, edit the example-simple.cfg file to look like this:

Finally, create the domU using the following:
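For example, from /etc/xen:

    xl create -c example-simple.cfg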

You should see prints like the ones below, where Xen and QEMU's vTPM exchange data with swtpm:

Please refer to the attached log file for detailed logs.

NOTE: For other TPM-driver-related tests, enable the TPM driver in your kernel configuration and then use that kernel Image.

© Copyright 2019 - 2022 Xilinx Inc.