Xen Hypervisor internals for Zynq UltraScale+

This page covers various topics on how Xen works internally when running on the ZynqMP. 

VM Memory Map

Without Xen

When running Operating Systems or bare-metal applications natively (without a hypervisor) on the ZynqMP, software needs to target the native memory map of the ZynqMP as described in the TRM.
This includes adhering to the base addresses of each device, linking code against the base addresses of memory (DDR, OCM or TCM), and using the interrupt numbers allocated and described in the ZynqMP TRM.
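
As a minimal sketch, a bare-metal program running natively might access the ZynqMP UART0 controller at its TRM base address; the register offset and the helper function below are illustrative only:

#include <stdint.h>

#define ZYNQMP_UART0_BASE  0xFF000000UL  /* native UART0 base address per the ZynqMP TRM */
#define UART_FIFO_OFFSET   0x30UL        /* Cadence UART TX/RX FIFO register */

static void uart0_putc(char c)
{
    volatile uint32_t *fifo = (volatile uint32_t *)(ZYNQMP_UART0_BASE + UART_FIFO_OFFSET);
    *fifo = (uint32_t)c;  /* the write goes straight to the physical device address */
}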

Guests on top of Xen - Dom0:

When running Operating Systems on top of Xen, things work a little bit differently.
Xen will create a memory map for every guest.

For Dom0, Xen will create a memory map that is essentially the same as the native host platform's, i.e., on the ZynqMP, dom0 sees the same memory map as it would when running without Xen.
One difference is that dom0 will not see any memory that is not allocated to it, e.g. memory used by Xen or by other guests.
Another special property of dom0 is that all devices and all memory are mapped 1:1.

This means that a Guest Physical Address (an Intermediate Physical Address, or IPA, in ARM terminology) maps to the identical Physical Address.

Guests on top of Xen - DomU:

For any additional guests (DomU), Xen will create an artificial memory map and interrupt map.
This Xen VM memory map looks the same on all ARMv8 platforms running a given version of Xen, making any OS or bare-metal program written against it portable to any ARMv8 platform running Xen (excluding the use of device pass-through).

In Xen 4.8, these are the important parts for us:

GICv2 Distributor and CPU interface:
GUEST_GICD_BASE xen_mk_ullong(0x03001000)
GUEST_GICC_BASE xen_mk_ullong(0x03002000)

2 banks of memory, low and high:
GUEST_RAM0_BASE xen_mk_ullong(0x40000000) /* 3GB of low RAM @ 1GB */
GUEST_RAM0_SIZE xen_mk_ullong(0xc0000000)
GUEST_RAM1_BASE xen_mk_ullong(0x0200000000) /* 1016GB of RAM @ 8GB */
GUEST_RAM1_SIZE xen_mk_ullong(0xfe00000000)
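
As a hedged sketch, a DomU bare-metal program would use these guest addresses rather than the native ZynqMP ones; the GICD_CTLR offset is the standard GICv2 distributor control register and the enable function is illustrative only:

#include <stdint.h>

#define GUEST_GICD_BASE  0x03001000UL  /* virtual GICv2 distributor in the Xen guest map */
#define GICD_CTLR        0x000UL       /* GICv2 distributor control register */

static void vgic_distributor_enable(void)
{
    volatile uint32_t *ctlr = (volatile uint32_t *)(GUEST_GICD_BASE + GICD_CTLR);
    *ctlr |= 0x1;  /* enable interrupt forwarding on the virtual distributor */
}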

All the details of the Xen VM memory map are described in xen/include/public/arch-arm.h in the Xen source tree.

Users extend the Xen VM memory map by mapping in memory areas from the host platform using iomem lines in the VM configuration.
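
A hedged sketch of such a configuration, assuming a user wants to map in the ZynqMP UART1 (physical base 0xFF010000, interrupt 54; both values are given only for illustration): the iomem entry takes a hexadecimal page frame number and a page count, and an irqs entry forwards the matching physical interrupt.

iomem = [ "0xff010,1" ]  # map one page at physical 0xFF010000 into the DomU
irqs  = [ 54 ]           # forward the corresponding interrupt to the DomU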

Share-ability and Memory Attributes


Xen uses the 2-stage MMU in ARMv8 cores to create the virtualized memory map used by guests.

The 2nd translation stage allows Xen to:

1. Isolate guests from each other
2. Create arbitrary memory maps that don't need to map 1:1 to physical memory
3. Control Share-ability and Memory Attributes for memory regions

When ARMv8 cores process memory transactions through the MMU, page-table fields control whether memory is shared within the inner or outer shareability domains, and the semantics of the transaction in terms of strict ordering, gathering, early acknowledgement and so on. Both stage 1 (the guest's page-tables) and stage 2 (Xen's page-tables) can specify the share-ability and memory attributes.
When translation is done, the MMU merges the stage 1 and stage 2 attributes (essentially selecting the option that satisfies both stages, as the examples below illustrate).
Details on the rules for combining the attributes can be found in the ARMv8 Architecture Reference Manual, issue A.e, section D4.5.3 "Combining the stage 1 and stage 2 attributes...".

Examples:
If either stage specifies Outer shareable, the MMU will use Outer shareable.
If either stage specifies Device memory, the MMU will use Device memory.
If either stage specifies non-Gathering, the MMU will use non-Gathering.
etc.
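
A simplified model of this combining rule, a sketch only and not the exact algorithm from the ARMv8 spec, could look like this; the enum values are ordered so that the "winning" attribute from the examples above compares greater:

/* Simplified model of stage-1/stage-2 attribute combining (illustrative only). */
enum mem_type     { NORMAL_CACHED, NORMAL_NONCACHED, DEVICE };          /* Device always wins */
enum shareability { NON_SHAREABLE, INNER_SHAREABLE, OUTER_SHAREABLE };  /* Outer always wins  */

static enum mem_type combine_type(enum mem_type s1, enum mem_type s2)
{
    return s1 > s2 ? s1 : s2;  /* e.g. Normal Cached + Device => Device */
}

static enum shareability combine_share(enum shareability s1, enum shareability s2)
{
    return s1 > s2 ? s1 : s2;  /* e.g. Inner Shareable + Outer Shareable => Outer Shareable */
}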

In Xen 4.8, all mappings created for dom0 are created with the most relaxed settings, i.e. Normal Memory, Cached and Outer Shareable. This essentially gives the dom0 guest full control over the attributes through its own stage-1 page-tables.

For domUs, Xen will map the vGIC as Device nGnRE and allocated memory as Normal Memory, Cached and Outer Shareable.
Any memory regions mapped in by the user (via iomem lines in the VM config) will be mapped as Device nGnRE.
