...
```
$ mmc dev $sdbootdev && mmcinfo; load mmc $sdbootdev:$partid 0xC00000 boot.scr; source 0xC00000
```
Graphical Desktop in Dom0
Zynq UltraScale+ (ZU+) only.
To get a graphical desktop, e.g. matchbox, working in dom0, it is necessary to add two SMIDs to the device tree: SMID 0xce3 for zynqmp-display@fd4a0000 and SMID 0xce4 for dma@fd4c0000. The attached DTB comes with the necessary modifications.
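If you need to apply the equivalent change to your own device tree instead of using the attached DTB, the additions would look roughly like the overlay sketch below. This is an illustration only: the node labels (&zynqmp_display, &zynqmp_dpdma) and the &smmu phandle are assumptions and must be replaced with the labels used in your actual source tree.

```
/* Sketch: attach the two SMIDs via the SMMU. Labels are assumptions. */
&zynqmp_display {               /* zynqmp-display@fd4a0000 */
	iommus = <&smmu 0xce3>;     /* SMID 0xce3 */
};

&zynqmp_dpdma {                 /* dma@fd4c0000 */
	iommus = <&smmu 0xce4>;     /* SMID 0xce4 */
};
```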
Starting simple additional guests
If running on QEMU, we'll need to set up a port mapping for port 22 (SSH) in our VM.
In this example, we forward the host's port 2222 to the VM's port 22.
```
$ petalinux-boot --qemu --u-boot --qemu-args "-net nic -net nic -net nic -net nic -net user,tftp=pre-built/linux/xen,hostfwd=tcp:127.0.0.1:2222-10.0.2.15:22"
```
Once you hit the U-Boot prompt, follow the steps in the earlier section on how to run Xen dom0.
When dom0 has finished booting, we'll need to copy a guest image into dom0's filesystem.
We'll use the base prebuilt PetaLinux Image as our domU guest.
If running on QEMU, we use scp's -P option to connect to the host's port 2222, where QEMU will forward the connection to the guest's port 22.
To target QEMU, run the following on the host:
```
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -P 2222 pre-built/linux/xen/xen-Image root@localhost:/boot/Image
```
If running on hardware, run the following on the host:
```
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no pre-built/linux/xen/xen-Image root@<board-ip>:/boot/Image
```
If you would prefer to load the domU kernel to the guest via SD card, you can follow the instructions in the "Starting Linux guests with Pass-through networking" section.
The xen-image-minimal rootfs includes some prepared configurations that you can use; they are located in /etc/xen/.
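The exact contents of the shipped configurations vary between PetaLinux releases, but a minimal xl guest configuration looks roughly like the sketch below. All values here are illustrative assumptions; check the actual file in /etc/xen/ for the real settings.

```
# Illustrative sketch of a minimal xl guest config (e.g. example-simple.cfg).
# All values are assumptions; consult the shipped file for the real ones.
name   = "guest0"              # domain name as shown by 'xl list'
kernel = "/boot/Image"         # the guest kernel copied over earlier
memory = 256                   # guest RAM in MiB
vcpus  = 2                     # number of virtual CPUs
extra  = "console=hvc0"        # guest kernel command line (illustrative)
```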
To start a simple guest, run the following from the dom0 prompt:
```
# xl create -c example-simple.cfg
```
You'll see another instance of Linux booting up.
At any time you can leave the guest's console and get back to dom0 by pressing Ctrl+].
Once at the dom0 prompt, you can list the running guests with 'xl list'.
To get back to the guest's console:
```
# xl console guest0
```
You can create further guests, for example by running:
```
# xl create example-simple.cfg name=\"guest1\"
# xl create example-simple.cfg name=\"guest2\"
root@plnx_aarch64:/etc/xen# xl list
Name         ID   Mem  VCPUs  State  Time(s)
Domain-0      0   512      1  r-----    79.8
guest0        1   256      2  ------    93.7
guest1        2   256      2  ------    26.6
guest2        3   256      2  ------     1.8
```
To destroy a guest:
```
# xl destroy guest0
```
CPU Pinning
The following only works on QEMU with multi-core enabled or on real hardware.
When running multiple guests with multiple Virtual CPUs, Xen will schedule the various vCPUs onto real physical CPUs.
The rules and considerations taken in scheduling decisions depend on the chosen scheduler and the configuration.
To avoid having multiple vCPUs share a single pCPU, it is possible to pin a vCPU onto a pCPU and to give it exclusive access.
To create a simple guest with one Virtual CPU pinned to Physical CPU #3, you can do the following:
```
xl create example-simple.cfg 'name="g0"' 'vcpus="1"' 'cpus="3"'
```
Another way to pin virtual CPUs onto physical CPUs is to create dedicated CPU pools.
This has the advantage of isolating the scheduling instances.
By default a single CPU pool named Pool-0 exists; it contains all the physical CPUs.
We'll now create a pool named rt using the credit scheduler:
```
xl cpupool-create 'name="rt"' 'sched="credit"'
xl cpupool-cpu-remove Pool-0 3
xl cpupool-cpu-add rt 3
```
Now we are ready to create a guest with a single vcpu pinned to physical CPU #3.
```
xl create /etc/xen/example-simple.cfg 'vcpus="1"' 'pool="rt"' 'cpus="3"' 'name="g0"'
```
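Command-line overrides like these can equally be written into the guest's config file, which is convenient when the pinning should apply every time the guest is created. A hypothetical sketch of the relevant keys (mirroring the overrides above; the rest of the config is assumed to come from example-simple.cfg):

```
# Hypothetical additions to a copy of example-simple.cfg;
# these keys mirror the command-line overrides above.
name  = "g0"
vcpus = 1        # a single virtual CPU
pool  = "rt"     # place the guest in the 'rt' CPU pool
cpus  = "3"      # pin its vCPU to physical CPU #3
```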
Starting Linux guests with Para-Virtual networking (PV network)
...