Xen Shared Memory and Interrupts Between VMs
This article describes how to configure VM-to-VM notifications using event channels.
Taxonomy
Xen: the Xen Project hypervisor
VM: Virtual Machine
OS: Operating System
Dom0: privileged VM, booted directly from Xen
DomU: regular unprivileged VM, started from Dom0 using the “xl” tool
Dom0less DomU: regular unprivileged VM, started from Xen at boot (in parallel with Dom0)
Introduction
A simple, yet effective, way to set up VM-to-VM communication between Xen guests is plain shared memory and notifications. Shared memory can be used to set up a ring buffer and exchange data, while notifications can be used to avoid having to poll for updates on the ring.
Xen offers several ways to share memory between regular DomUs, between Dom0 and DomUs, and between Dom0less DomUs. See this page for how to set up shared memory.
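As a concrete illustration of the ring idea, the shared page can be laid out as a small producer/consumer ring. The sketch below is illustrative only: the field names and sizes are assumptions, and the example later in this article uses an even simpler "string plus port" layout.

/* illustrative only: not part of the Xen ABI or of the example code */
struct shm_ring {
    volatile uint32_t head;    /* write index, updated by the producer */
    volatile uint32_t tail;    /* read index, updated by the consumer */
    uint8_t data[4096 - 8];    /* payload area filling the rest of the page */
};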
Notifications using event channels
Xen comes with “event channels”: a simple, yet powerful and flexible way for domains to send each other notifications.
Linux already comes with complete support for event channels. You can use the functions it provides to allocate and bind event channels and to send notifications to other domains.
If you are using another OS, or would like to know how it works under the hood, this article provides information and code snippets on how to use Xen event channels in a bare-metal OS or RTOS without any prior support for them.
Example
xen.h is an easy-to-embed header file that can be included in any application. It provides a very small implementation of the Xen interfaces, just enough to get Xen event channels working.
apu.c is the example application that makes use of xen.h to set up an event channel. It is used by two domains, the sender and the receiver; its behavior changes depending on the domid.
Receiving Xen events in general
PPI
Xen events are delivered as GUEST_EVTCHN_PPI, statically defined as interrupt 31. Firstly, register a handler in your OS for PPI 31.
gic_init(GUEST_EVTCHN_PPI);
aarch64_set_irq_h(irq_handler);
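For reference, GUEST_EVTCHN_PPI is the value defined in Xen's public arch-arm.h. gic_init() and aarch64_set_irq_h() are helpers from the example application; another OS would use its own GIC configuration and interrupt registration APIs to achieve the same result.

/* from Xen's public/arch-arm.h: the event channel notification PPI */
#define GUEST_EVTCHN_PPI 31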
Shared_info page
Information about event channels is provided on a memory page called shared_info because it is shared between the VM and Xen. In order to access the shared_info page it is necessary to register it:
xen_register_shared_info(shared_info);
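Under the hood, registering the page boils down to a XENMEM_add_to_physmap hypercall that maps the shared_info frame at a guest physical address of the VM's choosing. The following is a minimal sketch of such a registration, assuming the xen_hypercall() wrapper used in the rest of this article; the structure layout and constant values come from Xen's public headers (memory.h and xen.h), while register_shared_info() and shared_info_page are illustrative names.

#include <stdint.h>

#define DOMID_SELF               0x7FF0
#define XENMEM_add_to_physmap    7      /* memory_op sub-command */
#define XENMAPSPACE_shared_info  0
#define HYPERVISOR_memory_op     12     /* hypercall number */

/* layout of struct xen_add_to_physmap on arm64 */
struct xen_add_to_physmap {
    uint16_t domid;   /* DOMID_SELF */
    uint16_t size;    /* only used for gmfn_range, left as 0 */
    uint32_t space;   /* XENMAPSPACE_shared_info */
    uint64_t idx;     /* 0 for the shared_info page */
    uint64_t gpfn;    /* guest frame where the page should appear */
};

/* a page-aligned guest page that the shared_info frame will replace */
static uint8_t shared_info_page[4096] __attribute__((aligned(4096)));

static int register_shared_info(void)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,
        .gpfn  = ((uintptr_t)shared_info_page) >> 12,
    };

    return xen_hypercall(XENMEM_add_to_physmap, (unsigned long)&xatp, 0, 0,
                         HYPERVISOR_memory_op);
}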
Event Channels Parsing
Upon receiving an interrupt from Xen, we can check the bitmasks on the shared_info page to know exactly which event was delivered. xen.h provides an implementation of the parsing function, which can be called from the interrupt handler.
/* event handling */
static inline void handle_event_irq(struct shared_info *s,
                                    void (*do_event)(unsigned int event))
{
    uint64_t l1, l2, l1i, l2i;
    unsigned int port;
    int cpu = 0;
    struct vcpu_info *vcpu_info = &s->vcpu_info[cpu];

    vcpu_info->evtchn_upcall_pending = 0;
    mb();

    /* atomically fetch and clear the first-level selector */
    l1 = xchg(&vcpu_info->evtchn_pending_sel, 0);
    while ( l1 != 0 )
    {
        l1i = __ffs(l1);
        l1 &= ~(1UL << l1i);

        /* atomically fetch and clear the second-level pending word */
        l2 = xchg(&s->evtchn_pending[l1i], 0);
        while ( l2 != 0 )
        {
            l2i = __ffs(l2);
            l2 &= ~(1UL << l2i);

            /* each 64-bit word of evtchn_pending covers 64 ports */
            port = (l1i * sizeof(uint64_t) * 8) + l2i;
            do_event(port);
        }
    }
}
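As an example of how the pieces fit together, the interrupt handler registered earlier for the event channel PPI can simply forward to handle_event_irq(). The snippet below is only a sketch: shared_info_ptr and process_port() are hypothetical names, the handler is assumed to take no arguments, and the real example code may organize this differently.

/* sketch: wire the PPI handler to the event channel parser */
static struct shared_info *shared_info_ptr;

static void process_port(unsigned int port)
{
    /* application-specific reaction to a notification on "port",
     * e.g. consume new data from the shared-memory ring */
}

static void irq_handler(void)
{
    handle_event_irq(shared_info_ptr, process_port);
}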
Allocating and Binding Event Channels
The previous section described how to receive event channel notifications in general. This section describes how to allocate and bind one event channel so that it can be used for notifications between domains.
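For reference, the hypercall sub-commands and argument structures used in the rest of this section are the standard ones from Xen's public event_channel.h; a minimal xen.h is expected to define something equivalent to the following.

#include <stdint.h>

typedef uint16_t domid_t;
typedef uint32_t evtchn_port_t;

/* event_channel_op sub-commands */
#define EVTCHNOP_bind_interdomain   0
#define EVTCHNOP_send               4
#define EVTCHNOP_alloc_unbound      6

/* hypercall number passed as the last argument to xen_hypercall() */
#define HYPERVISOR_event_channel_op 32

struct evtchn_alloc_unbound {
    domid_t dom, remote_dom;     /* IN */
    evtchn_port_t port;          /* OUT: the newly allocated local port */
};

struct evtchn_bind_interdomain {
    domid_t remote_dom;          /* IN */
    evtchn_port_t remote_port;   /* IN */
    evtchn_port_t local_port;    /* OUT */
};

struct evtchn_send {
    evtchn_port_t port;          /* IN: local port to notify */
};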
In apu.c the domains detect their domids by issuing a debug hypercall.
void debug_get_domid()
{
    register uintptr_t a0 asm("x0") = 0;

    /* Xen debug trap: the domid comes back in x0 */
    __asm__ __volatile__("hvc 0xfffd\n"
                         : "=r" (a0)
                         : "0" (a0));
    domid = a0;
}
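A possible top-level flow, just to illustrate how the domid selects the role; RECEIVER_DOMID, receiver() and sender() are hypothetical names, not taken from the example code.

/* hypothetical: pick the role based on the detected domid */
debug_get_domid();
if (domid == RECEIVER_DOMID)
    receiver();   /* allocate the event channel and publish its port */
else
    sender();     /* read the port, bind to it and send notifications */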
The receiver domain allocates a new unbound event channel by calling EVTCHNOP_alloc_unbound:

struct evtchn_alloc_unbound alloc;

alloc.dom = DOMID_SELF;
alloc.remote_dom = remote_domid;
alloc.port = 0;
ret = xen_hypercall(EVTCHNOP_alloc_unbound, (unsigned long)&alloc, 0, 0,
                    HYPERVISOR_event_channel_op);
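On the receiver side, alloc.port now contains the newly allocated local port, which needs to be communicated to the sender. The snippet below is a sketch of one way to publish it; the layout (a "go" readiness string at offset 0 and the port at offset 4) mirrors the sender-side code shown below and is an assumption of this example, not a Xen requirement.

/* publish the local port at offset 4 of the shared page... */
memcpy(shared_mem + 4, &alloc.port, sizeof(alloc.port));
mb();

/* ...then signal readiness to the sender */
strcpy(shared_mem, "go");
mb();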
The sender domain needs to know the remote event channel number, typically referred to as port, to bind to it. As the remote port number is dynamically allocated by the EVTCHNOP_alloc_unbound hypercall, it is best if the port number is passed over shared memory from the receiver to the sender.

/* wait for readiness signal */
while (1)
{
    if (strcmp(shared_mem, "go") == 0)
        break;
    mb();
}
mb();

/* read port number of the other domain */
memcpy(&remote_port, shared_mem + 4, sizeof(remote_port));
The sender domain issues an EVTCHNOP_bind_interdomain call to bind to the remote port:

struct evtchn_bind_interdomain bind;

bind.remote_dom = remote_domid;
bind.remote_port = remote_port;
bind.local_port = 0;
ret = xen_hypercall(EVTCHNOP_bind_interdomain, (unsigned long)&bind, 0, 0,
                    HYPERVISOR_event_channel_op);
The sender domain can start sending notifications to the receiver domain by issuing EVTCHNOP_send hypercalls:

struct evtchn_send send;

send.port = bind.local_port;
xen_hypercall(EVTCHNOP_send, (unsigned long)&send, 0, 0,
              HYPERVISOR_event_channel_op);
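Event channels are bidirectional: once the channel is bound, the receiver can notify the sender over the same channel by issuing EVTCHNOP_send on its own local port, i.e. the port returned by EVTCHNOP_alloc_unbound. A sketch of the receiver-side call:

/* receiver side: notify the sender back on the same channel */
struct evtchn_send send;

send.port = alloc.port;
xen_hypercall(EVTCHNOP_send, (unsigned long)&send, 0, 0,
              HYPERVISOR_event_channel_op);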
© Copyright 2019 - 2022 Xilinx Inc.