Zynq UltraScale+ MPSoC - IPI Messaging Example
The following example design demonstrates how to use the Inter-Processor Interrupt (IPI) messaging interface between the PMU Firmware and the APU/RPU/MicroBlaze processors located in the PS and the PL. The example helps users understand how to add custom message capabilities to the PMU Firmware, and familiarizes them with the different software drivers that can be used to take advantage of the IPI hardware.
Introduction
The IPI hardware is used for communication between the different processors available in the Zynq UltraScale+ MPSoC device, through a series of buffers and interrupt signals. This is the mechanism used by the PMU Firmware to provide services to the APU or RPU processors through the XilPM library. In addition to these services, users might want to implement their own specific service in the PMU Firmware with communication capabilities.
The following example designs showcase how to modify the PMU Firmware code to add a custom module that makes use of the IPI communication layer, and how to implement a bare-metal application in the RPU and MicroBlaze processors as well as a Linux application in the APU that makes use of the IPI.
IPI (Inter-Processor Interrupt)
The IPI hardware is extensively described in a specific section of the Zynq UltraScale+ MPSoC TRM (UG1085). The implementation is based on multiple interrupt registers and message buffers and does not impose any specific protocol. The message buffers are limited to 32 bytes for a request and 32 bytes for a response, so larger amounts of data can be exchanged by using the buffers to carry pointers to larger shared buffers.
As with many of the other controllers in the device, the embedded software package provides specific drivers that can be used by the different processors. It is important to note that the drivers are only available for the APU and the RPU, so processors implemented in the PL (e.g. MicroBlaze) cannot use the driver and need to implement their own register access instead.
Mailbox Library
The XilMailbox library provides the top-level hooks for sending or receiving an IPI message using the Zynq UltraScale+ MPSoC IPI hardware.
Libmetal
The libmetal library provides common user APIs to access devices, handle device interrupts, and request memory across different operating environments. In this case, the library provides communication APIs that can be used from Linux userspace without using any specific driver to access the hardware.
Application
The example application demonstrates messaging between the PMU and the different processors available in the device. The control flow is simple: the PMU Firmware monitors incoming messages to detect when all the processors have started executing. Once a message has been received from each of them, it starts sending messages to them periodically.
Hardware Block Design
The hardware design required to implement this example is fairly simple, with two MicroBlaze processors with dedicated interrupt controllers that are connected to the IPI channel 7 and 8 signals. These interrupt signals are exposed to the PL side when S_AXI_LPD is selected in the IPI-Master Mapping configuration shown in the image below. The MicroBlaze processors are also connected to the Processing Subsystem through the HP0/HP1 ports in order to be able to access the IPI address space.
The Processing Subsystem configuration is the ZCU102 board default, except for the IPI configuration. The image below shows the selected channel assignment, which assigns two interrupts to the APU. The reason for this configuration is that channel 0 is dedicated to the zynqmp-ipi-mailbox kernel driver, which is used by the zynqmp-firmware kernel driver.
Software
PMU Firmware
The PMU Firmware architecture is extensively described in the Zynq UltraScale+ MPSoC Software Developers Guide (UG1137). The two key elements for this example are the concept of PMU Firmware Modules and the Handling of IPI within the PMU Firmware.
Custom Module
As discussed in the PMU Firmware Usage section of UG1137, the PMU Firmware already provides a custom module initialization prototype that can be used to create a new module. Based on the code sample provided in the documentation, a custom IPI messaging module has been added to the source code, defining a custom IPI ID and adding a scheduled task (with an XPFW_IPI_MSG_SEND_TIME period) to implement the example functionality.
static const XPfw_Module_t *IpiModPtr; /* Handle of the custom module */

/* Configuration handler: schedules the periodic message-sending task */
static void IpiModCfgInit(const XPfw_Module_t *ModPtr, const u32 *CfgData, u32 Len)
{
    XPfw_CoreScheduleTask(ModPtr, XPFW_IPI_MSG_SEND_TIME, XPfw_SendIpi);
}

/* Module entry point: creates the module and registers its handlers */
void ModCustomInit(void)
{
    IpiModPtr = XPfw_CoreCreateMod();
    XPfw_CoreSetCfgHandler(IpiModPtr, IpiModCfgInit);
    XPfw_CoreSetIpiHandler(IpiModPtr, IpiHandler, XPFW_IPI_ID);
}
IPI Handler
The IPI handler in the custom module is only triggered when the IPI manager layer receives a message carrying the associated module ID in the header (in this case XPFW_IPI_ID). For this example design, the handler is used to receive the initial message from the application processors, so that the scheduled task starts sending messages to the processors that have reported in.
static void IpiHandler(const XPfw_Module_t *ModPtr, u32 IpiNum, u32 SrcMask, const u32* Payload, u8 Len)
{
    /* Mark the sending channel as active on its first message */
    for (u32 idx = 0; idx < sizeof(channel)/sizeof(channel[0]); idx++) {
        if ((channel[idx].mask == SrcMask) && (channel[idx].init == 0)) {
            channel[idx].init = 1;
            XPfw_Printf(DEBUG_PRINT_ALWAYS, "PMUFW: IPI received from %s\r\n", channel[idx].name);
        }
    }
}
Each channel is represented by a mask value, an init flag, and a string with the channel's name that is printed to the serial port for monitoring purposes.
/* Channel descriptor: source mask, init flag, and name for logging */
typedef struct {
    u32 mask;   /* IPI source mask of the channel */
    u32 init;   /* Set once the first message is received */
    char *name; /* Name printed to the serial port */
} ipi_ch_t;

/* IPI Channels used in the example */
ipi_ch_t channel[] = {
    {IPI_PMU_0_IER_RPU_0_MASK, 0, "RPU0"},
    {IPI_PMU_0_IER_RPU_1_MASK, 0, "RPU1"},
    {1U << 24, 0, "MB0"},  /* MicroBlaze #0 assigned to PL0 */
    {1U << 25, 0, "MB1"},  /* MicroBlaze #1 assigned to PL1 */
    {1U << 27, 0, "APU"}   /* APU assigned to PL3 */
};
IPI Messaging Task
The IPI messaging in the custom module is performed through the specific API provided by the PMU Firmware core. The XPfw_IpiWriteMessage and XPfw_IpiReadResponse functions are used for sending the message and reading the reply, while XPfw_IpiTrigger and XPfw_IpiPollForAck are used for control purposes. The task checks whether all the channels have been initialized and, if so, sends a message to every channel, reads the response, and checks that the counter value returned is equal to the one that was sent.
R5#0 Application
The RPU0 application is based on the IPI driver, which is used to send the initial message to the PMU Firmware as well as to reply to messages incoming from the PMU. This driver provides fine-grained control over generating the interrupts as well as over writing messages and responses.
RPU to PMU message
The application starts by sending an initial message to PMU channel 0 with the custom module ID, so that it is handled by the custom module discussed earlier on this page. No additional data is added to the message, as it is only used by the PMU Firmware to monitor active processors.
PMU to RPU messages
Messages from the PMU are handled by the IPI interrupt handler registered in the application code. The interrupt handler makes use of the IPI driver’s API to read the incoming message and return the same value in the response buffer.
R5#1 Application
The RPU1 application is based on the mailbox library, which is used to send the initial message to the PMU Firmware as well as to reply to messages incoming from the PMU. This library abstracts the IPI driver API for a more intuitive approach, but lacks fine-grained control over generating the interrupts. For example, the mailbox library generates the interrupt signal to the target processor when writing to the response buffer using the XMailbox_SendData function. Additionally, the mailbox driver checks whether the GIC distributor is already initialized and, if so, skips the interrupt exception configuration step. This implementation is not suitable for a multicore cluster, as the second processor running the XIpiPs_RegisterIrq function will skip the exception configuration. Therefore, initializing the GIC and enabling exceptions is a required step in the application code.
RPU to PMU message
The application starts by sending an initial message to PMU channel 0 with the custom module ID using the XilMailbox API, such that an interrupt is generated automatically in the target. No additional data is added to the message, as it is only used by the PMU Firmware to monitor active processors.
PMU to RPU messages
The mailbox interrupt handler is responsible for taking care of incoming messages on the IPI interface. In this case, using both the receive and send API calls, the message is looped back to the PMU without any change.
MicroBlaze #0 and #1 Applications
The MicroBlaze applications do not have access to the IPI driver in their domain BSP code, therefore individual register accesses are required to handle the communication channel.
MicroBlaze #0:
MicroBlaze #1:
MB to PMU
The application makes use of register access functions to write the initial message and to trigger the IPI interrupt signal.
PMU to MB
Similar to the code used by the RPU#0 processor with the IPI driver API, the interrupt handler is a simple buffer loopback implementation, finishing with a register access that clears the interrupt signal; this also clears the corresponding OBS bit observed by the PMU, acting as an acknowledgment.
Linux
The Linux application cannot access the IPI register space directly, as userspace applications do not have direct access to physical memory. Instead, the application makes use of the libmetal library to gain access to the required IPI address space, and implements the message handling on top of it. As documented in the Libmetal and OpenAMP User Guide, the libmetal implementation for Linux is based on the UIO kernel driver, which defines the available address space.
Application
The Linux application is quite similar to the MicroBlaze applications in that libmetal is only used to gain read/write access to the IPI address space. There is no other driver implementing the IPI functionality, therefore both message writing and interrupt triggering are performed with individual write operations. The main difference is the need to initialize the libmetal devices and memory regions as well as the interrupt handler.
Device-Tree
As mentioned previously, the libmetal library is built on top of the UIO kernel driver, which requires device-tree nodes describing the address space available to the driver. Additionally, the UIO driver requires a kernel command-line parameter to define the UIO driver's id name, using the uio_pdrv_genirq.of_id parameter.
The following system-user.dtsi file can be used in a PetaLinux-based Linux image:
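A minimal sketch of such a node is shown below. The compatible string, register addresses, and interrupt number are illustrative placeholders and must match the IPI channel and message buffer assignment of the actual design; the system-user.dtsi in the repository is the authoritative version.

```dts
/* Illustrative only: addresses, interrupt number, and compatible string
 * must match the IPI channel assigned to Linux in the actual design. */
/include/ "system-conf.dtsi"
/ {
    chosen {
        /* Bind the UIO driver to the node below via its compatible string */
        bootargs = "console=ttyPS0,115200 uio_pdrv_genirq.of_id=ipi-uio";
    };
};

&amba {
    ipi0: ipi@ff340000 {
        compatible = "ipi-uio";             /* matches of_id above */
        reg = <0x0 0xff340000 0x0 0x1000>,  /* IPI channel registers */
              <0x0 0xff990000 0x0 0x10000>; /* IPI message buffers  */
        interrupt-parent = <&gic>;
        interrupts = <0 29 4>;              /* channel's SPI, level-high */
    };
};
```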
Example Source Files
This example design has been tested using a ZCU102 board and the Vivado/Vitis/PetaLinux 2023.1 release.
It can easily be reproduced using the following files in the GitHub repository:
zcu102_ipi_bd.tcl to regenerate the BD design in a Vivado project targeting the ZCU102 board
main_r5_0.c, main_r5_1.c, main_mb0.c and main_mb1.c as baremetal application code targeting the different processors in the system
xpfw_mod_custom.c as custom PMU Firmware module code
testapp.c as Linux application
Example Results
© Copyright 2019 - 2022 Xilinx Inc.