Versal PLM / R5 / A72 IPI Messaging

This page describes an example demonstrating how to add an additional Inter-Processor Interrupt (IPI) messaging interface between the PLM and APU/RPU software. The pre-defined IPI functions in the PLM and client libraries suffice for most cases. This page helps users understand how additional custom IPI messaging can be added as needed. Please consider the PLM's memory limitations when making such customizations.

Introduction

IPI messages are used by the R5 and A72 to communicate client requests to the PLM server of a Versal Adaptive SoC. In addition to the available APIs for IPI messages, users might desire other application-specific communication between the APU/RPU and the PLM. The following prototype illustrates the PLM changes required to run a bare-metal application on the R5 CPU and a Linux application on the A72 CPU. The prototype demonstrates methods, but is not intended to be a tutorial on any specific aspect or tool. The user is expected to have basic skills with Xilinx tools including Vitis, PetaLinux, and Vivado.

Expectations

Vivado, Vitis, and PetaLinux projects are not provided; users are expected to build the infrastructure for the prototype.

Making changes to the Versal PLM should be carefully considered in light of the following details.

  1. The PLM firmware is a key component of Versal in that it provides many services to the application processing units such as the RPU and APU. Only minor changes, rather than significant architectural changes, should be made to the PLM.

  2. Any changes to the PLM firmware should be minimized and localized to allow for updates to future releases of the PLM.

  3. The PLM firmware runs on the PMC hardware, which has limited available memory; adding features may require existing features to be removed at build time. The PLM software is documented at https://docs.xilinx.com/r/en-US/ug1304-versal-acap-ssdg/Platform-Loader-and-Manager.

Background

IPI (Inter-Processor Interrupt)

The IPI message infrastructure of the Versal hardware provides a mesh-like network that allows communication between the application CPUs and the PLM. The infrastructure is primitive, providing only interrupts and message buffers without any specific protocol. The hardware message buffers are limited to 32 bytes for a request and 32 bytes for a response. These concepts are important for understanding the following prototype. See the Versal TRM for more details about IPI at https://docs.xilinx.com/r/en-US/am011-versal-acap-trm/Introduction.

Libmetal

Libmetal is a library provided by Xilinx which abstracts hardware (metal) to a level such that the library API can be used with bare metal (standalone BSP), FreeRTOS, or Linux. The library is part of the OpenAMP project at https://github.com/OpenAMP/libmetal. AMD Xilinx maintains its own fork of the repository, which might be less up-to-date and is the source for the library delivered with Vitis. The Xilinx libmetal library is described in the Libmetal and OpenAMP User Guide (UG1186).

Mailbox Library

The mailbox library, Xilmailbox, is provided by Xilinx in Vitis. It provides an abstraction layer for non-Linux environments above the IPI hardware for a simpler communication API. More details can be found at Xilmailbox Wiki Page and XilMailbox Library.

Prototype Description

The prototype is designed to illustrate communication between the APU/RPU and the PLM. A single IPI message consists of a request message followed by a response message; this is referred to as unidirectional communication for this prototype. Alternatively, IPI request/response messages can be sent in both directions between the PLM and an application CPU; this is referred to as bidirectional communication for this prototype. The RPU application and PLM use bidirectional communication, while the APU Linux application and PLM use unidirectional communication in either direction.

Independent trip counters are utilized between the PLM and the APU/RPU cores to illustrate the messaging between CPUs. Each application increments and displays the trip counter. The prototype communicates at a 100 ms rate so that the application CPU can display elapsed seconds using the trip counter. This allows the user to easily see the prototype working.

Security Considerations

The PLM runs secure on the PMC, while Linux applications on the A72 CPUs are typically non-secure. A non-secure application, such as one on Linux, that sends IPI messages to the PLM demonstrates methods that could be used for nefarious activities such as shutting down nodes in the system. Users should carefully evaluate security needs when adding an IPI message from a non-secure application.

Tools Details

The following prototype has been built and tested with Xilinx 2022.1 tools (Vitis, PetaLinux, Vivado). Minor changes might be required for other versions.

Source Code

The source code for this prototype is available in the plm-r5-a72-ipi directory of the software-prototypes project at .

Building The Prototype

The prototype requires a number of infrastructure components including a Vivado project, a PetaLinux project and multiple Vitis projects. The Vivado project is required only for selecting the IPI for the R5 CPU. The PetaLinux project must have libmetal in the kernel and rootfs along with a sysroot for the Linux application in Vitis. The Vitis projects include the PLM, the R5 application and the A72 Linux application. The following subparagraphs provide guidance for the provided source code and patches.

PLM

The XilPLMI Library in the PLM needs to be altered for a new user module. The patch in the xplmi directory is applied to the xlnx_rel_v2022.1 branch of the embeddedsw repository at . The PLM build should use the patched version of the xilplmi library.

The PLM source code is altered to add the new user_ipi_demo.* source files and is then rebuilt.

RPU

The RPU application is a Vitis built project based on the Vivado project and the provided source code.

APU Linux

The A72 Linux application is a Vitis built project with the PetaLinux sysroot to support the libmetal library and the provided source code.

Prebuilt Images

A set of bootable prebuilt images (built with the 2022.1 tools release) for the VCK190 are also included in the repository such that the prototype can run easily.

Running Prototype

The RPU application starts printing very early during boot, and then Linux boots. Note that the applications share the UART; the timing generally avoids conflicts which would cause garbled output. The Linux application is manually started from the SD card. The seconds displayed are based on the trip counter divided down to a seconds rate, while the payload shows the actual trip counter being received in the IPI messages. The following illustrates typical output from the applications.

```
metal: info: metal_linux_dev_open: checking driver vfio-platform,ff3f0000.ipi_buf,(null)
metal: info: metal_uio_dev_open: No IRQ for device ff3f0000.ipi_buf.
metal: info: metal_linux_dev_open: checking driver vfio-platform,ff360000.ipi,(null)
On R5: Seconds: 68 Payload: 1360
On A72: Seconds: 1: Payload: 20
On R5: Seconds: 69 Payload: 1380
On A72: Seconds: 2: Payload: 40
On R5: Seconds: 70 Payload: 1400
On A72: Seconds: 3: Payload: 60
On R5: Seconds: 71 Payload: 1420
On A72: Seconds: 4: Payload: 80
On R5: Seconds: 72 Payload: 1440
```

PLM Software Architecture

The PLM software is described in the Versal Adaptive SoC System Software Developers Guide (UG1304) at . The following subparagraphs are only intended to give a brief overview with details required for this prototype.

Libraries and Modules

The PLM software is divided into a number of modules and libraries to allow reuse of the PLM across a number of Versal variants. Users can add their own module as demonstrated in this prototype.

XilPLMI Library

The XilPLMI library provides a lot of the infrastructure of the PLM software. Module, command and IPI message processing is provided by the library. This library is altered for the prototype to add a new module.

PLM Tasks

The PLM software is architected with a single-threaded, run-to-completion design; users must recognize this design pattern in any changes, as demonstrated in this prototype. Adding run time for new processing takes time from existing processing. Higher layers, such as Linux, acting as clients to the PLM server are affected by a slow-responding PLM, such that the entire system including Linux can become slow or hang. The PLM software utilizes a periodic time-slice-based scheduler, layered above the basic task scheduler, which schedules time-based tasks. A ten millisecond time slice is utilized for scheduling.

PLM IPI Messages

The PLM software is a server which receives commands over IPI messaging from the client CPUs. The PLM expects any received IPI messages to be in a specific format. The format is not documented outside of the source code. The following diagram illustrates the IPI message format for the PLM to process received messages.

Accepting other message formats would require a more significant change; as a result, adhering to the format is the best way to minimize impact to the PLM. IPI request messages sent from the PLM, however, can be in any format, as only the lower layer IPI messaging APIs of the XPlmi library are used.

PLM Changes

Sending IPI Messages

Sending IPI messages from the PLM as the IPI message source eliminates the need to format messages to match the PLM IPI protocol. Sending an IPI message request includes waiting for a response message from the destination CPU, which can take some time. The PLM prototype illustrates the use of a periodic task to send an IPI message to a destination CPU and wait for a response. The periodic task must be designed for run to completion to avoid blocking, which would stop the PLM from doing other tasks.

The UserTask function in the source file user_ipi_demo.c in the PLM prototype illustrates this concept using a Waiting flag to maintain state across executions while not blocking as it waits for a response from the other CPU.

Slowing down other PLM tasks can make other CPU nodes non-responsive as they wait for PLM requests to complete. The UserTask function is a periodic task, which needs to run at a faster rate than the desired IPI message rate to compensate for the run-to-completion design.

Adding A User Module

The following section describes the most significant changes when adding a new module, with the fine details being left to the source files.

Setting IPI Masks In CIPS

A key area to understand in the IPI hardware and the PLM software is the IPI mask. An IPI mask in the PLM user module is a constant that identifies a specific IPI channel, corresponding to a CPU in the IPI hardware and to a bit of the IPI registers.
Each IPI is configured and assigned to an IPI “Master”, such as the APU/RPU cores, in the CIPS wizard in the Vivado Design Suite. By default the R5s are not set up to use any IPIs in Vivado; the following illustration shows “R5 0” assigned to “IPI 1”.

The following code from the prototype source file user_ipi_demo.c illustrates using the A72 with “IPI 3” and the R5 with “IPI 1”. The masks correlate to the bits of the IPI registers as shown in the code example below.

```c
#define A72_IPI_MASK 0x20 // IPI 3
#define R5_IPI_MASK  0x8  // IPI 1
```

XilPLMI Library

Adding a user module allows a clean, minor change to be incorporated, minimizing the effort to merge the change into newer PLM releases. The user module is in a separate source file. The XilPLMI library xplmi_module.h source file is altered to add the new module identifier (XPLMI_MODULE_DEMO_ID) and to increase the number of modules (XPLMI_MAX_MODULES) supported. At this time, each release of the PLM can utilize a different number of modules. There are no module IDs reserved for users, but that may change in future PLM releases.

```c
#define XPLMI_MAX_MODULES    (14U)
#define XPLMI_MODULE_USER_ID (13U)
```

PLM Application

New source files, user_ipi_demo.c and user_ipi_demo.h, are added to the PLM application. The new C source files contain most of the changes with only minor changes to existing source files of the PLM.

Adding New Module Command Processing

The following code illustrates how to add the new user module infrastructure to support reception of IPI messages which are formatted as commands to the PLM. Commands for the R5 and the A72 are added to the API identifiers (ID) of the module followed by the XPlmi_ModuleCmd data type array of user commands. Finally, the new XPlmi_Module data type for the UserModule is created. The R5 and A72 commands allow each corresponding RPU/APU application to send unique commands to the PLM using IPI communication.

The following code illustrates adding a new function which will be called by the PLM framework to handle commands received for the new user module using IPI messages. Note that the XPlmi_Cmd data type provides access to the command ID and the payload of the command.

New Module Initialization

The following code illustrates adding a new initialization function for the user module named User_Init(). This initialization function is called from a higher level PLM function and is responsible for connecting the user module into the PLM application in a manner such that commands for the module are handled by the PLM architecture and passed on to the user module processing. Note that the handler for each command is initialized to the ProcessCmd() function which is called by the PLM architecture for any command destined for this module from another CPU such as the R5 or A72.

Integrating The New Module

The new module initialization is called from the PLM application. The following code illustrates a minor change to the XPlm_ModuleInit() in the xplm_module.c source file.

Adding A Periodic Task Details

A new function is added, UserTask(), which is called from the PLM scheduler at a periodic rate.

A periodic task is added to the user module which runs the function UserTask() every 10 milliseconds. Note that this time along with MSG_PERIOD_MS might need to be larger when there is other critical processing to be performed by the PLM. The XPLMI_MODULE_DEMO_ID is defined in the XilPLMI library change. The function to add the periodic task is called in the user module initialization function User_Init(). Note that this API has changed across PLM releases such that there might need to be adjustments to the function call.

XPLMI IPI Messaging

The PLM application uses the XilPLMI library for IPI communication. The following code illustrates an example of sending an IPI request message from the PLM as the IPI source to the R5 as the destination CPU. The message is written into the IPI hardware buffer by one API call and then the message is sent (triggered) by a second API call.

The following code illustrates the PLM waiting for a response from the A72 after sending a request to it. The first API call uses no blocking, as it is called from a periodic task in the PLM which should not wait an indeterminate amount of time for the destination CPU response. The XPlmi_IpiRead() function uses XIPIPSU_BUF_TYPE_RESP to indicate it is reading the IPI response buffer rather than the IPI request buffer. Note that an interrupt is generated when the source CPU triggers the IPI message to the destination CPU, but only that one interrupt is generated; the source CPU polls to determine when the destination CPU has completed processing the request and inserted a response into the IPI response buffer.

R5 Application Details

The RPU application uses the xilmailbox library to simplify IPI communication. There are alternative methods available such as using libmetal or using the IPI driver directly. The application is based on the mailbox example xilmailbox_example.c source file in the xilmailbox library provided by Vitis and the embeddedsw repository. The example application from the mailbox library is designed for the standalone BSP (bare metal) but it also runs easily in a thread of a FreeRTOS environment.

The prototype was tested by inserting the R5 application into the boot.bin Boot PDI such that the PLM loads the R5 image and starts it. The R5 application output to the UART starts early and continues while Linux is booting on the A72.

R5 IPI Message Sending

The following code illustrates sending an IPI message to the PLM from the R5 using the mailbox library.

R5 IPI Message Receiving

The following code illustrates receiving an IPI message request from the PLM using the mailbox library. The code assumes that mailbox initialization with receive and error handlers has been done prior to this code such that the handlers alter global variables ErrorStatus and RecvDone when an IPI message is received.

A72 Linux Application

The A72 Linux application uses the libmetal library provided by Vitis to ease the use of the IPI messaging for Linux applications. The application is based on the libmetal example provided in the libmetal repository at . Libmetal under Linux uses the UIO kernel driver to allow access to the IPI hardware from the user space.

Linux Device Tree Updates

The UIO kernel driver requires device tree nodes describing the IPI hardware. The following device tree snippet maps to the IPI registers at IPI 3 and the IPI buffers.
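A sketch of such nodes follows. The base addresses come from the application's own libmetal log (ff360000.ipi and ff3f0000.ipi_buf); the compatible strings, node sizes, and the interrupt number are placeholders to verify against the TRM and the libmetal Linux demo for your kernel configuration.

```dts
/* Sketch only: addresses taken from the libmetal log in this page; the
 * compatible strings, sizes, and interrupt number are placeholders. */
amba {
    ipi@ff360000 {
        compatible = "ipi_dev";            /* placeholder UIO match string */
        reg = <0x0 0xff360000 0x0 0x1000>; /* IPI 3 agent registers */
        interrupts = <0 XX 4>;             /* XX: IPI 3 IRQ, see the TRM */
    };
    ipi_buf@ff3f0000 {
        compatible = "ipi_buf_dev";        /* placeholder UIO match string */
        reg = <0x0 0xff3f0000 0x0 0x1000>; /* shared IPI message buffers */
        /* no interrupt: the libmetal "No IRQ" message is expected here */
    };
};
```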

Linux Application Details

The application supports two modes of operation with respect to IPI communication. The first mode uses the PLM as the source of IPI communication and the A72 as the destination as this has fewer security implications because the PLM/PMC is easily secured. The second mode uses the A72 as the source of IPI communications and the PLM as the destination. In the second mode of operation there are no IPI interrupts being received on the A72. Both modes of operation, considered unidirectional for this prototype, utilize a single IPI message which includes a request message and a response message.

Linux IPI Message Sending

The following code illustrates sending a message with libmetal from the A72 as the source of the message to the PLM as the destination. Polling for completion of the command response can be done without any timeout as this application is running on Linux.

Linux IPI Message Receiving

The following simplified code illustrates receiving an IPI message in a libmetal interrupt handler.

Debug Tips

Linux Application

IPI Interrupts

The Linux application which is receiving IPI interrupts as the destination of IPI communication from the PLM should show the IPI interrupts when viewing /proc/interrupts from the Linux console. This can help to verify that the system is set up correctly.

Metal Warnings

The libmetal library gives some warnings that are expected and normal and can be ignored, as there is no interrupt associated with the IPI buffers.

Metal Errors

When the device tree changes are not included in the Linux kernel device tree, the libmetal library will give errors indicating the memory for the IPI hardware could not be mapped.

PLM Tips

The XilPLMI library in Vitis is used by the PLM for IPI communication. The library must be rebuilt to add a new module for this prototype. A source repository in Vitis is a method to override the Vitis provided default library. The PLM BSP in Vitis must be regenerated and rebuilt after adding the source repository in Vitis.

© Copyright 2019 - 2022 Xilinx Inc.