OpenAMP 2020.2
This page is intended to complement UG1186 "LibMetal and OpenAMP User Guide" for Zynq-7000 and Zynq UltraScale+ MPSoC.
Getting Started with the Pre-Built Images
Here are the basic steps to boot Linux and run an OpenAMP application using pre-built images.
For example, for the ZCU102:
The echo-test application sends packets from Linux, running on the quad-core Cortex-A53, to a single Cortex-R5 core (running FreeRTOS) within the Cortex-R5 cluster, which sends them back.
Extract the files needed to boot Linux from the u-boot prompt from a pre-built PetaLinux BSP. Ensure that the openamp.dtb file is used, as system.dtb does not have the OpenAMP R5 remoteproc device tree nodes.
```
host shell$ tar xvf xilinx-zcu102-v2020.2-final.bsp --strip-components=4 --wildcards */Image */openamp.dtb */rootfs.cpio.gz.u-boot
host shell$ petalinux-boot --jtag --prebuilt 2 --hw_server-url <hostname:3121> # boot to u-boot prompt on the board
```
Note: Alternatively, if you already created a PetaLinux project with a provided BSP for your board, pre-built images can also be found under the <your project>/pre-built/linux/images/ directory.
Make sure you have configured TFTP server in host.
For more information on booting Linux with Petalinux see https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_2/ug1144-petalinux-tools-reference-guide.pdf
Set the server IP address to the host IP address using the following command at the U-Boot prompt:
setenv serverip <HOST IP ADDRESS>
Read the images and then boot them using the following commands:
```
u-boot> tftpb <dtb load address> openamp.dtb
u-boot> tftpb <kernel load address> Image
u-boot> tftpb <rootfs load address> rootfs.cpio.gz.u-boot
u-boot> booti <kernel load address> <rootfs load address> <device tree load address>
```
At the Linux login prompt, enter 'root' for the user name and 'root' for the password, then run the echo-test demo:
```
plnx_aarch64 login: root
Password:
root@plnx_aarch64:~# echo image_echo_test > /sys/class/remoteproc/remoteproc0/firmware
root@plnx_aarch64:~# echo start > /sys/class/remoteproc/remoteproc0/state
[  177.375451] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
[  177.384705] remoteproc remoteproc0: Booting fw image image_echo_test, size 644144
[  177.396832] remoteproc remoteproc0: registered virtio0 (type 7)
[  177.399108] virtio_rpmsg_bus virtio0: rpmsg host is online
[  177.412370] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
[  17Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
7.422089] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
[  177.442121] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x0
root@plnx_aarch64:~# echo_test
Echo test start
Open rpmsg dev!
```
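The sysfs sequence above can be wrapped in a small helper. The sketch below uses the standard Linux remoteproc sysfs interface; the function name and the mock directory are illustrative only (on a real target the default path would be used):

```shell
# Hypothetical helper around the remoteproc sysfs interface shown above.
rproc_boot() {
    fw="$1"
    rproc="${2:-/sys/class/remoteproc/remoteproc0}"
    echo "$fw" > "$rproc/firmware"   # firmware file name, looked up under /lib/firmware
    echo start > "$rproc/state"      # triggers load and boot of the remote core
}

# Demo against a mock sysfs directory (a real target would omit the second argument):
mkdir -p /tmp/mock_rproc
rproc_boot image_echo_test /tmp/mock_rproc
cat /tmp/mock_rproc/firmware   # image_echo_test
```

Stopping the remote core is the symmetric write, `echo stop > "$rproc/state"`.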
Known Issues
Libmetal Warnings
For applications using LibMetal on Linux, you may see output of the form "reading attribute ... yields result ...". This is expected if the Linux device tree node representing the address space of the device does not have properties for its size, address, and virtual address. It is not an error, just additional output to inform the user; the information can still be recovered from the device tree node's reg property. A more in-depth explanation is as follows:
If a device that is to be used with LibMetal on Linux does NOT already have "offset", "addr", and "size" properties in its device tree node, then the LibMetal memory (I/O region) layer will do the following:
- The attributes that are missing will be reported as such in the output.
- The offset, addr, and size values will be derived from the device tree node in the format documented in UG1186 and this wiki: the physical address is taken from the node's reg property, and the virtual address results when the region is mapped in later. For example, consider the following device tree node from UG1186:
```
shm0: shm@3ed80000 {
	compatible = "shm_uio";
	reg = <0x0 0x3ed80000 0x0 0x1000000>;
};
```
This node will raise the warning that the offset, addr, and size attributes are not present. That said, the device can still be opened and initialized properly, as the virtual address, size, and physical address will still be derived within the LibMetal stack. The fields have the following meanings:
Field | Meaning |
---|---|
offset | Offset into the memory region. |
addr | Physical base address of the memory region. |
size | Size of the memory region. |
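The attribute lookup that produces the warning can be reproduced by hand. The sketch below (function name and mock tree are hypothetical) walks a UIO device's maps directory and reports which of the three attributes are present:

```shell
# Sketch: inspect a UIO device's map attributes the way LibMetal does.
# When offset/addr/size are absent, the read fails (as in the -22 warning);
# the values can still be recovered from the node's reg property.
read_map_attrs() {
    maps="$1"
    for m in "$maps"/map*; do
        for attr in offset addr size; do
            if [ -r "$m/$attr" ]; then
                echo "$(basename "$m") $attr=$(cat "$m/$attr")"
            else
                echo "$(basename "$m") $attr missing (fall back to reg property)"
            fi
        done
    done
}

# Demo against a mock sysfs tree; on a target use e.g. /sys/class/uio/uio0/maps
mkdir -p /tmp/mock_uio/map0
echo 0x3ed80000 > /tmp/mock_uio/map0/addr
echo 0x1000000  > /tmp/mock_uio/map0/size
read_map_attrs /tmp/mock_uio
# map0 offset missing (fall back to reg property)
# map0 addr=0x3ed80000
# map0 size=0x1000000
```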
Below is sample output containing the "reading attribute yields result" warning from running the demo:
```
$ libmetal_amp_demo
registering: 0, name=ff340000.ipi
registering: 1, name=3ed80000.shm
registering: 2, name=ff0E0000.ttc
SERVER> ====== LibMetal demo: shared memory ======
SERVER> Wait for shared memory demo to start.
SERVER> Wait for shared memory demo to start.libmetal_amp_demo
metal: info: Registered shmem provider linux_shm
metal: info: Registered shmem provider linux_shm
SERVER> Demo has started.
SERVER> Shared memory test finished
SERVER> ====== Libmetal demo: atomic operation over shared memory ======
SERVER> Starting atomic add on shared memory demo.
.
.
metal: info: Registered shmem provider ion.reserved.
metal: info: Registered shmem provider ion.ion_system_contig_heap.
metal: info: Registered shmem provider ion.ion_system_heap.
CLIENT> ****** Libmetal demo: shared memory ******
CLIENT> ****** Libmetal demo: shared memory ******
metal: info: metal_linux_dev_open: checking driver vfio-platform,3ed80000.shm,(null)
metal: warning: reading attribute /sys/devices/platform/amba/3ed80000.shm/uio/uio0/maps/map1/offset yields result -22
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
```
Configuration Information of Note
The following applies to OpenAMP IPC use cases only. Firmware loading and LibMetal use cases do not involve this configuration information.
For ZynqMP and Versal platforms that use the OpenAMP R5 remoteproc kernel driver running on Linux in the Cortex-A cluster, ensure that the kernel config option SPARSEMEM_VMEMMAP is enabled. The reasoning is as follows:
If CONFIG_SPARSEMEM_VMEMMAP is not set, the kernel will try to find a page from which to get the physical page frame number.
In other words, the OpenAMP-related device tree carves out DDR memory for IPC such that the kernel does not map it in (hence these ranges should be in the reserved-memory node). Once these ranges are carved out on the A-cluster, the pages needed are not present without CONFIG_SPARSEMEM_VMEMMAP; i.e., the PFNs are not present.
So, to make sure that the pages for these reserved-memory nodes can be found, enable CONFIG_SPARSEMEM_VMEMMAP so that vmemmap is used and the PFNs that contain these carveouts can subsequently be found.
For more documentation on the memory model and the kernel config option SPARSEMEM_VMEMMAP, see https://www.kernel.org/doc/Documentation/vm/memory-model.rst
The above is not necessarily required for LibMetal applications, as LibMetal does not require the use of remoteproc for ELF loading. That said, the above configuration is required for LibMetal applications if and only if the R5 remoteproc driver is used for ELF loading.
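A quick way to confirm the option before booting is to grep the kernel config. The helper name below is illustrative, and the stand-in config file only demonstrates the check; in practice you would pass the .config from your PetaLinux kernel build, or /proc/config.gz decompressed on the target:

```shell
# Sketch: verify CONFIG_SPARSEMEM_VMEMMAP in a kernel config file.
vmemmap_enabled() {
    grep -q '^CONFIG_SPARSEMEM_VMEMMAP=y' "$1"
}

# Demo with a stand-in config fragment:
printf 'CONFIG_SPARSEMEM=y\nCONFIG_SPARSEMEM_VMEMMAP=y\n' > /tmp/kconfig
vmemmap_enabled /tmp/kconfig && echo "SPARSEMEM_VMEMMAP enabled"
```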
Docs and source code
URLs to source code
Xilinx OpenAMP and LibMetal related code
The following locations provide access to the code:
- OpenAMP Library and Demonstration code
- LibMetal Library and Demonstration code
- Yocto recipe to build OpenAMP and LibMetal
- Xilinx version of Linux kernel
- RPU baremetal and FreeRTOS source code used in XSDK and XSCT
Additional examples
ZynqMP Linux Master running on APU with RPMsg in kernel space and one RPU slave.
When running with the RPU in split mode and only one RPU as an OpenAMP slave, the second RPU can still run another, non-OpenAMP application.
Firmware
Guidance for various configurations:
Target Remote Processor | Configuration Information |
---|---|
RPU 0 | Use the default application in the PetaLinux BSP: build with petalinux-build -c openamp-fw-echo-testd |
RPU 1 | In the PetaLinux project, modify <plnx proj root>/components/yocto/layers/meta-xilinx-tools/recipes-openamp/examples/openamp-fw.inc: change XSCTH_PROC_zynqmp ?= "psu_cortexr5_0" to "psu_cortexr5_1" |
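The edit in the RPU 1 row can be scripted with sed. The sketch below applies the substitution against a mock copy of openamp-fw.inc for illustration; in a real project, point INC at the path given in the table, relative to <plnx proj root>:

```shell
# Demonstrated against a mock copy of the file:
mkdir -p /tmp/plnx
INC=/tmp/plnx/openamp-fw.inc
echo 'XSCTH_PROC_zynqmp ?= "psu_cortexr5_0"' > "$INC"

# Switch the firmware recipe from R5-0 to R5-1:
sed -i 's/psu_cortexr5_0/psu_cortexr5_1/' "$INC"
cat "$INC"   # XSCTH_PROC_zynqmp ?= "psu_cortexr5_1"
```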
Device Tree
Append the following to <plnx proj root>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
```
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
			no-map;
			reg = <0x0 0x3ed40000 0x0 0x4000>;
		};
		rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
			no-map;
			reg = <0x0 0x3ed44000 0x0 0x4000>;
		};
		rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
			no-map;
			reg = <0x0 0x3ed48000 0x0 0x100000>;
		};
		rproc_0_reserved: rproc@3ed00000 {
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x40000>;
		};
	};
	zynqmp-rpu {
		compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		core_conf = "split";
		reg = <0x0 0xFF9A0000 0x0 0x10000>;
		r5_0: r5@0 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>,
					<&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
			pnode-id = <0x7>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
			tcm_0_a: tcm_0@0 {
				reg = <0x0 0xFFE00000 0x0 0x10000>;
				pnode-id = <0xf>;
			};
			tcm_0_b: tcm_0@1 {
				reg = <0x0 0xFFE20000 0x0 0x10000>;
				pnode-id = <0x10>;
			};
		};
		/* if instead for RPU1 use the following:
		r5_1: r5@1 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>,
					<&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
			pnode-id = <0x8>;
			mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
			mbox-names = "tx", "rx";
			tcm_a: tcm@0 {
				reg = <0x0 0xFFE90000 0x0 0x10000>;
				pnode-id = <0x11>;
			};
			tcm_b: tcm@1 {
				reg = <0x0 0xFFEb0000 0x0 0x10000>;
				pnode-id = <0x12>;
			};
		};
		*/
	};
	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 29 4>;
		xlnx,ipi-id = <7>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;
		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff990600 {
			reg = <0xff990600 0x20>,
			      <0xff990620 0x20>,
			      <0xff9900c0 0x20>,
			      <0xff9900e0 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <1>;
		};
	};
	zynqmp_ipi2 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 30 4>;
		xlnx,ipi-id = <8>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;
		/* APU<->RPU1 IPI mailbox controller */
		ipi_mailbox_rpu1: mailbox@ff990640 {
			reg = <0xff3f0b00 0x20>,
			      <0xff3f0b20 0x20>,
			      <0xff3f0940 0x20>,
			      <0xff3f0960 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <2>;
		};
	};
};
```
The memory regions described in the reserved-memory node above, and then referenced in the memory-region property of the remoteproc node, are needed either as sections the ELF binary is loaded into or as regions the binary uses at runtime for OpenAMP-related IPC communication.
In this case, the rpu0vdev0buffer@3ed48000 range (0x3ed48000 - 0x3ee48000) is not described in the linker script but is used at runtime for shared buffers. As such, this range should be described in the reserved-memory node so that it is not unduly mapped in by the kernel. The same applies to rpu0vdev0vring0@3ed40000 and rpu0vdev0vring1@3ed44000.
The default linker script for OpenAMP applications running on a single Cortex-R5 core uses the DDR address range 0x3ed00000 - 0x3ed40000 as the section into which the Cortex-R5 ELF binary is loaded. This range also needs to be described in the reserved-memory node so that the ELF can run there without overwriting memory that would ordinarily be mapped into the kernel.
In addition to the nodes described in reserved-memory, each Cortex-R5 core has TCM banks with their own memory ranges. The range of each TCM is described under the R5 remoteproc node it is coupled with, as the TCMs may be used for ELF loading.
If you wish to run concurrent slaves, ensure that the memory regions and the IPIs used do not overlap.
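A carveout overlap can be checked with plain shell arithmetic. The helper below is an illustrative sketch: two half-open ranges [base, base+size) overlap exactly when each base is below the other range's end.

```shell
# Sketch: return success (0) if two carveouts [base1, base1+size1) and
# [base2, base2+size2) overlap. Hex or decimal arguments both work.
overlaps() {
    b1=$(($1)); e1=$(($1 + $2))
    b2=$(($3)); e2=$(($3 + $4))
    [ "$b1" -lt "$e2" ] && [ "$b2" -lt "$e1" ]
}

# vring0 (0x3ed40000/0x4000) and vring1 (0x3ed44000/0x4000) are adjacent:
overlaps 0x3ed40000 0x4000 0x3ed44000 0x4000 || echo "no overlap"
```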
ZynqMP RPU to manage Linux
Please note that this example is release specific.
RPU slave applications are presently only supported to run from TCM by default. In practice this means that, for one RPU to load and start the other RPU, the entire slave application must be loaded into and run from TCM. The APU remoteproc slave does support running an application in DDR.
Create R5-0 standalone BSP
To build either LibMetal or OpenAMP for a non-Linux target, the libraries and applications require some Xilinx BSP-specific information. This is provided by building the Xilinx Cortex-R5-0 BSP.
NOTE: make sure that the Cortex-R5 BSP has the XilPM and GIC software components built. XilPM is used to interface with the ZU+ PMU firmware to bring up the RPU. The GIC is needed because IPIs are the mechanism for communicating with the XilPM framework.
Build Libmetal
To build the OpenAMP application, a Xilinx Cortex-R5 BSP and the LibMetal library are needed. This section details how to build the LibMetal library.
Libmetal is built using CMake. An example toolchain file is below:
```
set (CMAKE_SYSTEM_PROCESSOR "arm" CACHE STRING "")
set (MACHINE "zynqmp_r5" CACHE STRING "")
set (CROSS_PREFIX "armr5-none-eabi-" CACHE STRING "")
set (CMAKE_C_FLAGS "-mfloat-abi=hard -mcpu=cortex-r5 -mfpu=vfpv3-d16 -Wall -Werror -Wextra \
    -flto -Os -I<path to bsp>/bsp/psu_cortexr5_0/include" CACHE STRING "")
link_directories( <path to bsp>/bsp/psu_cortexr5_0/lib )
set (PLATFORM_LIB_DEPS " -lxil -lxilstandalone -lc -lm -lxilpm " CACHE STRING "")
SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -flto")
SET(CMAKE_AR "gcc-ar" CACHE STRING "")
SET(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> qcs <TARGET> <LINK_FLAGS> <OBJECTS>")
SET(CMAKE_C_ARCHIVE_FINISH true)
include (cross-generic-gcc)
```
Script to build Libmetal
```
git clone https://github.com/openamp/libmetal.git
cd libmetal
mkdir build_r5
cd build_r5
cmake .. -DCMAKE_TOOLCHAIN_FILE=<toolchain_file> \
    -DCMAKE_LIBRARY_PATH=<path to bsp>/bsp/psu_cortexr5_0/lib
make DESTDIR=. install VERBOSE=1
```
Build OpenAMP library and application
OpenAMP is built using CMake. An example toolchain file is below:
```
set (CMAKE_SYSTEM_PROCESSOR "arm" CACHE STRING "")
set (MACHINE zynqmp_r5 )
set (CROSS_PREFIX "armr5-none-eabi-" CACHE STRING "")
set (CMAKE_C_FLAGS "-mfloat-abi=hard -mcpu=cortex-r5 -Os -flto -mfpu=vfpv3-d16 -DUNDEFINE_FILE_OPS \
    -I<path to libmetal repo>/libmetal/build_r5/usr/local/include \
    -I<bsp path>/bsp/psu_cortexr5_0/include" CACHE STRING "")
set (CMAKE_ASM_FLAGS " -mcpu=cortex-r5 " CACHE STRING "")
set (PLATFORM_LIB_DEPS " -lxil -lxilstandalone -lxilpm -lxilmem -lc -lm" CACHE STRING "")
SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -flto ")
SET(CMAKE_AR "gcc-ar" CACHE STRING "")
SET(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> qcs <TARGET> <LINK_FLAGS> <OBJECTS>")
SET(CMAKE_C_ARCHIVE_FINISH true)
link_directories(
    <path to libmetal repo>/libmetal/build_r5/usr/local/include/
    <path to libmetal repo>/libmetal/build_r5/usr/local/lib/
)
set (WITH_LOAD_FW ON)
set (CMAKE_FIND_ROOT_PATH
    <path to LibMetal repo>/libmetal/build/usr/local/lib
    <bsp path>/bsp/psu_cortexr5_0/lib
)
include (cross_generic_gcc)
string (TOLOWER "FreeRTOS" PROJECT_SYSTEM)
string (TOUPPER "FreeRTOS" PROJECT_SYSTEM_UPPER)
# vim: expandtab:ts=2:sw=2:smartindent
```
Script using the toolchain file for OpenAMP
```
git clone https://github.com/openamp/open-amp.git
cd open-amp
mkdir build
cd build
cmake .. \
    -DCMAKE_TOOLCHAIN_FILE=toolchain \
    -DCMAKE_INCLUDE_PATH="<path to LibMetal repo>/libmetal/build_r5/lib/include/;<path to bsp>/bsp/psu_cortexr5_0/include/" \
    -DCMAKE_LIBRARY_PATH="<path to LibMetal repo>/libmetal/build_r5/usr/local/lib/;<path to bsp>/bsp/psu_cortexr5_0/lib/" \
    -DWITH_APPS=on -DWITH_LOAD_FW=ON
make DESTDIR=$(pwd) install VERBOSE=1
```
Note: how to set the remoteproc ELF-load slave
- If RPU1 is the slave, after -DWITH_LOAD_FW=ON add -DLOAD_FW_TARGET=NODE_RPU_1
- If the APU is the slave, after -DWITH_LOAD_FW=ON add -DLOAD_FW_TARGET=NODE_APU_1
Boot ZynqMP OpenAMP Cortex-R5 application on hardware
To boot up to U-Boot on a target ZCU102, see the directions in "Getting Started with the Pre-Built Images" above.
In XSDB do the following:
- Configure the Cortex-R5 cluster to be in split mode if using a single Cortex-R5 as the remoteproc slave
- Reset the TCM
- Load the remoteproc slave application at its base address (default 0x3ed00000)
- Load the binary for the Cortex-R5 remoteproc master (in this case Cortex-R5-0)
- Start Cortex-R5-0
WARNING: Do not reset the remoteproc slave processor, as this introduces issues when using the PM library for lifecycle management.
Below is what the XSDB commands look like on one example board:
```
# set r5 to split
ta 6
mwr 0xFF9A0000 0x08
# reset tcm
ta 7
rst -processor
mwr 0xFFE00000 0 10000
after 1000
mwr 0xFFE20000 0 10000
after 1000
# load apu
dow -data <a53 app> 0x3ed00000
# load rpu
ta 6
dow load_fw.out
# start rpu
con
```
Output on console
```
Loading Exectuable Demo
metal: debug: registered generic bus
RPU0: Running in split mode
apu_rproc_init: node id: 403750915
Start to load executable with remoteproc_load()
metal: debug: remoteproc_load: check remoteproc status
metal: debug: remoteproc_load: open executable image
metal: debug: remoteproc_load: check loader
metal: debug: remoteproc_load: loading headers
metal: debug: Loading ELF headering
metal: debug: Loading ELF program header.
metal: debug: Loading ELF section header.
metal: debug: remoteproc_load, load header 0x0, 0x100, next 0x8ca30, 0x540
mem_image_load: offset=0x8CA30, size=0x540
metal: debug: Loading ELF section header.
metal: debug: Loading ELF section header complete.
metal: debug: Loading ELF shstrtab.
metal: debug: remoteproc_load, load header 0x8ca30, 0x540, next 0x8c958, 0xd6
mem_image_load: offset=0x8C958, size=0xD6
metal: debug: Loading ELF shstrtab.
metal: debug: remoteproc_load, load header 0x8c958, 0xd6, next 0x8c958, 0x0
metal: debug: remoteproc_load: load executable data
metal: debug: segment: 1, total segs 2
metal: debug: load data: da 0xfffc0000, offset 0x10000, len = 0xd7b0, memsize = 0x15000, state 0x10801
apu_rproc_mmap: pa=0xFFFFFFFF, da=0xFFFC0000, size=0x15000, atrribute=0x0
RPU0: XPm_RequestNode(18314007, 1, 0, 1, 0)
RPU0: XPm_RequestNode(18314008, 1, 0, 1, 0)
mem_image_load: offset=0x10000, size=0xD7B0
metal: debug: cannot find more segment
metal: debug: load data: da 0x0, offset 0x0, len = 0x0, memsize = 0x0, state 0x10802
apu_rproc_mmap: pa=0xFFFFFFFF, da=0x0, size=0x0, atrribute=0x0
RPU0: XPm_RequestNode(18320010, 1, 0, 1, 0)
metal: debug: cannot find more segment
metal: debug: load data: da 0xffffffff, offset 0x0, len = 0x0, memsize = 0x0, state 0x10802
metal: debug: remoteproc_load: successfully load firmware
RPU0: XPm_RequestWakeUp(1810C003, FFFC0001, 0, 1, 0)
successfully started the processor
NOTICE: ATF running on Xilinx Silicon
```
© Copyright 2019 - 2022 Xilinx Inc.