OpenAMP 2021.1

This page is intended to complement UG1186 "LibMetal and OpenAMP User Guide" for Zynq-7000, Zynq UltraScale+ MPSoC and Versal ACAP.


Getting Started with the Pre-Built Images

Here are the basic steps to boot Linux and run an OpenAMP application using pre-built images.

For example, on the ZCU102:
The echo-test application sends packets from Linux running on the quad-core Cortex-A53 cluster to a single Cortex-R5 core (running FreeRTOS) within the Cortex-R5 cluster, which echoes them back.

Extract the files needed to boot Linux from the U-Boot prompt from a pre-built PetaLinux BSP. Ensure that the openamp.dtb file is used, as the default system.dtb does not contain the OpenAMP R5 remoteproc device tree nodes.

host shell$ tar xvf xilinx-zcu102-v2021.1-final.bsp --strip-components=4 --wildcards  */Image */openamp.dtb */rootfs.cpio.gz.u-boot
host shell$ petalinux-boot --jtag --prebuilt 2 --hw_server-url <hostname:3121> # boot to u-boot prompt on the board

Note: Alternatively, if you already created a PetaLinux project with a provided BSP for your board, pre-built images can also be found under the <your project>/pre-built/linux/images/ directory.

Make sure you have configured a TFTP server on the host.

For more information on booting Linux with PetaLinux, see https://www.xilinx.com/support/documentation/sw_manuals/xilinx2021_1/ug1144-petalinux-tools-reference-guide.pdf
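As a minimal sketch (assuming a TFTP daemon is already running on the host and serves /tftpboot; the actual directory and service name depend on your distribution), the extracted images can be made available to the board like this:

host shell$ sudo cp Image openamp.dtb rootfs.cpio.gz.u-boot /tftpboot/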

Set the server IP address to the host IP address using the following command at the U-Boot prompt:

setenv serverip <HOST IP ADDRESS>

Load the images over TFTP and then boot them using the following commands:

u-boot> tftpb <dtb load address> openamp.dtb 
u-boot> tftpb <kernel load address> Image
u-boot> tftpb <rootfs load address> rootfs.cpio.gz.u-boot
u-boot> booti <kernel load address> <rootfs load address> <device tree load address>
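For example, on a ZCU102 one possible set of load addresses (illustrative only; any DDR addresses that do not overlap each other or the loaded images will work) is:

u-boot> tftpb 0x00100000 openamp.dtb
u-boot> tftpb 0x00200000 Image
u-boot> tftpb 0x04000000 rootfs.cpio.gz.u-boot
u-boot> booti 0x00200000 0x04000000 0x00100000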

At the Linux login prompt, enter 'root' for the user and 'root' for the password, then run the echo-test demo:

plnx_aarch64 login: root
Password:
root@plnx_aarch64:~# echo image_echo_test > /sys/class/remoteproc/remoteproc0/firmware
root@plnx_aarch64:~# echo start > /sys/class/remoteproc/remoteproc0/state  
[  177.375451] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
[  177.384705] remoteproc remoteproc0: Booting fw image image_echo_test, size 644144
[  177.396832] remoteproc remoteproc0: registered virtio0 (type 7)
[  177.399108] virtio_rpmsg_bus virtio0: rpmsg host is online
[  177.412370] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
[  17Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
7.422089] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
[  177.442121] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x0
root@plnx_aarch64:~# echo_test
 Echo test start
 Open rpmsg dev!
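When you are done, the remote processor can be stopped (and its state inspected) through the same remoteproc sysfs interface:

root@plnx_aarch64:~# cat /sys/class/remoteproc/remoteproc0/state
running
root@plnx_aarch64:~# echo stop > /sys/class/remoteproc/remoteproc0/state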

Release Notes


Known Issues

Libmetal Warning Output

For applications using LibMetal on Linux, you may see output of the form "reading attribute ... yields result ...". This is expected if the Linux device tree node representing the device's address space does not have explicit properties for its size, address and virtual address. It is not an error, only informational output: the same information is still derived from the device tree node's reg property. A more in-depth explanation is as follows:

If a device to be used with LibMetal on Linux does NOT already have "offset", "addr" and "size" properties in its device tree node, then the LibMetal memory (I/O region) layer will do the following:

  1. The missing attributes are reported as such in the output.
  2. The offset, addr and size are derived from the device tree node, in the format documented in UG1186 and this wiki: the physical address comes from the node's reg property, and the virtual address is obtained when the region is mapped in later. For example, consider the device tree node below from UG1186:
shm0: shm@3ed80000 {   
  compatible = "shm_uio";
  reg = <0x0 0x3ed80000 0x0 0x1000000>; 
};

This will raise the warning that the offset, addr and size attributes are not present. The device can nevertheless be opened and initialized properly, as the virtual address, size and physical address are still derived inside the LibMetal stack. The fields have the following meanings (see also the sysfs example after the table):

Field    Meaning
Offset   Offset into the memory region.
Addr     Physical base address of the memory region.
Size     Size of the memory region.
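These values correspond to the attributes that the UIO driver exposes in sysfs. A quick way to see what is (or is not) populated for a given device is to list the map attributes directly (a sketch, assuming the shared-memory device was bound to uio0; adjust uioN/mapN for your system):

root@plnx_aarch64:~# ls /sys/class/uio/uio0/maps/map0/
addr  name  offset  size
root@plnx_aarch64:~# cat /sys/class/uio/uio0/maps/map0/addr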

Below is sample output, including the "reading attribute yields result" warning, from running the demo:

$ libmetal_amp_demo
registering: 0, name=ff340000.ipi
registering: 1, name=3ed80000.shm
registering: 2, name=ff0E0000.ttc
SERVER> ====== LibMetal demo: shared memory ======
SERVER> Wait for shared memory demo to start.
SERVER> Wait for shared memory demo to start.libmetal_amp_demo
metal: info:      Registered shmem provider linux_shm
metal: info:      Registered shmem provider linux_shm
SERVER> Demo has started.
SERVER> Shared memory test finished
SERVER> ====== Libmetal demo: atomic operation over shared memory ======
SERVER> Starting atomic add on shared memory demo.
.
.
metal: info:      Registered shmem provider ion.reserved.
metal: info:      Registered shmem provider ion.ion_system_contig_heap.
metal: info:      Registered shmem provider ion.ion_system_heap.
CLIENT> ****** Libmetal demo: shared memory ******
CLIENT> ****** Libmetal demo: shared memory ******
metal: info:      metal_linux_dev_open: checking driver vfio-platform,3ed80000.shm,(null)
metal: warning:   reading attribute /sys/devices/platform/amba/3ed80000.shm/uio/uio0/maps/map1/offset yields result -22
metal: info:      metal_uio_dev_open: No IRQ for device 3ed80000.shm.

Remoteproc ELF Loading with FreeRTOS RPU Applications

The Xilinx FreeRTOS port on the Cortex-R5 core uses the TTC by default, and the TTC is held in reset by the PM framework. As a workaround, request the TTC node (PM_DEV_TTC_0, 0x18224024U) through the zynqmp-firmware debugfs interface, as shown below.

The list of device nodes and their respective values can be found in the Versal ACAP System Software Developers Guide (UG1304): https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_1/ug1304-versal-acap-ssdg.pdf#page=217

root@xilinx-vmk180-2021_1:~# echo pm_request_node 0x18224024 0x7 0x64 0 > /sys/kernel/debug/zynqmp-firmware/pm
root@xilinx-vmk180-2021_1:~# 
root@xilinx-vmk180-2021_1:~# cd /lib/firmware/                           
root@xilinx-vmk180-2021_1:/lib/firmware# echo freertos-echo-test-r50.elf > /sys/class/remoteproc/remoteproc0/firmware 
root@xilinx-vmk180-2021_1:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc0/state
[  303.211997] remoteproc remoteproc0: powering up ff9a0000.rf5ss:r5f_0
[  303.220647] remoteproc remoteproc0: Booting fw image freertos-echo-test-r50.elf, size 1340964
[  303.450798]  remoteproc0#vdev0buffer: registered virtio0 (type 7)
[  303.451103] remoteproc remoteproc0: remote processor ff9a0000.rf5ss:r5f_0 is now up
root@xilinx-vmk180-2021_1:/lib/firmware# processing():104 Starting application...
0 0 L7 registered generic bus
0 1 L7 init_system():114 buf,len,configTICK_RATE_HZ = 0x3ed201b8,4096,100 
0 2 L6 platform_init():176 platform_create_proc()
0 3 L6 platform_create_proc():111 rsc_table, rsc_size = 0x3ed20000, 0x100
0 4 L7 zynqmp_r5_a53_proc_init():71 metal_device_open(generic, ipi_dev, 0x2740)
0 5 L7 platform_create_proc():119 ipi_{name,bus,chn_mask} = ipi_dev,generic,0x20
0 6 L7 zynqmp_r5_a53_proc_mmap():126 lpa,lda= 0x3ed20000,0xffffffff
0 7 L7 zynqmp_r5_a53_proc_mmap():138 mem= 0x3ed22d78
0 8 L7 zynqmp_r5_a53_proc_mmap():142 tmpio= 0x3ed22db8
0 9 L7 zynqmp_r5_a53_proc_mmap():126 lpa,lda= 0x3ed40000,0xffffffff
0 10 L7 zynqmp_r5_a53_proc_mmap():138 mem= 0x3ed22df8
1 11 L7 zynqmp_r5_a53_proc_mmap():142 tmpio= 0x3ed22e38
1 12 L6 platform_create_proc():148 Initialize remoteproc successfully.
1 13 L6 platform_create_rpmsg_vdev():208 creating remoteproc virtio rproc 0x3ed2012c
2 14 L6 platform_create_rpmsg_vdev():216 initializing rpmsg shared buffer pool
2 15 L6 platform_create_rpmsg_vdev():221 initializing rpmsg vdev
[  303.492633] virtio_rpmsg_bus virtio0: rpmsg host is online
2 16 L6 app():71 Try to create rpmsg endpoint.
3 17 L6 app():81 Successfully created rpmsg endpoint.
[  303.495885] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x0
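After the test, the remote processor can be stopped through the same interface, and (assuming your kernel's zynqmp-firmware debugfs exposes pm_release_node, as recent Xilinx kernels do) the TTC node requested above can be released again if nothing else needs it:

root@xilinx-vmk180-2021_1:~# echo stop > /sys/class/remoteproc/remoteproc0/state
root@xilinx-vmk180-2021_1:~# echo pm_release_node 0x18224024 > /sys/kernel/debug/zynqmp-firmware/pm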

Configuration Information of Note

The note below applies to OpenAMP IPC use cases only. Firmware loading and LibMetal use cases do not involve this configuration information.


For ZynqMP and Versal platforms using the OpenAMP R5 remoteproc kernel driver, with Linux running on the Cortex-A cluster, ensure that the kernel config option CONFIG_SPARSEMEM_VMEMMAP is enabled. The reasoning is as follows:

If CONFIG_SPARSEMEM_VMEMMAP is not set, the kernel will try to find a struct page from which to obtain the physical page frame number.

In other words, the OpenAMP-related device tree carves out DDR memory regions for IPC so that the kernel does not map them in (hence they should be in the reserved-memory node). Once these regions are carved out on the A cluster, the pages needed without CONFIG_SPARSEMEM_VMEMMAP are not present, i.e. the page frame numbers (PFNs) cannot be resolved.

So, to make sure that the pages for these reserved-memory nodes can be found, enable CONFIG_SPARSEMEM_VMEMMAP; vmemmap will then be used and the PFNs covering these carveouts can be resolved.

For more documentation on the memory model and the CONFIG_SPARSEMEM_VMEMMAP kernel config option, see https://www.kernel.org/doc/Documentation/vm/memory-model.rst

The above is not necessarily required for LibMetal applications, since LibMetal does not require remoteproc for ELF loading. It is required for LibMetal applications if and only if the R5 remoteproc driver is used for ELF loading.
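A quick way to confirm the option on a running target (a sketch, assuming the kernel was built with CONFIG_IKCONFIG_PROC so that /proc/config.gz exists; otherwise check the .config used for the kernel build):

root@plnx_aarch64:~# zcat /proc/config.gz | grep SPARSEMEM_VMEMMAP
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y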




Docs and source code

URLs to source code

Xilinx OpenAMP and LibMetal related code

The following locations provide access to the code:

https://github.com/Xilinx/open-amp
https://github.com/Xilinx/libmetal

Additional examples

ZynqMP Linux Host running on APU with RPMsg in kernel space and one RPU remote processor.

When the RPU cluster runs in split mode and only one RPU core is used as an OpenAMP remote processor, the second RPU core can still run another, non-OpenAMP application.

Firmware

Guidance for various configurations:

Target Remote Processor | Configuration Information
RPU 0 | Use the default application in the PetaLinux BSP; build it with petalinux-build -c openamp-fw-echo-testd (a build sketch follows the table).
RPU 1 | In the PetaLinux project, modify <plnx proj root>/components/yocto/layers/meta-xilinx-tools/recipes-openamp/examples/openamp-fw.inc, changing XSCTH_PROC_zynqmp ?= "psu_cortexr5_0" to "psu_cortexr5_1".
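As a build sketch (assuming a default PetaLinux project layout; the firmware recipe installs the resulting ELF into the target root filesystem under /lib/firmware, which is why it can be referenced by name through remoteproc as shown earlier):

host shell$ petalinux-build -c openamp-fw-echo-testd   # build only the firmware recipe
host shell$ petalinux-build                            # rebuild the images so the rootfs picks up the new firmware
root@plnx_aarch64:~# ls /lib/firmware/                 # on the target, the firmware ELF should now be listed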

Device Tree

Append the following to <plnx proj root>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi

/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
            no-map;
            reg = <0x0 0x3ed40000 0x0 0x4000>;
        };
        rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
            no-map;
            reg = <0x0 0x3ed44000 0x0 0x4000>;
        };
        rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
            no-map;
            reg = <0x0 0x3ed48000 0x0 0x100000>;
        };
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x40000>;
        };


    };

	tcm_0a@ffe00000 {
		no-map;
		reg = <0x0 0xffe00000 0x0 0x10000>;
		phandle = <0x40>;
		status = "okay";
		compatible = "mmio-sram";
	};

	tcm_0b@ffe20000 {
		no-map;
		reg = <0x0 0xffe20000 0x0 0x10000>;
		phandle = <0x41>;
		status = "okay";
		compatible = "mmio-sram";
	};

	tcm_1a@ffe90000 {
		no-map;
		reg = <0x0 0xffe90000 0x0 0x10000>;
		phandle = <0x42>;
		status = "okay";
		compatible = "mmio-sram";
	};

	tcm_1b@ffeb0000 {
		no-map;
		reg = <0x0 0xffeb0000 0x0 0x10000>;
		phandle = <0x43>;
		status = "okay";
		compatible = "mmio-sram";
	};

	rf5ss@ff9a0000 {
		compatible = "xlnx,zynqmp-r5-remoteproc";
		xlnx,cluster-mode = <1>;
		ranges;
		reg = <0x0 0xFF9A0000 0x0 0x10000>;
		#address-cells = <0x2>;
		#size-cells = <0x2>;

		r5f_0 {
			compatible = "xilinx,r5f";
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			sram = <0x40 0x41>;
			memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
			power-domain = <0x7>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
		};


        /* For RPU1, use the following instead:
        r5f_1 {
            compatible = "xilinx,r5f";
            #address-cells = <2>;
            #size-cells = <2>;
            ranges;
            sram = <0x42 0x43>;
            memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
            power-domain = <0x8>;
            mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
            mbox-names = "tx", "rx";
        };
        */

	};


	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 29 4>;
		xlnx,ipi-id = <7>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff990600 {
			reg = <0xff990600 0x20>,
			      <0xff990620 0x20>,
			      <0xff9900c0 0x20>,
			      <0xff9900e0 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <1>;
		};

	};

     zynqmp_ipi2 {
        compatible = "xlnx,zynqmp-ipi-mailbox";
        interrupt-parent = <&gic>;
        interrupts = <0 30 4>;
        xlnx,ipi-id = <8>;
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;
 
        /* APU<->RPU1 IPI mailbox controller */
        ipi_mailbox_rpu1: mailbox@ff990640 {
            reg = <0xff3f0b00 0x20>,
                  <0xff3f0b20 0x20>,
                  <0xff3f0940 0x20>,
                  <0xff3f0960 0x20>;
            reg-names = "local_request_region",
                    "local_response_region",
                    "remote_request_region",
                    "remote_response_region";
            #mbox-cells = <1>;
            xlnx,ipi-id = <2>;
        };
    };
};

The memory regions described in the reserved-memory node above, and referenced by the memory-region property of the remoteproc node, are needed either as regions into which the ELF binary is loaded or as regions the binary uses at runtime for OpenAMP-related IPC.

In this case, rpu0vdev0buffer@3ed48000 (range 0x3ed48000 - 0x3ee48000) is not described in the linker script but is used at runtime for shared buffers. This range should therefore be described in the reserved-memory node so that it is not unduly mapped in by the kernel. The same applies to rpu0vdev0vring0@3ed40000 and rpu0vdev0vring1@3ed44000.

The default linker script for OpenAMP applications running on a single Cortex-R5 core uses the DDR range 0x3ed00000 to 0x3ed40000 as the section into which the Cortex-R5 ELF binary is loaded, so this range also needs to be described in the reserved-memory node; the ELF can then run there without overwriting memory that would otherwise be mapped into the kernel.

In addition to the nodes described in reserved-memory, each Cortex-R5 core has TCM banks with their own memory ranges. These ranges are described and referenced (via the sram property) from the R5 remoteproc node they are coupled with, as they may be used for ELF loading.

If you wish to run both remote processors concurrently, ensure that the memory regions and the IPI channels used by each do not overlap.
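One way to sanity-check the resulting device tree (a sketch, assuming dtc is available on the host and the generated DTB is at images/linux/system.dtb in the PetaLinux project) is to decompile it and confirm the remoteproc and reserved-memory nodes are present:

host shell$ dtc -I dtb -O dts images/linux/system.dtb -o system.dts
host shell$ grep -A4 "r5f_0" system.dts
host shell$ grep -A2 "rpu0vdev0vring0" system.dts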

Versal Linux Host running on APU with RPMsg in kernel space and one RPU remote processor.

When the RPU cluster runs in split mode and only one RPU core is used as an OpenAMP remote processor, the second RPU core can still run another, non-OpenAMP application.

Firmware

Guidance for various configurations:

Target Remote Processor | Configuration Information
RPU 0 | Use the default application in the PetaLinux BSP; build it with petalinux-build -c openamp-fw-echo-testd.
RPU 1 | In the PetaLinux project, modify <plnx proj root>/components/yocto/layers/meta-xilinx-tools/recipes-openamp/examples/openamp-fw.inc, changing XSCTH_PROC_zynqmp ?= "psv_cortexr5_0" to "psv_cortexr5_1" (see the example edit after the table).
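For example (illustrative only; the file lives in the generated Yocto layer, so keep a copy of the original before editing):

host shell$ sed -i 's/psv_cortexr5_0/psv_cortexr5_1/' <plnx proj root>/components/yocto/layers/meta-xilinx-tools/recipes-openamp/examples/openamp-fw.inc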

Device Tree

Append the following to <plnx proj root>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi

/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		rproc_0_reserved: rproc@3ed00000 {
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x40000>;
		};
		rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
			no-map;
			reg = <0x0 0x3ed40000 0x0 0x4000>;
		};
		rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
			no-map;
			reg = <0x0 0x3ed44000 0x0 0x4000>;
		};
		rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
			no-map;
			reg = <0x0 0x3ed48000 0x0 0x100000>;
		};
		rproc_1_reserved: rproc@3ef00000 {
			no-map;
			reg = <0x0 0x3ef00000 0x0 0x40000>;
		};
		rpu1vdev0vring0: rpu1vdev0vring0@3ef40000 {
			no-map;
			reg = <0x0 0x3ef40000 0x0 0x4000>;
		};
		rpu1vdev0vring1: rpu1vdev0vring1@3ef44000 {
			no-map;
			reg = <0x0 0x3ef44000 0x0 0x4000>;
		};
		rpu1vdev0buffer: rpu1vdev0buffer@3ef48000 {
			no-map;
			compatible = "shared-dma-pool";
			reg = <0x0 0x3ef48000 0x0 0x100000>;
			};
	};

	tcm_0a@ffe00000 {
		no-map;
		reg = <0x0 0xffe00000 0x0 0x10000>;
		phandle = <0x40>;
		status = "okay";
		compatible = "mmio-sram";
	};

	tcm_0b@ffe20000 {
		no-map;
		reg = <0x0 0xffe20000 0x0 0x10000>;
		phandle = <0x41>;
		status = "okay";
		compatible = "mmio-sram";
	};

	tcm_1a@ffe90000 {
		no-map;
		reg = <0x0 0xffe90000 0x0 0x10000>;
		phandle = <0x42>;
		status = "okay";
		compatible = "mmio-sram";
	};

	tcm_1b@ffeb0000 {
		no-map;
		reg = <0x0 0xffeb0000 0x0 0x10000>;
		phandle = <0x43>;
		status = "okay";
		compatible = "mmio-sram";
	};

	rf5ss@ff9a0000 {
		compatible = "xlnx,zynqmp-r5-remoteproc";
		#address-cells = <0x2>;
		#size-cells = <0x2>;
		ranges;
		xlnx,cluster-mode = <1>;
		reg = <0x0 0xff9a0000 0x0 0x10000>;

		r5f_0 {
			compatible = "xilinx,r5f";
			#address-cells = <0x2>;
			#size-cells = <0x2>;
			ranges;
			sram = <0x40 0x41>;
			memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
			power-domain = <0x18110005>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
		};
		r5f_1 {
			compatible = "xilinx,r5f";
			#address-cells = <0x2>;
			#size-cells = <0x2>;
			ranges;
			sram = <0x42 0x43>;
			memory-region = <&rproc_1_reserved>, <&rpu1vdev0buffer>, <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
			power-domain = <0x18110006>;
			mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
			mbox-names = "tx", "rx";
		};
	};

	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 33 4>;
		xlnx,ipi-id = <5>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff3f0ac0 {
			reg = <0xff3f0ac0 0x20>,
			      <0xff3f0ae0 0x20>,
			      <0xff3f0740 0x20>,
			      <0xff3f0760 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <3>;
		};
		/* APU<->RPU1 IPI mailbox controller */
		ipi_mailbox_rpu1: mailbox@ff3f0b00 {
			reg = <0xff3f0b00 0x20>,
			      <0xff3f0b20 0x20>,
			      <0xff3f0940 0x20>,
			      <0xff3f0960 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <4>;
		};
	};
};

The memory regions described in the reserved-memory node above, and referenced by the memory-region property of the remoteproc node, are needed either as regions into which the ELF binary is loaded or as regions the binary uses at runtime for OpenAMP-related IPC.

In this case, rpu0vdev0buffer@3ed48000 (range 0x3ed48000 - 0x3ee48000) is not described in the linker script but is used at runtime for shared buffers. This range should therefore be described in the reserved-memory node so that it is not unduly mapped in by the kernel. The same applies to rpu0vdev0vring0@3ed40000 and rpu0vdev0vring1@3ed44000.

The default linker script for OpenAMP applications running on a single Cortex-R5 core uses the DDR range 0x3ed00000 to 0x3ed40000 as the section into which the Cortex-R5 ELF binary is loaded, so this range also needs to be described in the reserved-memory node; the ELF can then run there without overwriting memory that would otherwise be mapped into the kernel.

In addition to the nodes described in reserved-memory, each Cortex-R5 core has TCM banks with their own memory ranges. These ranges are described and referenced (via the sram property) from the R5 remoteproc node they are coupled with, as they may be used for ELF loading.


For information on how to build and run the demo, refer to UG1186 (linked above); the steps are the same as for the Zynq UltraScale+ MPSoC device.

The table below lists the relevant build and run topics and their corresponding pages in UG1186.

Section                                           UG1186 Page
Setting up PetaLinux with OpenAMP                 32
Settings for the Device Tree Binary Source        34
Building the Applications and the Linux Project   38
Booting on Hardware                               38
Running the Example Applications                  39
Running the Echo Test                             40
Creating a Boot.bin File                          45


