OpenAMP 2019.1
This page is intended to complement UG1186 "LibMetal and OpenAMP User Guide" for Zynq-7000 and Zynq UltraScale+ MPSoC.
Quick try!
e.g. for ZCU102:
The echo-test application sends packets from Linux running on the quad-core Cortex-A53 to a single Cortex-R5 core running FreeRTOS, which sends them back.
- Extract the BOOT.BIN, image.ub and openamp.dtb files from a pre-built PetaLinux BSP tarball to the SD card:
host shell$ tar xvf xilinx-zcu102-v2019.1-final.bsp --strip-components=4 --wildcards */BOOT.BIN */image.ub */openamp.dtb
host shell$ cp BOOT.BIN image.ub openamp.dtb <your sd card>
- Stop at the U-Boot prompt and boot Linux from the SD card:
...
Hit any key to stop autoboot:  0
ZynqMP> mmcinfo && fatload mmc 0 ${netstart} ${kernel_img} && fatload mmc 0 0x14000000 openamp.dtb
Device: sdhci@ff170000
...
reading image.ub
31514140 bytes read in 2063 ms (14.6 MiB/s)
reading openamp.dtb
38320 bytes read in 18 ms (2 MiB/s)
ZynqMP> bootm $netstart $netstart 0x14000000
...
- At the Linux login prompt, enter 'root' for both the user and the password, then run the echo-test demo:
plnx_aarch64 login: root
Password:
root@plnx_aarch64:~# echo image_echo_test > /sys/class/remoteproc/remoteproc0/firmware
root@plnx_aarch64:~# echo start > /sys/class/remoteproc/remoteproc0/state
[ 177.375451] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
[ 177.384705] remoteproc remoteproc0: Booting fw image image_echo_test, size 644144
[ 177.396832] remoteproc remoteproc0: registered virtio0 (type 7)
[ 177.399108] virtio_rpmsg_bus virtio0: rpmsg host is online
[ 177.412370] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
[ 17Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
7.422089] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
[ 177.442121] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
root@plnx_aarch64:~# echo_test
Echo test start
Open rpmsg dev!
Docs and source code
Documents
- The following document describes libmetal APIs:
URLs to source code
Xilinx Openamp and Libmetal related code
The following locations provide access to the code:
- https://github.com/Xilinx/open-amp/tree/xilinx-v2019.1
- OpenAMP Library and Demonstration code
- https://github.com/Xilinx/libmetal/tree/xilinx-v2019.1
- Libmetal Library and Demonstration code
- https://github.com/Xilinx/meta-openamp/tree/rel-v2019.1
- Yocto recipe to build OpenAMP and Libmetal
- https://github.com/Xilinx/linux-xlnx/tree/xilinx-v2019.1
- Xilinx version of Linux kernel
- https://github.com/Xilinx/embeddedsw/tree/xilinx-v2019.1
- RPU baremetal and FreeRTOS source code used in XSDK and XSCT
Additional examples
ZynqMP Linux Master running on APU with RPMsg in kernel space and 2 RPU slaves.
Enabling Linux Drivers and other packages
Proceed as indicated in UG1186 to enable Linux remoteproc driver support and the other OpenAMP packages.
Device tree:
- Add the following to <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
/include/ "system-conf.dtsi" / { reserved-memory { #address-cells = <2>; #size-cells = <2>; ranges; rproc_0_dma_reserved: rproc@3ed40000{ no-map; compatible = "shared-dma-pool"; reg = <0x0 0x3ed40000 0x0 0x100000>; }; rproc_0_fw_reserved: rproc@3ed00000 { no-map; reg = <0x0 0x3ed00000 0x0 0x40000>; }; rproc_1_fw_reserved: rproc@3ee00000{ no-map; reg = <0x0 0x3ef00000 0x0 0x40000>; }; rproc_1_dma_reserved: rproc@3ee40000 { compatible = "shared-dma-pool"; no-map; reg = <0x0 0x3ef40000 0x0 0x100000>; }; }; zynqmp-rpu { compatible = "xlnx,zynqmp-r5-remoteproc-1.0"; #address-cells = <2>; #size-cells = <2>; ranges; core_conf = "split"; r5_0: r5@0 { #address-cells = <2>; #size-cells = <2>; ranges; memory-region = <&rproc_0_fw_reserved>, <&rproc_0_dma_reserved>; pnode-id = <0x7>; mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>; mbox-names = "tx", "rx"; tcm_0_a: tcm_0@0 { reg = <0x0 0xFFE00000 0x0 0x10000>; pnode-id = <0xf>; }; tcm_0_b: tcm_0@1 { reg = <0x0 0xFFE20000 0x0 0x10000>; pnode-id = <0x10>; }; }; r5_1: r5@1 { #address-cells = <2>; #size-cells = <2>; ranges; memory-region = <&rproc_1_fw_reserved>, <&rproc_1_dma_reserved>; pnode-id = <0x8>; mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>; mbox-names = "tx", "rx"; r5_1_tcm_a: tcm@ffe90000 { reg = <0x0 0xFFE90000 0x0 0x10000>; pnode-id = <0x11>; }; r5_1_tcm_b: tcm@ffeb0000 { reg = <0x0 0xFFEB0000 0x0 0x10000>; pnode-id = <0x12>; }; }; }; zynqmp_ipi1 { compatible = "xlnx,zynqmp-ipi-mailbox"; interrupt-parent = <&gic>; interrupts = <0 29 4>; xlnx,ipi-id = <7>; #address-cells = <1>; #size-cells = <1>; ranges; /* APU<->RPU0 IPI mailbox controller */ ipi_mailbox_rpu0: mailbox@ff90600 { reg = <0xff990600 0x20>, <0xff990620 0x20>, <0xff9900c0 0x20>, <0xff9900e0 0x20>; reg-names = "local_request_region", "local_response_region", "remote_request_region", "remote_response_region"; #mbox-cells = <1>; xlnx,ipi-id = <1>; }; }; zynqmp_ipi2 { compatible = "xlnx,zynqmp-ipi-mailbox"; interrupt-parent = <&gic>; interrupts = <0 30 4>; xlnx,ipi-id = <8>; #address-cells = <1>; #size-cells = <1>; ranges; /* APU<->RPU1 IPI mailbox controller */ ipi_mailbox_rpu1: mailbox@ff3f0b00 { reg = <0xff3f0b00 0x20>, <0xff3f0b20 0x20>, <0xff3f0940 0x20>, <0xff3f0960 0x20>; reg-names = "local_request_region", "local_response_region", "remote_request_region", "remote_response_region"; #mbox-cells = <1>; xlnx,ipi-id = <2>; }; }; }; */ &i2c1 { /* FIXME PL i2c via PCA9306 - u45 */ /* FIXME MSP430 - u41 - not detected */ i2c-mux@74 { /* u34 */ i2c@0 { /* i2c mw 74 0 1 */ /* * IIC_EEPROM 1kB memory which uses 256B blocks * where every block has different address. * 0 - 256B address 0x54 * 256B - 512B address 0x55 * 512B - 768B address 0x56 * 768B - 1024B address 0x57 */ eeprom: eeprom@54 { /* u23 */ compatible = "atmel,24c08"; reg = <0x54>; }; }; }; };
Rebuild the device tree:
petalinux-build -c device-tree
For RPU 0 (cortex_r5_0)
Proceed as documented in UG1186 to generate remote processor openamp applications with Xilinx SDK.
RPU 0 is also used by default for the pre-built applications provided with Petalinux BSPs.
- Check that the RING_TX and RING_RX entries in rsc_table.c are within the reserved memory section defined in the device tree but do not overlap any other sections within it (e.g. the DDR regions for RPU 0/1, vring device nodes, etc.); see the sketch below.
- Check that the linker script addresses match and fit within the zynqmp_r5_rproc memory sections in the DTS.
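For orientation, here is a minimal sketch of what those rsc_table.c entries could look like for RPU 0, assuming the rproc_0_dma_reserved region at 0x3ed40000 (length 0x100000) from the device tree above; the addresses are illustrative and the values in your generated demo may differ:

/* Hypothetical vring placement inside rproc_0_dma_reserved (0x3ed40000, length 0x100000). */
#define RING_TX 0x3ED40000   /* TX vring: inside the reserved DMA pool                   */
#define RING_RX 0x3ED44000   /* RX vring: inside the same pool, not overlapping RING_TX  */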
Example: Running two echo_test applications concurrently on Linux, each communicating with a separate RPU
- Use PetaLinux to build and boot your target, then log in to the Linux console over the serial port.
- If you haven't added the remote processor firmware applications to your Linux root filesystem (see chapter 3 of UG1186), they can be copied over tftp into the target directory /lib/firmware.
- Check that the remoteproc driver is already loaded (it normally is if your device tree is properly configured):
root@plnx_aarch64:/lib/firmware# lsmod
                    Tainted: G
virtio_rpmsg_bus 20480 0 - Live 0xffffff800098e000
rpmsg_core 16384 1 virtio_rpmsg_bus, Live 0xffffff800097c000
zynqmp_r5_remoteproc 16384 0 - Live 0xffffff800096a000
remoteproc 40960 1 zynqmp_r5_remoteproc, Live 0xffffff8000959000
virtio 16384 2 virtio_rpmsg_bus,remoteproc, Live 0xffffff8000951000
virtio_ring 20480 2 virtio_rpmsg_bus,remoteproc, Live 0xffffff8000948000
uio_pdrv_genirq 16384 0 - Live 0xffffff8000940000
Start RPU-0:
root@plnx_aarch64:/lib/firmware# echo image_echo_test_r5_0 > /sys/class/remoteproc/remoteproc0/firmware
root@plnx_aarch64:/lib/firmware#
root@plnx_aarch64:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc0/state
root@plnx_aarch64:/lib/firmware# [70982.961635] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
[70982.971366] remoteproc remoteproc0: Booting fw image image_echo_test_r5_0, size 638724
[70982.985672] virtio_rpmsg_bus virtio0: rpmsg host is online
[70982.993691] remoteproc remoteproc0: registered virtio0 (type 7)
[70983.002197] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
[7Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
0983.012367] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
[70983.032821] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
Start RPU-1:
root@plnx_aarch64:/lib/firmware# echo image_echo_test_r5_1 > /sys/class/remoteproc/remoteproc1/firmware
root@plnx_aarch64:/lib/firmware#
root@plnx_aarch64:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc1/state
[71185.157615] remoteproc remoteproc1: powering up ff9a0200.zynqmp_r5_rproc
[71185.167453] remoteproc remoteproc1: Booting fw image image_echo_test_r5_1, size 639140
[71185.182180] virtio_rpmsg_bus virtio1: rpmsg host is online
[71185.190226] remoteproc remoteproc1: registered virtio1 (type 7)
[71185.198724] zynqmp_r5_remoteproc ff9a0200.zynqmp_r5_rproc: RPU boot from TCM.
[7Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
1185.208915] remoteproc remoteproc1: remote processor ff9a0200.zynqmp_r5_rproc is now up
[71185.229420] virtio_rpmsg_bus virtio1: creating channel rpmsg-openamp-demo-channel addr 0x1
Run the echo_test Linux application with RPU-0 using either the serial port or a telnet or ssh connection:
root@plnx_aarch64:/lib/firmware# echo_test
Echo test start
Open rpmsg dev!
Run a concurrent echo_test Linux application with RPU-1 using another connection (telnet, ssh, ...):
root@plnx_aarch64:/lib/firmware# echo_test -d /dev/rpmsg1
Echo test start
Open rpmsg dev!
Note: The order in which you start the RPUs determines which /dev/rpmsgX device is used with each RPU.
In the above case /dev/rpmsg0 is used for RPU-0.
If, however, RPU-1 had been started first, it would have been associated with /dev/rpmsg0 and RPU-0 would have used /dev/rpmsg1.
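To illustrate what echo_test does with these device nodes, below is a minimal, hypothetical C sketch that opens /dev/rpmsg0, writes a single payload and reads the echo back; the real echo_test application in meta-openamp performs additional channel setup, payload sweeps and validation:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char tx[] = "hello from APU";        /* payload sent to the RPU firmware  */
    char rx[sizeof(tx)] = { 0 };

    /* /dev/rpmsg0 is the channel created for the first RPU that was started */
    int fd = open("/dev/rpmsg0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/rpmsg0");
        return 1;
    }
    write(fd, tx, sizeof(tx));           /* echo firmware sends the data back */
    read(fd, rx, sizeof(rx));
    printf("sent: %s, received: %s\n", tx, rx);
    close(fd);
    return 0;
}

Use /dev/rpmsg1 instead (as in the echo_test -d invocation above) to target the other RPU.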
ZynqMP Linux Master running on APU with RPMsg in userspace and 2 RPU slaves.
1. Add the following device tree content to <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        /* Reserved memory for both firmware and shared memory */
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x8000000>;
        };
    };
    amba {
        vring: vring@0 {
            compatible = "vring_uio";
            reg = <0x0 0x3ed40000 0x0 0x40000>;
        };
        shm0: shm@0 {
            compatible = "shm_uio";
            reg = <0x0 0x3ed80000 0x0 0x80000>;
        };
        vring1: vring@1 {
            compatible = "vring_uio";
            reg = <0x0 0x3ef00000 0x0 0x40000>;
        };
        shm1: shm@1 {
            compatible = "shm_uio";
            reg = <0x0 0x3ef40000 0x0 0x80000>;
        };
        ipi0: ipi@0 {
            compatible = "ipi_uio";
            reg = <0x0 0xff340000 0x0 0x1000>;
            interrupt-parent = <&gic>;
            interrupts = <0 29 4>;
        };
    };
};
2. Enable the OpenAMP and libmetal packages with "petalinux-config -c rootfs":
Filesystem Packages --->
    libs --->
        libmetal --->
            [*] libmetal
        open-amp --->
            [*] open-amp
Building Linux OpenAMP user space application with XSDK
- Create an Empty Application for Linux on the A53:
- OS: Linux
- Processor: psu_cortexa53
- Linux sysroot: the sysroot from the Petalinux project:
- <plnx-proj-root>/images/linux/sdk/sysroots/aarch64-xilinx-linux
- Once the project is built, open its properties:
- C/C++ Build --> Settings
- Tool Settings Tab Libraries
- Libraries (-l): add "metal" and "open_amp"
- Miscellaneous
- in Linker Flags, add "--sysroot=/<plnx-proj-root>/images/linux/sdk/sysroots/aarch64-xilinx-linux"
- Copy the following files for OpenAMP application for ZynqMP into the directory:
- And the Linux files for one of the three OpenAMP applications in the link:
- OpenAMP echo-test
- OpenAMP matrix multiplication Demo
- OpenAMP RPC Demo
- When building the application, note a few configuration parameters:
- If building Linux application to communicate with RPU 1:
- in platform_info.c, change IPI_MASK to 0x200
- The RSC_RPROC_MEM entries must be within the corresponding vring device tree node.
- Update the following inside of the Linux application's platform_info.c to reflect possible changes to the device tree nodes:
- IPI_DEV_NAME
- VRING_DEV_NAME
- SHM_DEV_NAME
- The above means that if the new vring entry in the device tree is at 0x3ef00000, then the string for VRING_DEV_NAME should be "3ef00000.vring", as shown in sysfs on the target.
- Update RING_TX and RING_RX to reflect the vring entry in the device tree (see the sketch after this list).
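For illustration, here is a hypothetical set of values in the Linux application's platform_info.c for talking to RPU-1, assuming the vring1, shm1 and ipi0 nodes from the device tree above; the exact macro names and addresses in your copy of the demo sources may differ:

/* Assumed values for an RPU-1 configuration; adjust to match your device tree. */
#define IPI_DEV_NAME    "ff340000.ipi"    /* UIO name of the ipi0 node         */
#define VRING_DEV_NAME  "3ef00000.vring"  /* UIO name of the vring1 node       */
#define SHM_DEV_NAME    "3ef40000.shm"    /* UIO name of the shm1 node         */
#define IPI_MASK        0x200             /* RPU-1 IPI mask, as noted above    */
#define RING_TX         0x3ef00000        /* TX vring inside the vring1 region */
#define RING_RX         0x3ef04000        /* RX vring inside the vring1 region */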
Install XSDK-built Linux applications in the Petalinux project
The Linux applications can be installed via a yocto recipe as follows:
SUMMARY = "Simple test application" SECTION = "PETALINUX/apps" LICENSE = "MIT" LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302" SRC_URI = "file://<myapp>" S = "${WORKDIR}" INSANE_SKIP_${PN} = "arch" do_install() { install -d ${D}/lib/firmware install -m 0644 ${S}/<myapp> ${D}/lib/firmware/<myapp> } FILES_${PN} = "/lib/firmware/<myapp>
Building remote processor demo applications to run on RPU 0 (cortex_r5_0) with Xilinx SDK
Proceed as documented in UG1186 to generate remote processor openamp applications with Xilinx SDK.
Building remote processor demo applications to run on RPU 1 (cortex_r5_1) with Xilinx SDK
The remote processor application code (echo_test, matrix multiply, rpc demo) is by default set up to run on RPU 0 and needs to be slightly modified for RPU-1.
- When RPU-1 is selected in Xilinx SDK, the generated code needs to be modified as follows:
- Edit rsc_table.c such that RING_TX and RING_RX are within the memory covered by the vring device tree node (see the sketch below).
- Check that the linker script addresses fall within the reserved memory and do not overlap other regions.
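As an illustration, a hypothetical rsc_table.c setting for RPU-1, assuming the vring region at 0x3ef00000 (length 0x40000) used elsewhere on this page; adjust the addresses to your own reserved-memory layout:

/* RPU-1: keep both vrings inside the 0x3ef00000 - 0x3ef40000 vring region. */
#define RING_TX 0x3EF00000
#define RING_RX 0x3EF04000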
Example: Running two proxy_app-openamp applications concurrently on Linux, each communicating with one RPU
1. Generate BOOT.BIN with RPU firmware built from XSDK. Below is a BIF example:
the_ROM_image:
{
    [fsbl_config] a53_x64
    [bootloader] <plnx project>/images/linux/zynqmp_fsbl.elf
    [destination_device=pl] <plnx project>/images/linux/download.bit
    [destination_cpu=pmu] <plnx project>/images/linux/pmufw.elf
    [destination_cpu=r5-0] /path/to/openamp_firmware_on_rpu0.elf
    [destination_cpu=r5-1] /path/to/openamp_firmware_on_rpu1.elf
    [destination_cpu=a53-0, exception_level=el-3, trustzone] <plnx project>/images/linux/bl31.elf
    [destination_cpu=a53-0, exception_level=el-2] <plnx project>/images/linux/u-boot.elf
}
2. When booting, the RPU firmware for RPU 0 and RPU 1 should produce output resembling the following:
Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
3. Run the proxy_app-openamp Linux application with RPU-0 using either the serial port or a telnet or ssh connection:
root@plnx_aarch64# proxy_app-openamp
metal: info: metal_uio_dev_open: No IRQ for device 3ed40000.vring.
metal: info: Initializating I/Os redirection...
metal: info: metal_uio_dev_open: No IRQ for device 3ed40000.vring.
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
Master> Remote proc resource initialized.
Master> RPMSG channel has created.
Remote>Baremetal Remote Procedure Call (RPC) Demonstration
4. Run a concurrent proxy_app-openamp Linux application with RPU-1 using another connection (telnet, ssh...):
root@xilinx-zcu102-2019_2:~# linux_proxy_userspace_to_r5_1.elf
metal: info: metal_uio_dev_open: No IRQ for device 3ef00000.vring.
metal: iInitializating I/Os redirection...
nfo: metal_uio_dev_open: No IRQ for device 3ef00000.vring.
metal: info: metal_uio_dev_open: No IRQ for device 3ef40000.shm.
Master> Remote proc resource initialized.
Master> RPMSG channel has created.
Remote>Baremetal Remote Procedure Call (RPC) Demonstration
ZynqMP Linux Master running on APU with RPMsg in kernel space and one RPU slave.
When running the RPUs in split mode with only one RPU as an OpenAMP slave, the second RPU can still run another, non-OpenAMP application.
- RPU-0 slave:
- Petalinux BSPs provide a default template to generate a DTB with support for OpenAMP running on RPU-0, see:
- <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/openamp-overlay.dtsi
- Add its content to the file <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
- RPU-1 slave:
- Proceed as for the two-RPU configuration above and edit your device tree to remove the unused 'zynqmp_r5_rproc' entry and the associated nodes (tcm, pd, ...) that are no longer needed.
ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware
Overview
The information below is intended to provide guidance to users who wish to set up Linux on the APU plus a bare-metal/RTOS application on the RPU. This configuration relies on the FSBL to start the software running on the APU; APU Linux then loads the RPU using remoteproc.
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
- As directed in UG 1186, create an application inside of the Petalinux project to install the firmware into the Linux host's file system in /lib/firmware. To create a template for a yocto recipe to install the firmware, do the following:
- Create yocto application inside of Petalinux project
petalinux-create -t apps --template install -n <app_name> --enable
- Copy the firmware (.elf file) into the project-spec/meta-user/recipes-apps/<app_name>/files/ directory
- Modify the project-spec/meta-user/recipes-apps/<app_name>/<app_name>.bb to install the remote processor firmware in the RootFS as follows:
SUMMARY = "Simple test application" SECTION = "PETALINUX/apps" LICENSE = "MIT" LIC_FILES_CHKSUM ="file:${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302" SRC_URI = "file:<myfirmware>" S = "${WORKDIR}" INSANE_SKIP_${PN} = "arch" do_install() { install -d ${D}/lib/firmware install -m 0644 ${S}/<myfirmware> ${D}/lib/firmware/<myfirmware> } FILES_${PN} = "/lib/firmware/<myfirmware>
- Build Linux images with the "petalinux-build" command inside the PetaLinux project.
- Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;
    /* R5 0 firmware memory in DDR */
    rproc_0_fw_reserved: rproc@3ed00000 {
        no-map;
        reg = <0x0 0x3ed00000 0x0 0x40000>;
    };
    /* DMA shared memory between APU and RPU */
    rproc_0_dma_reserved: rproc@3ed40000 {
        compatible = "shared-dma-pool";
        no-map;
        reg = <0x0 0x3ed40000 0x0 0x100000>;
    };
};

zynqmp-r5-remoteproc@0 {
    compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
    core_conf = "split";
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;
    r5_0: r5@0 {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        memory-region = <&rproc_0_fw_reserved>, <&rproc_0_dma_reserved>;
        pnode-id = <0x7>;
        mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
        mbox-names = "tx", "rx";
        tcm_a: tcm@0 {
            reg = <0x0 0xFFE00000 0x0 0x10000>;
            pnode-id = <0xf>;
        };
        tcm_b: tcm@1 {
            reg = <0x0 0xFFE20000 0x0 0x10000>;
            pnode-id = <0x10>;
        };
    };
};

zynqmp_ipi {
    compatible = "xlnx,zynqmp-ipi-mailbox";
    interrupt-parent = <&gic>;
    interrupts = <0 29 4>;
    xlnx,ipi-id = <7>;
    #address-cells = <1>;
    #size-cells = <1>;
    ranges;
    /* APU<->RPU0 IPI mailbox controller */
    ipi_mailbox_rpu0: mailbox@ff990600 {
        reg = <0xff990600 0x20>, <0xff990620 0x20>, <0xff9900c0 0x20>, <0xff9900e0 0x20>;
        reg-names = "local_request_region", "local_response_region", "remote_request_region", "remote_response_region";
        #mbox-cells = <1>;
        xlnx,ipi-id = <1>;
    };
};
Run the following to build your petalinux project.
petalinux-build
After booting the PetaLinux images, run the following to load and start the RPU firmware on the RPU:
echo <name of firmware> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
- Run the Linux application
- Stop the firmware (echo stop > /sys/class/remoteproc/remoteproc0/state)
ZynqMP Linux loads RPU, Linux OpenAMP Application talks to RPU OpenAMP Application
Overview
Setting up Remote Firmware
The user can, for example, use a structure similar to the OpenAMP RPU applications created in the Building Remote Applications sections of UG1186.
To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
- As directed in UG 1186, create a yocto recipe inside of the Petalinux project to install the firmware into the Linux host's file system in /lib/firmware.
Refer to the previous example: "ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware" for a guide on how to create such a yocto recipe.
- Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;
    /* R5 0 firmware memory in DDR */
    rproc_0_fw_reserved: rproc@3ed00000 {
        no-map;
        reg = <0x0 0x3ed00000 0x0 0x40000>;
    };
    /* DMA shared memory between APU and RPU */
    rproc_0_dma_reserved: rproc@3ed40000 {
        compatible = "shared-dma-pool";
        no-map;
        reg = <0x0 0x3ed40000 0x0 0x100000>;
    };
};

zynqmp-r5-remoteproc@0 {
    compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
    core_conf = "split";
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;
    r5_0: r5@0 {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        memory-region = <&rproc_0_fw_reserved>, <&rproc_0_dma_reserved>;
        pnode-id = <0x7>;
        mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
        mbox-names = "tx", "rx";
        tcm_a: tcm@0 {
            reg = <0x0 0xFFE00000 0x0 0x10000>;
            pnode-id = <0xf>;
        };
        tcm_b: tcm@1 {
            reg = <0x0 0xFFE20000 0x0 0x10000>;
            pnode-id = <0x10>;
        };
    };
};

zynqmp_ipi {
    compatible = "xlnx,zynqmp-ipi-mailbox";
    interrupt-parent = <&gic>;
    interrupts = <0 29 4>;
    xlnx,ipi-id = <7>;
    #address-cells = <1>;
    #size-cells = <1>;
    ranges;
    /* APU<->RPU0 IPI mailbox controller */
    ipi_mailbox_rpu0: mailbox@ff990600 {
        reg = <0xff990600 0x20>, <0xff990620 0x20>, <0xff9900c0 0x20>, <0xff9900e0 0x20>;
        reg-names = "local_request_region", "local_response_region", "remote_request_region", "remote_response_region";
        #mbox-cells = <1>;
        xlnx,ipi-id = <1>;
    };
};

/* UIO device node for vring device memory */
vring: vring@0 {
    compatible = "vring_uio";
    reg = <0x0 0x3ed40000 0x0 0x40000>;
};

/* UIO device node for shared memory device memory */
shm0: shm@0 {
    compatible = "shm_uio";
    reg = <0x0 0x3ed80000 0x0 0x80000>;
};

/* UIO device node for IPI device */
ipi0: ipi@0 {
    compatible = "ipi_uio";
    reg = <0x0 0xff340000 0x0 0x1000>;
    interrupt-parent = <&gic>;
    interrupts = <0 29 4>;
};
Build Petalinux with required packages
1. Enable the required packages with petalinux tools. For example, go to your petalinux project's topmost directory and start the rootfs configuration utility:
petalinux-config -c rootfs
2. Enable the required rootfs packages for the application. If you are running the sample applications from UG1186, enable the following packages:
Filesystem Packages -->
    libs -->
        libmetal -->
            [*] libmetal
        openamp -->
            [*] open-amp
    misc -->
        openamp-fw-echo-testd -->
            [*] openamp-fw-echo-testd
        openamp-fw-mat-muld -->
            [*] openamp-fw-mat-muld
        openamp-fw-rpc-demod -->
            [*] openamp-fw-rpc-demod
        rpmsg-echo-test -->
            [*] rpmsg-echo-test
        rpmsg-mat-mul -->
            [*] rpmsg-mat-mul
        rpmsg-proxy-app -->
            [*] rpmsg-proxy-app
3. Then build the petalinux project.
petalinux-build
Load and start the firmware:
echo <fw_name> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
Stop the firmware:
echo stop > /sys/class/remoteproc/remoteproc0/state
ZynqMP Linux on APU communicating with RPU via Shared Memory without OpenAMP
Overview
The information below is intended to provide guidance to users who wish to set up Linux on the APU alongside a bare-metal or RTOS application on the RPU. We assume that Linux and the RPU communicate via shared memory. IPI can be used to further coordinate communication between the processors. The use of IPI is documented in the section titled
- Device Tree Settings for Linux
- To make the shared memory device accessible to Linux running on the APU, some modifications to the device tree are required.
- For example, when configuring the device tree for the OpenAMP echo_test demo found here, a shared memory node is placed in the amba section, e.g.:
/ {
    amba {
        /* UIO device node for shared memory device memory */
        shm0: shm@0 {
            compatible = "shm_uio";
            reg = <0x0 0x3ed80000 0x0 0x80000>;
        };
    };
};
Configuring the Petalinux project
The OpenAMP applications use libmetal to access shared memory, so the libmetal package must be enabled in your PetaLinux project. It can be enabled in the rootfs configuration using the petalinux-config utility.
run:
petalinux-config -c rootfs
and then in the utility enable the following packages:
Filesystem Packages -->
    libs -->
        libmetal -->
            [*] libmetal
        openamp -->
            [*] open-amp
    misc -->
        openamp-fw-echo-testd -->
            [*] openamp-fw-echo-testd
        openamp-fw-mat-muld -->
            [*] openamp-fw-mat-muld
        openamp-fw-rpc-demod -->
            [*] openamp-fw-rpc-demod
        rpmsg-echo-test -->
            [*] rpmsg-echo-test
        rpmsg-mat-mul -->
            [*] rpmsg-mat-mul
        rpmsg-proxy-app -->
            [*] rpmsg-proxy-app
Communicating via Shared memory
The information below assumes that the shared memory node is visible in Linux userspace.
Using the Libmetal API, we can read from and write to shared memory with the following functions:
static inline uint64_t metal_io_read(struct metal_io_region *io, unsigned long offset,
                                     memory_order order, int width);

int metal_io_block_read(struct metal_io_region *io, unsigned long offset,
                        void *restrict dst, int len);
and
static inline void metal_io_write(struct metal_io_region *io, unsigned long offset,
                                  uint64_t value, memory_order order, int width);

int metal_io_block_write(struct metal_io_region *io, unsigned long offset,
                         const void *restrict src, int len);
An example showing the use of these functions in Linux userspace can be found here. At the link are examples showing reading from and writing to shared memory, as well as initialization and cleanup of libmetal resources.
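As a self-contained sketch (not taken from the linked example), the following shows how a Linux userspace program might open the shm0 UIO node defined above with libmetal and use the block read/write calls; the device name "3ed80000.shm" is assumed from the reg value in the device tree:

#include <stdio.h>
#include <stdint.h>

#include <metal/device.h>
#include <metal/io.h>
#include <metal/sys.h>

#define BUS_NAME     "platform"      /* Linux UIO devices live on the platform bus */
#define SHM_DEV_NAME "3ed80000.shm"  /* assumed: matches the shm0 node shown above */

int main(void)
{
    struct metal_init_params init_param = METAL_INIT_DEFAULTS;
    struct metal_device *shm_dev = NULL;
    struct metal_io_region *shm_io;
    char tx[] = "shared memory test";
    char rx[sizeof(tx)] = { 0 };

    metal_init(&init_param);                         /* bring up libmetal        */
    if (metal_device_open(BUS_NAME, SHM_DEV_NAME, &shm_dev)) {
        fprintf(stderr, "failed to open %s\n", SHM_DEV_NAME);
        metal_finish();
        return 1;
    }
    shm_io = metal_device_io_region(shm_dev, 0);     /* first (only) I/O region  */

    metal_io_block_write(shm_io, 0, tx, sizeof(tx)); /* write at offset 0        */
    metal_io_block_read(shm_io, 0, rx, sizeof(rx));  /* read the same bytes back */
    printf("read back: %s\n", rx);

    metal_device_close(shm_dev);
    metal_finish();
    return 0;
}

Link the program against libmetal (e.g. -lmetal with the PetaLinux sysroot mentioned earlier on this page).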
How to Generate BOOT.BIN
This section assumes that you have already run petalinux-build in your PetaLinux project to build all the necessary components for your embedded Linux solution, in addition to the firmware to run on an RPU. Use the PetaLinux tools to construct the BOOT.BIN that you can then put on an SD card to boot your ZynqMP board. Below is a sample bootgen.bif file, created or modified in the top-level directory of your PetaLinux project, that you can use to help construct the BOOT.BIN:
the_ROM_image:
{
    [fsbl_config] a53_x64
    [bootloader, destination_cpu=a53-0] ./images/linux/zynqmp_fsbl.elf
    [pmufw_image, destination_cpu=a53-0] ./images/linux/pmufw.elf
    [destination_cpu=a53-0, exception_level=el-3, trustzone] ./images/linux/bl31.elf
    [destination_cpu=a53-0, exception_level=el-2] ./images/linux/u-boot.elf
}
Using this .bif file and the PetaLinux tools, build a BOOT.BIN file that you can use for your ZynqMP board:
petalinux-package --boot --force --u-boot ./images/linux/u-boot.elf --cpu r5-0 --add /path/to/firmware
Here we have shown a few things:
- The --cpu option with r5-0 specifies to which RPU the data file (your firmware) will go; you can also use the r5-1 or r5-lockstep options.
- The --add option, whose argument specifies the path to your firmware.
- The --force option overwrites any existing BOOT.BIN file in the current directory.
- The --u-boot option specifies the location of u-boot.elf.
Feature Changes
Module Name | Change | Link |
OpenAMP echo test and matrix multiplication demos | Run without manual input from user. Instead, add -n command line option to specify number of iterations | Link to patch |
Xen Dom0 and DomU support for OpenAMP running in RPMsg userspace | support for these two configurations in 2018.1 | |
RPMsg char driver support (experimental) | Able to run RPMsg OpenAMP demos using rpmsg char driver. The patch enabling this is not yet in upstream. | Link to patch |
libmetal MicroBlaze baremetal support | Add MicroBlaze baremetal support to libmetal | Link to source code |
Add fix for OpenAMP echo demo running on RPU0 and RPU1 hanging for large payload number | Update GIC driver in embeddedsw such that relevant interrupts can be mapped or unmapped to CPU. | Link to patch |