...


Getting Started with the Pre-Built Images

Here are the basic steps to boot Linux and run an OpenAMP application using pre-built images.

For example, on the ZCU102:
The echo-test application sends packets from Linux, running on the quad-core Cortex-A53, to a single Cortex-R5 running FreeRTOS, which sends them back.
  • Extract BOOT.BIN, image.ub and openamp.dtb from a pre-built PetaLinux BSP and copy them to an SD card.


    Code Block
    themeMidnight
    host shell$ tar xvf xilinx-zcu102-v2020.12-final.bsp --strip-components=4 --wildcards */BOOT.BIN */image.ub */openamp.dtb
    host shell$ cp BOOT.BIN image.ub openamp.dtb <your sd card>


Note: Alternatively, if you already created a PetaLinux project with a provided BSP for your board, pre-built images can also be found under the <your project>/pre-built/linux/images/ directory.
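For that case, copying the pre-built images could look like the following sketch, where <your project> and the SD card mount point are placeholders to substitute:

```shell
# Hypothetical paths: substitute your PetaLinux project root and SD card mount point.
cd <your project>/pre-built/linux/images
cp BOOT.BIN image.ub openamp.dtb <your sd card>
```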
  • Stop at the U-Boot prompt and boot Linux from the SD card.


    Code Block
    themeMidnight
    ...
    Hit any key to stop autoboot:  0 
    u-boot> fatload mmc 0 0x3000000 Image
    u-boot> fatload mmc 0 0x2000000 openamp.dtb
    u-boot> fatload mmc 0 0x2A00000  rootfs.cpio.gz.u-boot
    u-boot> booti 0x3000000 0x2A00000 0x2000000
    
    
    ...


Note: As an alternative to the steps above for booting from an SD card, you can boot the board via JTAG. For this you need a connected JTAG cable, installed JTAG drivers, and a PetaLinux project created from a provided BSP. Then go into the <your project>/pre-built/linux/images directory, replace the file system.dtb with openamp.dtb, and enter: "petalinux-boot --jtag --prebuilt 3"
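The JTAG alternative described in the note can be sketched as the following session (paths are placeholders, and the JTAG cable and drivers are assumed to already be set up):

```shell
# Replace the default device tree with the OpenAMP one, then boot over JTAG.
cd <your project>/pre-built/linux/images
cp openamp.dtb system.dtb
petalinux-boot --jtag --prebuilt 3
```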
  • At the Linux login prompt, enter 'root' for the user and 'root' for the password, then run the echo-test demo.


    Code Block
    themeMidnight
    plnx_aarch64 login: root
    Password: 
    root@plnx_aarch64:~# echo image_echo_test > /sys/class/remoteproc/remoteproc0/firmware 
    root@plnx_aarch64:~# echo start > /sys/class/remoteproc/remoteproc0/state   
    [  177.375451] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
    [  177.384705] remoteproc remoteproc0: Booting fw image image_echo_test, size 644144
    [  177.396832] remoteproc remoteproc0: registered virtio0 (type 7)
    [  177.399108] virtio_rpmsg_bus virtio0: rpmsg host is online
    [  177.412370] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
    Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    [  177.422089] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
    [  177.442121] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
    root@plnx_aarch64:~# echo_test
     Echo test start 
     Open rpmsg dev! 
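Once the test completes, the remote processor can be shut down through the same remoteproc sysfs interface used to start it (run on the target board):

```shell
# Stop the R5 firmware; the state file should then read "offline".
echo stop > /sys/class/remoteproc/remoteproc0/state
cat /sys/class/remoteproc/remoteproc0/state
```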


Docs and source code

Documents

  • The following document describes the libmetal APIs: libmetal-doc-20170418.pdf (attached file)

URLs to source code


Xilinx OpenAMP and libmetal related code

The following locations provide access to the code:

Additional examples

ZynqMP Linux Master running on APU with RPMsg in kernel space and one RPU slave.

When running the RPUs in split mode with only one RPU as an OpenAMP slave, the second RPU can still run another, non-OpenAMP application.

Firmware:

  • RPU 0:
    • use the default application in the PetaLinux BSP: build with petalinux-build -c openamp-fw-echo-testd
  • RPU 1:
    • in the PetaLinux project, modify <plnx proj root>/components/yocto/layers/meta-xilinx-tools/recipes-openamp/examples/openamp-fw.inc, changing XSCTH_PROC_zynqmp ?= "psu_cortexr5_0" to "psu_cortexr5_1"
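The recipe edit for RPU 1 can be scripted, for example with sed; this is a sketch, with the path taken from the PetaLinux layout given above:

```shell
# Point the firmware recipe at the second R5 core (psu_cortexr5_1).
INC="<plnx proj root>/components/yocto/layers/meta-xilinx-tools/recipes-openamp/examples/openamp-fw.inc"
sed -i 's/psu_cortexr5_0/psu_cortexr5_1/' "$INC"
grep XSCTH_PROC_zynqmp "$INC"
```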

Device Tree:

  • Append the following to <plnx proj root>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
  • Note the commented-out snippet within for RPU1 instead of RPU0.
  • The memory regions described in the reserved-memory node below, and referenced by the memory-region property of the remoteproc node, are needed either as sections the ELF binary is loaded into or as memory the binary uses at runtime for OpenAMP-related IPC communication.
    • In this case the range 0x3ed48000 - 0x3ee48000 of rpu0vdev0buffer@3ed48000 is not described in the linker script but is used at runtime for shared buffers. This range should therefore be described in the reserved-memory node so that it is not unduly mapped in by the kernel. The same applies to rpu0vdev0vring0@3ed40000 and rpu0vdev0vring1@3ed44000.
    • The default linker script for OpenAMP applications running on the R5 uses the DDR range 0x3ed00000 - 0x3ed40000 as the section the R5 ELF binary is loaded into. This range also needs to be described in the reserved-memory node so that the ELF can run there without overwriting memory that would ordinarily be mapped into the kernel.
    • In addition to the nodes described in reserved-memory, each R5 core has its own coupled TCM banks with fixed memory ranges. These are described as TCM nodes under the corresponding R5 remoteproc node, since they may be used for ELF loading.
Code Block
themeMidnight
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
			no-map;
			reg = <0x0 0x3ed40000 0x0 0x4000>;
		};
		rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
			no-map;
			reg = <0x0 0x3ed44000 0x0 0x4000>;
		};
		rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
			no-map;
			reg = <0x0 0x3ed48000 0x0 0x100000>;
		};
		rproc_0_reserved: rproc@3ed00000 {
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x40000>;
		};
	};
 
	zynqmp-rpu {
		compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		core_conf = "split";
		reg = <0x0 0xFF9A0000 0x0 0x10000>;
		r5_0: r5@0 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
			pnode-id = <0x7>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
			tcm_0_a: tcm_0@0 {
				reg = <0x0 0xFFE00000 0x0 0x10000>;
				pnode-id = <0xf>;
			};
			tcm_0_b: tcm_0@1 {
				reg = <0x0 0xFFE20000 0x0 0x10000>;
				pnode-id = <0x10>;
			};
		};
		/* if instead for RPU1 use the following: 
    	r5_1: r5@1 {
        	#address-cells = <2>;
        	#size-cells = <2>;
        	ranges;
        	memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>,
                <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
        	pnode-id = <0x8>;
        	mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
        	mbox-names = "tx", "rx";
        	tcm_a: tcm@0 {
            	reg = <0x0 0xFFE90000 0x0 0x10000>;
            	pnode-id = <0x11>;
        	};
        	tcm_b: tcm@1 {
            	reg = <0x0 0xFFEB0000 0x0 0x10000>;
            	pnode-id = <0x12>;
        	};
    	};
		*/
	};


	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 29 4>;
		xlnx,ipi-id = <7>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff990600 {
			reg = <0xff990600 0x20>,
			      <0xff990620 0x20>,
			      <0xff9900c0 0x20>,
			      <0xff9900e0 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <1>;
		};
	};
};
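After appending the snippet to system-user.dtsi, the device tree can be regenerated from the project root; a sketch using standard PetaLinux build commands:

```shell
# Rebuild the device tree component, then regenerate the full set of images.
petalinux-build -c device-tree
petalinux-build
```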


...