OpenAMP 2019.2


This page complements UG1186, the "LibMetal and OpenAMP User Guide," for Zynq-7000 and Zynq UltraScale+ MPSoC.


Quick try!

Here are the basic steps to boot Linux and run an OpenAMP application using pre-built images.

e.g. for ZCU102:
The echo-test application sends packets from Linux running on the quad-core Cortex-A53 to a single Cortex-R5 core running FreeRTOS, which sends them back.
  • Extract BOOT.BIN, image.ub, and openamp.dtb from a pre-built PetaLinux BSP and copy them to the SD card
  • host shell$ tar xvf xilinx-zcu102-v2019.2-final.bsp --strip-components=4 --wildcards */BOOT.BIN */image.ub */openamp.dtb
    host shell$ cp BOOT.BIN image.ub openamp.dtb <your sd card>
Note: Alternatively, if you already created a PetaLinux project with a provided BSP for your board, pre-built images can also be found under the <your project>/pre-built/linux/images/ directory.
  • Go to the U-Boot prompt and boot Linux from the SD card
  • ...
    Hit any key to stop autoboot:  0 
    
    > mmcinfo && fatload mmc 0 ${netstart} ${kernel_img} &&  fatload mmc 0 0x14000000 openamp.dtb
    Device: sdhci@ff170000
    ...
    reading image.ub
    31514140 bytes read in 2063 ms (14.6 MiB/s)
    reading openamp.dtb
    38320 bytes read in 18 ms (2 MiB/s)
    > bootm $netstart $netstart 0x14000000
    ...
Note: As an alternative to the SD-boot steps above, you can JTAG-boot the board. This requires a connected JTAG cable, installed JTAG drivers, and a PetaLinux project created from a provided BSP. Then go into the <your project>/pre-built/linux/images directory, replace the file system.dtb with openamp.dtb, and enter: "petalinux-boot --jtag --prebuilt 3"
  • At the Linux login prompt, enter 'root' for user and 'root' for password, then run the echo-test demo
  • plnx_aarch64 login: root
    Password: 
    root@plnx_aarch64:~# echo image_echo_test > /sys/class/remoteproc/remoteproc0/firmware 
    root@plnx_aarch64:~# echo start > /sys/class/remoteproc/remoteproc0/state   
    [  177.375451] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
    [  177.384705] remoteproc remoteproc0: Booting fw image image_echo_test, size 644144
    [  177.396832] remoteproc remoteproc0: registered virtio0 (type 7)
    [  177.399108] virtio_rpmsg_bus virtio0: rpmsg host is online
    [  177.412370] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
    [  17Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    7.422089] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
    [  177.442121] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
    root@plnx_aarch64:~# modprobe rpmsg_user_dev_driver
    [  188.089835] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: rpmsg_user_dev_rpmsg_drv_probe
    [  188.101250] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: new channel: 0x400 -> 0x1!
    root@plnx_aarch64:~# echo_test
     Echo test start 
     Open rpmsg dev! 
    [  190.364739] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: Sent init_msg to target 0x1.

Docs and source code

Documents

  • The following document describes libmetal APIs:

URLs to source code


Xilinx Openamp and Libmetal related code

The following locations provide access to the code:

Additional examples

OpenAMP Demos using RPMsg in kernel-space on Versal

  1. Configure PetaLinux to run the demo
    1. Download 2019.2 Versal BSP
    2. petalinux-config -c rootfs
      1. enable the following:
        • libmetal
        • open-amp
        • rpmsg-(demo name)
        • sysfsutils
        • libsysfs
        • openamp-echo-testd
    3. modify system-user.dtsi with the following
      1. /include/ "system-conf.dtsi"
        / {
        reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;
                rproc_0_dma: rproc@3ed40000 {
                    no-map;
                    compatible = "shared-dma-pool";
                    reg = <0x0 0x3ed40000 0x0 0x100000>;
                };
                rproc_0_reserved: rproc@3ed00000 {
                    no-map;
                    reg = <0x0 0x3ed00000 0x0 0x40000>;
                };
                rproc_1_dma: rproc@3ef40000 {
                    no-map;
                    compatible = "shared-dma-pool";
                    reg = <0x0 0x3ef40000 0x0 0x100000>;
                };
                rproc_1_reserved: rproc@3ef00000 {
                    no-map;
                    reg = <0x0 0x3ef00000 0x0 0x40000>;
                };
            };
         
            zynqmp-rpu {
                compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
                core_conf = "split";
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;
                r5_0: r5@0 {
                    #address-cells = <2>;
                    #size-cells = <2>;
                    ranges;
                    memory-region = <&rproc_0_reserved>, <&rproc_0_dma>;
                    pnode-id = <0x18110005>;
                    mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
                    mbox-names = "tx", "rx";
                    tcm_0_a: tcm_0@0 {
                        reg = <0x0 0xFFE00000 0x0 0x10000>;
                        pnode-id = <0x1831800b>;
                    };
                    tcm_0_b: tcm_0@1 {
                        reg = <0x0 0xFFE20000 0x0 0x10000>;
                        pnode-id = <0x1831800c>;
                    };
                };
         
                r5_1: r5@1 {
                    #address-cells = <2>;
                    #size-cells = <2>;
                    ranges;
                    memory-region = <&rproc_1_reserved>, <&rproc_1_dma>;
                    pnode-id = <0x18110006>;
                    mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
                    mbox-names = "tx", "rx";
                    tcm_1_a: tcm_1@0 {
                        reg = <0x0 0xFFE90000 0x0 0x10000>;
                        pnode-id = <0x603060d>;
                    };
                    tcm_1_b: tcm_1@1 {
                        reg = <0x0 0xFFEB0000 0x0 0x10000>;
                        pnode-id = <0x603060e>;
                    };
                };
            };
         
         
         zynqmp_ipi1 {
                compatible = "xlnx,zynqmp-ipi-mailbox";
                interrupt-parent = <&gic>;
                interrupts = <0 33 4>;
                xlnx,ipi-id = <5>;
                #address-cells = <1>;
                #size-cells = <1>;
                ranges;
         
                /* APU<->RPU0 IPI mailbox controller */
                ipi_mailbox_rpu0: mailbox@ff990600 {
                    reg = <0xff3f0ac0 0x20>,
                          <0xff3f0ae0 0x20>,
                          <0xff3f0740 0x20>,
                          <0xff3f0760 0x20>;
                    reg-names = "local_request_region",
                            "local_response_region",
                            "remote_request_region",
                            "remote_response_region";
                    #mbox-cells = <1>;
                    xlnx,ipi-id = <3>;
                };
                /* APU<->RPU1 IPI mailbox controller */
                ipi_mailbox_rpu1: mailbox@ff990640 {
                    reg = <0xff3f0b00 0x20>,
                          <0xff3f0b20 0x20>,
                          <0xff3f0940 0x20>,
                          <0xff3f0960 0x20>;
                    reg-names = "local_request_region",
                            "local_response_region",
                            "remote_request_region",
                            "remote_response_region";
                    #mbox-cells = <1>;
                    xlnx,ipi-id = <4>;
                };
            };
        };
        
    4. build project with "petalinux-build"
  2. Running the Demo on Target
    1. After starting the firmware on the target, the Linux-side output is as follows:

        $ echo_test -d <rpmsg channel name>
         Echo test start
         Open rpmsg dev /dev/rpmsg0!
         **************************************

Libmetal Demo on Versal

  1. R5 Baremetal Application
    1. source code can be found here
  2. Configure PetaLinux to run the demo
    1. Download 2019.2 Versal BSP
    2. petalinux-config -c rootfs
      1. enable the following:
        • libmetal
        • libmetal-demo
    3. install the libmetal R5 application into the PetaLinux-generated rootfs on the target with the following:
      1. create PetaLinux app to install the firmware
        1. petalinux-create -t apps --template install --name libmetal-fw --enable
        2. modify recipe as follows:
          1. copy firmware into project-spec/meta-user/recipes-apps/libmetal-fw/files/libmetal-fw 
          2. modify libmetal-fw.bb as follows
            1. #
              # This file is the libmetal-fw recipe.
              #
               
              SUMMARY = "Simple libmetal-fw application"
              SECTION = "PETALINUX/apps"
              LICENSE = "MIT"
              LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
               
              SRC_URI = "file://libmetal-fw \
                  "  
              INSANE_SKIP_${PN} = "arch"
              S = "${WORKDIR}"
               
              do_install() {
                       install -d ${D}/lib/firmware/
                       install -m 644 ${S}/libmetal-fw ${D}/lib/firmware/
              }
              FILES_${PN} = " /lib/firmware/libmetal-fw "
    4. modify system-user.dtsi with the following
      1. / {
        
        	reserved-memory {
        		#address-cells = <2>;
        		#size-cells = <2>;
        		ranges;
        		rproc_0_reserved: rproc@3ed00000 {
        			no-map;
        			reg = <0x0 0x3ed00000 0x0 0x2000000>;
        		};
        	};
        	amba {
        		/* Shared memory (APU to RPU) */
        		shm0: shm@0 {
        			compatible = "shm";
        			reg = <0x0 0x3ed80000 0x0 0x01000000>;
        
        
        		};
        		/* IPI device */
        		ipi0: ipi@0 {
        			compatible = "ipi_uio";
        			reg = <0x0 0xff360000 0x0 0x1000>;
        			interrupt-parent = <&gic>;
        			interrupts = <0 33 4>;
        		};
        
        		timer@ff0e0000 {
                                compatible = "ttc-uio";
                                reg = <0x0 0xff0e0000 0x0 0x1000>;
                        };
        	};
        };
        
        &sdhci0 {
        	status = "disabled";
        };
        
        &sdhci1 {
        	status = "disabled";
        };
        
        
    5. build project with "petalinux-build"
  3. Running the Demo on Target
    1. After starting the firmware on the target, the Linux-side output is as follows:
      1. # <linux libmetal application
        metal: warning: skipped page size 2097152 - invalid args
        CLIENT> ****** libmetal demo: shared memory ******
        metal: info: meta
        SERVER> Demo has started.
        SERVER> Shared memory test finished
        SERVER> ====== libmetal demo: atomic operation over shared memory ======
        SERVER> Starting atomic add on shared memory demo.
        l_uio_dev_open: No IRQ for device 3ed80000.shm.
        CLIENT> Setting up shared memory demo.
        CLIENT> Starting shared memory demo.
        CLIENT> Sending message: Hello World - libmetal shared memory demo
        CLIENT> Message Received: Hello World - libmetal shared memory demo
        CLIENT> Shared memory demo: Passed.


Libmetal IPC between RPU0 and RPU1 on Versal via OCM and/or DDR

  1. R5 Baremetal applications
    1. The source code for the demos and the binaries for RPU0 and RPU1 can be found in the attached zip file here. Source for the firmware can be found here.
      1. Note: the demos by default are hard-coded to use OCM for the shared-memory location. To use DDR instead, modify the SHM_BASE_ADDR macro in sys_init.c to 0x3ED40000UL.
  2. Running the Demo
    1. SLAVE> Wait for Master.
      MASTER> Setting up shared memory demo.
      SLAVE> Demo has started.
      MASTER Starting shared memory demo.
      MASTER Message Received: test
      SLAVE> OCM Shared memory test finished
      MASTER Shared memory demo: Passed
      
      

OpenAMP demos with RPMSG in userspace with Shared Buffers and virtqueues in OCM

  1. Device Tree
    1. sample device tree for ZU+ 
      1. / {
            reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;
                rproc_0_reserved: rproc@3ed00000 {
                    no-map;
                    reg = <0x0 0x3ed00000 0x0 0x1000000>;
                };
            };
            amba {
                shm0: shm@0 {
                    compatible = "shm_uio";
                    reg = <0x0 0x3ed20000 0x0 0x0100000>;
                };
                ipi0: ipi@0 {
                    compatible = "ipi_uio";
                    reg = <0x0 0xff360000 0x0 0x1000>;
                    interrupt-parent = <&gic>;
                    interrupts = <0 33 4>;
                };
                ocm0 : ocm@0 {
                      compatible = "ocm_uio";
                      reg = <0 0xfffc0000 0x0 0x4000>;
                };
            };
        };
        
        
  2. R5 Baremetal Application
    1. attached is the zipped source folder for the R5 bare-metal application
    2. changes to make in the application:
      1. platform_info.c
        1. #define SHARED_MEM_PA   0xFFFC0000UL
          #define SHARED_MEM_SIZE    0x20000UL
      2. rsc_table.c
        #define RING_TX                     0xfffc0000
        #define RING_RX                     0xfffc4000
        #define VRING_SIZE                  64
      3. linker script:
        1. + .resource_table 0x3ed20000 : {
          +     . = ALIGN(4);
          +     *(.resource_table)
          + } > psu_ddr_S_AXI_BASEADDR
          +
  3. Linux application
    1. attached is the zipped source folder for the Linux application
      1. modify open-amp application code to use OCM
        1. # within build dir of yocto or root petalinux project
          devtool modify open-amp
        2. Modify apps/machine/zynqmp/platform_info.c to update the shared buffer location, shared buffer size, vring location, and vring size:

          1. -#define    VRING_MEM_PA    0x3ED40000UL
            +#define    VRING_MEM_PA    0xFFFC0000UL
             
            -#define    SHARED_BUF_PA   0x3ED48000UL
            +#define    SHARED_BUF_PA   0xFFFC8000UL
             
            -#define    SHARED_BUF_SIZE 0x40000UL
            +#define    SHARED_BUF_SIZE 0x20000UL
             
            +#define    OCM_DEV_NAME          "fffc0000.ocm"
             
            /* in  struct remoteproc_priv rproc_priv = { add the following field : */
            +   .ocm_name = OCM_DEV_NAME,
            1. if communicating with RPU1, change IPI_CHN_BITMASK from 0x08 to 0x09 in this file too.
          1. apps/machine/zynqmp/platform_info.h - add OCM information to the remoteproc_priv struct
            1. +   struct remoteproc_mem ocm_mem; /**< shared memory */
              +   struct metal_io_region *ocm_io; /**< pointer to OCM i/o
              +                        region */
              +   const char *ocm_name; /**< shared memory device name */
              +   struct metal_device *ocm_dev; /**< pointer to OCM device */
          2. apps/machine/zynqmp/zynqmp_linux_r5_proc.c - add the following after adding shared memory
            1. +    prproc->ocm_dev = NULL;
              +    /* Get shared memory device */
              +    ret = metal_device_open(prproc->shm_bus_name, prproc->ocm_name,
              +                &dev);
              +    if (ret) {
              +        fprintf(stderr, "ERROR: failed to open ocm device: %d.\r\n", ret);
              +        goto err1;
              +    }
              +    printf("Successfully open ocm device.\r\n");
              +    prproc->ocm_dev = dev;
              +    prproc->ocm_io = metal_device_io_region(dev, 0);
              +    if (!prproc->ocm_io)
              +        goto err2;
              +    mem_pa = metal_io_phys(prproc->ocm_io, 0);
              +    remoteproc_init_mem(&prproc->ocm_mem, "ocm", mem_pa, mem_pa,
              +       metal_io_region_size(prproc->ocm_io), prproc->ocm_io);
              +    remoteproc_add_mem(rproc, &prproc->ocm_mem);
              +    printf("Successfully added OCM memory\r\n");
  4. Running the demo
    1. expected output is as follows:
      1. root@virt-versal:~# matrix_multiply-shared
        metal: info: Registered sinitializing rpmsg shared buffer pool
        REMOTE> Waiting for events...
        REMOTE> Message received
        REMOTE> Send the result of matrix multiplication back to master.
        REMOTE> Message received
        REMOTE> Stopping application...
        rovider linux_shm.
        metal: info: Registered shmem provider ion.reserved.
        metal: info: Registered shmem provider ion.ion_system_contig_heap.
        metal: info: Registered shmem provider ion.ion_system_heap.
        metal: info: metal_uio_dev_open: No IRQ for device 3ed20000.shm.
        Successfully open shm device.
        Successfully added shared memory
        metal: info: metal_uio_dev_open: No IRQ for device fffc0000.ocm.
        Successfully open ocm device.
        Successfully added OCM memory
        Successfully probed IPI device
        Successfully initialized Linux r5 remoteproc.
        Successfully initialized remoteproc
        Calling mmap resource table.
        pa 3ed20000 rsc_size RSC_MEM_SIZE 2000
        Successfully mmap resource table.
        Successfully set resource table to remoteproc.
        CLIENT> Compute thread unblocked ..
        CLIENT> It will generate two random matrices.
        CLIENT> Send to the remote and get the computation result back.
        CLIENT> It will then check if the result is expected.
        CLIENT> RPMSG endpoint is binded with remote.
        CLIENT> Input matrix 0
        5 9 9 7 8 3
        4 4 8 9 6 1
        1 5 7 1 0 2
        3 3 0 4 0 6
        3 9 0 6 1 2
        8 8 3 7 7 3
        CLIENT> Input matrix 1
        3 1 8 1 2 6
        4 5 3 1 6 3
        6 2 9 8 6 9
        4 9 0 5 7 3
        9 6 3 2 5 1
        5 8 4 5 0 9
        CLIENT> Matrix multiply: sent : 296
        CLIENT> Input matrix 0
        3 4 4 7 8 3
        2 6 5 3 4 1
        4 8 2 4 5 0
        9 4 8 3 8 3
        6 6 2 2 3 4
        1 7 0 6 4 0
        CLIENT> Input matrix 1
        9 8 6 4 2 0
        7 6 9 1 3 4
        3 2 1 1 7 9
        5 5 5 9 8 9
        3 9 8 5 5 2
        6 6 2 2 2 4
        CLIENT> Matrix multiply: sent : 296
        CLIENT> **********************************
        CLIENT> Test Results: Error count = 0
        CLIENT> **********************************
        CLIENT> Quitting application .. Matrix multiplication end
        CLIENT> Stopping application...
        root@virt-versal:~#
        

Linux Master running on APU with RPMsg in kernel space and 2 RPU slaves


Enabling Linux Drivers and other packages
Proceed as indicated in UG1186 to enable Linux remoteproc driver support and other OpenAMP packages.

Device tree:
  • Add the following to <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		rproc_0_dma: rproc@3ed40000 {
			no-map;
			compatible = "shared-dma-pool";
			reg = <0x0 0x3ed40000 0x0 0x100000>;
		};
		rproc_0_reserved: rproc@3ed00000 {
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x40000>;
		};
		rproc_1_dma: rproc@3ef40000 {
			no-map;
			compatible = "shared-dma-pool";
			reg = <0x0 0x3ef40000 0x0 0x100000>;
			};
		rproc_1_reserved: rproc@3ef00000 {
			no-map;
			reg = <0x0 0x3ef00000 0x0 0x40000>;
		};
	};

	zynqmp-rpu {
		compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
		core_conf = "split";
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		reg = <0x0 0xFF9A0000 0x0 0x10000>;
		r5_0: r5@0 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_0_reserved>, <&rproc_0_dma>;
			pnode-id = <0x18110005>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
			tcm_0_a: tcm_0@0 {
				reg = <0x0 0xFFE00000 0x0 0x10000>;
				pnode-id = <0xf>;
			};
			tcm_0_b: tcm_0@1 {
				reg = <0x0 0xFFE20000 0x0 0x10000>;
				pnode-id = <0x10>;
			};
		};

		r5_1: r5@1 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_1_reserved>, <&rproc_1_dma>;
			pnode-id = <0x18110006>;
			mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
			mbox-names = "tx", "rx";
			tcm_1_a: tcm_1@0 {
				reg = <0x0 0xFFE90000 0x0 0x10000>;
				pnode-id = <0x11>;
			};
			tcm_1_b: tcm_1@1 {
				reg = <0x0 0xFFEB0000 0x0 0x10000>;
				pnode-id = <0x12>;
			};
		};
	};

	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 29 4>;
		xlnx,ipi-id = <7>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff990600 {
			reg = <0xff3f0ac0 0x20>,
			      <0xff3f0ae0 0x20>,
			      <0xff3f0740 0x20>,
			      <0xff3f0760 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <1>;
		};

	};

	zynqmp_ipi2 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 30 4>;
		xlnx,ipi-id = <8>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU1 IPI mailbox controller */
		ipi_mailbox_rpu1: mailbox@ff990640 {
			reg = <0xff3f0b00 0x20>,
			      <0xff3f0b20 0x20>,
			      <0xff3f0940 0x20>,
			      <0xff3f0960 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <2>;
		};
	};
};
 
 
  • rebuild the device tree

    petalinux-build -c device-tree
Building remote processor demo applications with Xilinx SDK:
For RPU 0 (cortex_r5_0)
Proceed as documented in UG1186 to generate remote processor OpenAMP applications with Xilinx SDK.
RPU 0 is also used by default for the pre-built applications provided with PetaLinux BSPs.
For RPU 1 (cortex_r5_1)
Remote processor applications (echo_test, matrix multiply, rpc demo) are by default set to run on RPU 0 and need to be slightly modified for RPU 1. When RPU 1 is selected in Xilinx SDK, the generated code needs to be modified as follows:
  • check that shared memory, interrupts, and linker memory regions do not overlap with the RPU0↔APU application, and that the correct TCM banks are used (i.e., TCM 1 A and B for RPU1)
  • check that linker memory regions align with device tree memory regions

Example: Running two echo_test application concurrently on Linux, each communicating to a separate RPU
  1. Use PetaLinux to build/boot your target and then login to Linux console serial port.
  2. If you haven't added the remote processor firmware applications to your Linux root filesystem (see chapter 3 of UG1186):
  3. These applications can be tftp'd into the target directory /lib/firmware
  4. Check that the remoteproc driver is already loaded (normally it is if your device tree is properly configured):


root@plnx_aarch64:/lib/firmware# lsmod
    Tainted: G
virtio_rpmsg_bus 20480 0 - Live 0xffffff800098e000
rpmsg_core 16384 1 virtio_rpmsg_bus, Live 0xffffff800097c000
zynqmp_r5_remoteproc 16384 0 - Live 0xffffff800096a000
remoteproc 40960 1 zynqmp_r5_remoteproc, Live 0xffffff8000959000
virtio 16384 2 virtio_rpmsg_bus,remoteproc, Live 0xffffff8000951000
virtio_ring 20480 2 virtio_rpmsg_bus,remoteproc, Live 0xffffff8000948000
uio_pdrv_genirq 16384 0 - Live 0xffffff8000940000
  1. Start RPU-0:

    root@plnx_aarch64:/lib/firmware# echo image_echo_test_r5_0 > /sys/class/remoteproc/remoteproc0/firmware
    root@plnx_aarch64:/lib/firmware#
    root@plnx_aarch64:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc0/state
    root@plnx_aarch64:/lib/firmware#
    [70982.961635] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
    [70982.971366] remoteproc remoteproc0: Booting fw image image_echo_test_r5_0, size 638724
    [70982.985672] virtio_rpmsg_bus virtio0: rpmsg host is online
    [70982.993691] remoteproc remoteproc0: registered virtio0 (type 7)
    [70983.002197] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
    [7Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    0983.012367] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
    [70983.032821] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
    [70983.043731] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: rpmsg_user_dev_rpmsg_drv_probe
    root@plnx_aarch64:/lib/firmware# [70983.055535] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: new channel: 0x400 -> 0x1!
  2. Start RPU-1:

    root@plnx_aarch64:/lib/firmware# echo image_echo_test_r5_1 > /sys/class/remoteproc/remoteproc1/firmware
    root@plnx_aarch64:/lib/firmware#
    root@plnx_aarch64:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc1/state 
    [71185.157615] remoteproc remoteproc1: powering up ff9a0200.zynqmp_r5_rproc
    [71185.167453] remoteproc remoteproc1: Booting fw image image_echo_test_r5_1, size 639140
    [71185.182180] virtio_rpmsg_bus virtio1: rpmsg host is online
    [71185.190226] remoteproc remoteproc1: registered virtio1 (type 7)
    [71185.198724] zynqmp_r5_remoteproc ff9a0200.zynqmp_r5_rproc: RPU boot from TCM.
    [7Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    1185.208915] remoteproc remoteproc1: remote processor ff9a0200.zynqmp_r5_rproc is now up
    [71185.229420] virtio_rpmsg_bus virtio1: creating channel rpmsg-openamp-demo-channel addr 0x1
    [71185.240367] rpmsg_user_dev_driver virtio1:rpmsg-openamp-demo-channel: rpmsg_user_dev_rpmsg_drv_probe
    root@plnx_aarch64:/lib/firmware# [71185.252200] rpmsg_user_dev_driver virtio1:rpmsg-openamp-demo-channel: new channel: 0x400 -> 0x1!
  3. Run the echo_test Linux application with RPU-0 using either the serial port or a telnet or ssh connection:

    root@plnx_aarch64:/lib/firmware# echo_test
     Echo test start 
     Open rpmsg dev! 
    [71507.962881] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: Sent init_msg to target 0x1.
  4. Run a concurrent echo_test Linux application with RPU-1 using another connection (telnet, ssh...):

    root@plnx_aarch64:/lib/firmware# echo_test -d /dev/rpmsg1 
     Echo test start 
     Open rpmsg dev! 

Note: The order in which you start the RPU determines which /dev/rpmsgX device is being used with that RPU.
In the above case /dev/rpmsg0 is used for RPU-0.
If however RPU-1 was started first, it would have been associated with /dev/rpmsg0 and RPU-0 would have been using /dev/rpmsg1.

Linux Master running on APU with RPMsg in userspace and 2 RPU slaves

  1. Add the following device tree content to <PetaLinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        /* Reserved memory for both firmware and shared memory */
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x8000000>;
        };
    };
 
    amba {
 
 
        shm0: shm@0 {
            compatible = "shm_uio";
            reg = <0x0 0x3ed20000 0x0 0x100000>;
        };
        shm1: shm@1 {
            compatible = "shm_uio";
            reg = <0x0 0x3fd20000 0x0 0x100000>;
        };
 
        ipi0: ipi@0 {
            compatible = "ipi_uio";
            reg = <0x0 0xff340000 0x0 0x1000>;
            interrupt-parent = <&gic>;
            interrupts = <0 29 4>;
        };
        ipi1: ipi@1 {
            compatible = "ipi_uio";
            reg = <0x0 0xff350000 0x0 0x1000>;
            interrupt-parent = <&gic>;
            interrupts = <0 30 4>;
        };
 
    };
};
 
 

2. Enable the OpenAMP and libmetal packages with "petalinux-config -c rootfs":

Filesystem Packages --->
   libs --->
       libmetal   --->
         [*] libmetal
       open-amp   --->
         [*] open-amp
 
 

Building Linux OpenAMP user space application with XSDK

  • Create Empty Application for Linux and for A53
    • OS: Linux
    • Processor: psu_cortexa53
    • Linux sysroot: the sysroot from the PetaLinux project:
      • <plnx proj>/images/linux/sdk/sysroots/aarch64-xilinx-linux
    • Once project is built, select properties:
      • C/C++ Build --> Settings
        • Tool Settings Tab Libraries
          • Libraries (-l): add "metal" and "open_amp"
          • Miscellaneous
            • in Linker Flags, add --sysroot=<plnx proj>/images/linux/sdk/sysroots/aarch64-xilinx-linux
    • Copy the following files for OpenAMP application for ZynqMP into the directory:
      • platform_info.c, platform_info.h, rsc_table.c and rsc_table.h from here
      • helper.c from here
    • And the Linux files for one of the three OpenAMP applications in the link:
      • OpenAMP echo-test
      • OpenAMP matrix multiplication Demo
      • OpenAMP RPC Demo
  • When building the application note a few configuration parameters:
    • If building the Linux application to communicate with RPU 1:
      • in platform_info.c, change IPI_MASK to 0x200
    • Update the following inside of the Linux application's platform_info.c to reflect possible changes to the device tree nodes:
      • IPI_DEV_NAME
      • SHM_DEV_NAME
    • That is, if the new shared-memory entry in the device tree is at 0x3fd20000, the string for SHM_DEV_NAME should be "3fd20000.shm", as shown in sysfs on the target.
    • Update RING_TX and RING_RX to reflect the vring entry in the device tree.

Install XSDK-built Linux applications in the PetaLinux project
The Linux applications can be installed via a yocto recipe as follows:

SUMMARY = "Simple test application"
SECTION = "PETALINUX/apps"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file://<myapp>"
S = "${WORKDIR}"
INSANE_SKIP_${PN} = "arch"
do_install() {
             install -d ${D}/lib/firmware
             install -m 0644 ${S}/<myapp> ${D}/lib/firmware/<myapp>
}
FILES_${PN} = "/lib/firmware/<myapp>"

Building remote processor demo applications to run on RPU 0 (cortex_r5_0) with Xilinx SDK
Proceed as documented in UG1186 to generate remote processor OpenAMP applications with Xilinx SDK.

Building remote processor demo applications to run on RPU 1 (cortex_r5_1) with Xilinx SDK

Remote processor applications (echo_test, matrix multiply, rpc demo) are by default set to run on RPU 0 and need to be slightly modified for RPU 1.

  • When RPU 1 is selected in Xilinx SDK, the generated code needs to be modified as follows:
  • Edit rsc_table.c so that RING_TX and RING_RX fall within the memory of the shared-memory device tree node.
  • Check that the linker script addresses are within reserved memory but do not overlap.
  • Check that the APU application kicks the RPU-associated IPI using the correct bitmask. For example, if RPU1 is using IPI1, the bitmask the APU uses to kick RPU1 should correspond to that IPI.
  • Check that the RPU application uses IPI IRQ VECT ID 64.
  • Check that the RPU application uses the IPI it is associated with.
  • Check that the RPU application's bitmask corresponds to its APU application's IPI.

Example: Running two proxy_app-openamp applications concurrently on Linux, each communicating with one RPU

1. Generate BOOT.BIN with RPU firmware built from XSDK. Below is a BIF example:

the_ROM_image:
{
[fsbl_config] a53_x64
[bootloader] <plnx project>/images/linux/zynqmp_fsbl.elf
[destination_device=pl] <plnx project>/images/linux/download.bit
[destination_cpu=pmu] <plnx project>/images/linux/pmufw.elf
[destination_cpu=r5-0] /path/to/openamp_firmware_on_rpu0.elf
[destination_cpu=r5-1] /path/to/openamp_firmware_on_rpu1.elf
[destination_cpu=a53-0, exception_level=el-3, trustzone] <plnx project>/images/linux/bl31.elf
[destination_cpu=a53-0, exception_level=el-2] <plnx project>/images/linux/u-boot.elf
}

2. When booting, the RPU firmware for RPU 0 and RPU 1 should produce output resembling the following:

Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...

3. Run the proxy_app-openamp Linux application against RPU-0 using either the serial port or a telnet or ssh connection:

root@plnx_aarch64# proxy_app-openamp
metal: info: Initializating I/Os redirection...
metal: info: metal_uio_dev_open: No IRQ for device 3ed20000.shm.
Master> Remote proc resource initialized.
Master> RPMSG channel has created.
Remote>Baremetal Remote Procedure Call (RPC) Demonstration

4. Run a concurrent proxy_app-openamp Linux application against RPU-1 using another connection (telnet, ssh, ...):

root@xilinx-zcu102-2019_1:~# linux_proxy_userspace_to_r5_1.elf
metal: info: Initializating I/Os redirection...
metal: info: metal_uio_dev_open: No IRQ for device 3ef00000.vring.
metal: info: metal_uio_dev_open: No IRQ for device 3ef00000.vring.
metal: info: metal_uio_dev_open: No IRQ for device 3ef40000.shm.
Master> Remote proc resource initialized.
Master> RPMSG channel has created.
Remote>Baremetal Remote Procedure Call (RPC) Demonstration

ZynqMP Linux Master running on APU with RPMsg in kernel space and one RPU slave.

When running the RPUs in split mode with only one RPU as an OpenAMP slave, the second RPU can still run another, non-OpenAMP application.

  • RPU-0 slave:
    • PetaLinux BSPs provide a default template to generate a DTB with support for OpenAMP running on RPU-0, see:
      • <PetaLinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/openamp.dtsi
    • Add its content to the file <PetaLinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
  • RPU-1 slave:
    • Proceed as for the two-RPU configuration above and edit your device tree to remove the unused 'zynqmp_r5_rproc' entry and associated nodes (tcm, pd, ...) that may no longer be needed.

ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware

Overview

The information below is intended to guide users who wish to run Linux on the APU and a bare-metal/RTOS application on the RPU. In this configuration the FSBL starts the software running on the APU, and Linux on the APU then uses remoteproc to load and start the RPU firmware.

To Boot RPU Firmware via APU with Linux

These instructions assume the user has already generated firmware for the RPU and that the user is using PetaLinux to create their embedded Linux solution.

  • As directed in UG 1186, create an application inside of the PetaLinux project to install the firmware into the Linux host's file system in /lib/firmware. To create a template for a yocto recipe to install the firmware, do the following:
    • Create yocto application inside of PetaLinux project
petalinux-create -t apps --template install -n <app_name> --enable
    • Copy the firmware (.elf file) into the project-spec/meta-user/recipes-apps/<app_name>/files/ directory
    • Modify project-spec/meta-user/recipes-apps/<app_name>/<app_name>.bb to install the remote processor firmware in the RootFS as follows:
SUMMARY = "Simple test application"
SECTION = "PETALINUX/apps"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file://<myfirmware>"
S = "${WORKDIR}"
INSANE_SKIP_${PN} = "arch"
do_install() {
  install -d ${D}/lib/firmware
  install -m 0644 ${S}/<myfirmware> ${D}/lib/firmware/<myfirmware>
}
FILES_${PN} = "/lib/firmware/<myfirmware>"
 
 
  • Build Linux images with the "petalinux-build" command inside the PetaLinux project.
  • Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
	
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		rproc_0_dma: rproc@3ed40000 {
			no-map;
			compatible = "shared-dma-pool";
			reg = <0x0 0x3ed40000 0x0 0x100000>;
		};
		rproc_0_reserved: rproc@3ed00000 {
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x40000>;
		};

	};

	zynqmp-rpu {
		compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
		core_conf = "split";
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		reg = <0x0 0xFF9A0000 0x0 0x10000>;
		r5_0: r5@0 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_0_reserved>, <&rproc_0_dma>;
			pnode-id = <0x7>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
			tcm_0_a: tcm_0@0 {
				reg = <0x0 0xFFE00000 0x0 0x10000>;
				pnode-id = <0xf>;
			};
			tcm_0_b: tcm_0@1 {
				reg = <0x0 0xFFE20000 0x0 0x10000>;
				pnode-id = <0x10>;
			};
		};

	};

	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 33 4>;
		xlnx,ipi-id = <5>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff990600 {
			reg = <0xff3f0ac0 0x20>,
			      <0xff3f0ae0 0x20>,
			      <0xff3f0740 0x20>,
			      <0xff3f0760 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <3>;
		};
	};
};

Run the following to build your PetaLinux project.

petalinux-build

After booting the PetaLinux project, run the following to load and start the RPU firmware:

echo <name of firmware> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
  • Run the Linux application
  • When finished, stop the firmware: echo stop > /sys/class/remoteproc/remoteproc0/state

ZynqMP Linux loads RPU, Linux OpenAMP Application talks to RPU OpenAMP Application

Overview

Setting up Remote Firmware

The remote firmware can, for example, reuse the structure of the OpenAMP RPU applications created in the "Building Remote Applications" sections of UG1186.

To Boot RPU Firmware via APU with Linux

These instructions assume the user has already generated firmware for the RPU and that the user is using PetaLinux to create their embedded Linux solution.

  • As directed in UG 1186, create a yocto recipe inside of the PetaLinux project to install the firmware into the Linux host's file system in /lib/firmware.

Refer to the previous example: "ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware" for a guide on how to create such a yocto recipe.

  • Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		rproc_0_dma: rproc@3ed40000 {
			no-map;
			compatible = "shared-dma-pool";
			reg = <0x0 0x3ed40000 0x0 0x100000>;
		};
		rproc_0_reserved: rproc@3ed00000 {
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x40000>;
		};

	};

	zynqmp-rpu {
		compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
		core_conf = "split";
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;
		reg = <0x0 0xFF9A0000 0x0 0x10000>;
		r5_0: r5@0 {
			#address-cells = <2>;
			#size-cells = <2>;
			ranges;
			memory-region = <&rproc_0_reserved>, <&rproc_0_dma>;
			pnode-id = <0x7>;
			mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
			mbox-names = "tx", "rx";
			tcm_0_a: tcm_0@0 {
				reg = <0x0 0xFFE00000 0x0 0x10000>;
				pnode-id = <0xf>;
			};
			tcm_0_b: tcm_0@1 {
				reg = <0x0 0xFFE20000 0x0 0x10000>;
				pnode-id = <0x10>;
			};
		};

	};

	zynqmp_ipi1 {
		compatible = "xlnx,zynqmp-ipi-mailbox";
		interrupt-parent = <&gic>;
		interrupts = <0 29 4>;
		xlnx,ipi-id = <7>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* APU<->RPU0 IPI mailbox controller */
		ipi_mailbox_rpu0: mailbox@ff990600 {
			reg = <0xff3f0ac0 0x20>,
			      <0xff3f0ae0 0x20>,
			      <0xff3f0740 0x20>,
			      <0xff3f0760 0x20>;
			reg-names = "local_request_region",
				    "local_response_region",
				    "remote_request_region",
				    "remote_response_region";
			#mbox-cells = <1>;
			xlnx,ipi-id = <1>;
		};
	};
};

Build PetaLinux with required packages
1. Enable the required packages with PetaLinux tools. For example, go to your PetaLinux project's topmost directory and start the rootfs configuration utility:

petalinux-config -c rootfs

2. Enable the required rootfs packages for the application. If you are running the sample applications from UG1186, the packages would be enabled by the following:

Filesystem Packages
--> libs
--> libmetal
--> [ * ] libmetal
--> openamp
--> [ * ] open-amp
--> misc
--> openamp-fw-echo-testd
--> [ * ] openamp-fw-echo-testd
--> openamp-fw-mat-muld
--> [ * ] openamp-fw-mat-muld
--> openamp-fw-rpc-demod
--> [ * ] openamp-fw-rpc-demod
--> rpmsg-echo-test
--> [ * ] rpmsg-echo-test
--> rpmsg-mat-mul
--> [ * ] rpmsg-mat-mul
--> rpmsg-proxy-app
--> [ * ] rpmsg-proxy-app


3. Then build the PetaLinux project.

petalinux-build

Load and start the firmware:

echo <fw_name> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state

Stop the firmware:

echo stop > /sys/class/remoteproc/remoteproc0/state

ZynqMP APU Linux communicating with the RPU via Shared Memory without OpenAMP

Overview

The information below is intended to guide users who wish to set up Linux on the APU alongside a bare-metal or RTOS application on the RPU. We assume that Linux and the RPU communicate via shared memory. IPIs can be used to further coordinate communication between the processors; their use is documented elsewhere on this page.

  • Device Tree Settings for Linux
    • To make the shared memory device accessible to Linux running on APU, there must be some modifications in the device tree.
    • For example, if configuring the device tree for the OpenAMP echo_test demo found here , a shared memory node is placed in the amba section. E.g.
/{
  amba {
    /* UIO device node for shared memory device memory */
    shm0: shm@0 {
      compatible = "shm_uio";
      reg = <0x0 0x3ed80000 0x0 0x80000>;
    };
  };
};

Configuring the PetaLinux project

The OpenAMP applications use libmetal to access shared memory, so the libmetal package must be enabled in your PetaLinux project. It can be enabled in the rootfs configuration via the petalinux-config utility.

run:

petalinux-config -c rootfs

and then in the utility enable the following packages:

Filesystem Packages
--> libs
--> libmetal
--> [ * ] libmetal
--> openamp
--> [ * ] open-amp
--> misc
--> openamp-fw-echo-testd
--> [ * ] openamp-fw-echo-testd
--> openamp-fw-mat-muld
--> [ * ] openamp-fw-mat-muld
--> openamp-fw-rpc-demod
--> [ * ] openamp-fw-rpc-demod
--> rpmsg-echo-test
--> [ * ] rpmsg-echo-test
--> rpmsg-mat-mul
--> [ * ] rpmsg-mat-mul
--> rpmsg-proxy-app
--> [ * ] rpmsg-proxy-app


Communicating via Shared memory

The information below assumes that the shared-memory node is visible in Linux userspace.

Using the Libmetal API, we can read from and write to shared memory with the following functions:

static inline uint64_t metal_io_read(struct metal_io_region *io, unsigned long offset, memory_order order, int width);
int metal_io_block_read(struct metal_io_region *io, unsigned long offset, void *restrict dst, int len);

and 

static inline void metal_io_write(struct metal_io_region *io, unsigned long offset, uint64_t value, memory_order order, int width);
int metal_io_block_write(struct metal_io_region *io, unsigned long offset, const void *restrict src, int len);


An example showing the use of these functions in Linux userspace can be found  here .  At the link are some examples showing the use of reading from, and writing to shared memory as well as initialization and cleanup of Libmetal resources.
How to Generate BOOT.BIN

This section assumes that you have already run petalinux-build in your PetaLinux project to build all the necessary components of your embedded Linux solution, in addition to the firmware to run on an RPU. Use PetaLinux tools to construct the BOOT.BIN that you can then put on an SD card to boot your ZynqMP board. Below is a sample bootgen.bif file that you can create or modify in the top-level directory of your PetaLinux project to help construct the BOOT.BIN:

the_ROM_image:
{
[fsbl_config] a53_x64
[bootloader, destination_cpu=a53-0] ./images/linux/zynqmp_fsbl.elf
[pmufw_image, destination_cpu=a53-0] ./images/linux/pmufw.elf
[destination_cpu=a53-0, exception_level=el-3, trustzone] ./images/linux/bl31.elf
[destination_cpu=a53-0, exception_level=el-2] ./images/linux/u-boot.elf
}

Using this .bif file and PetaLinux tools, build a BOOT.BIN file for your ZynqMP board:

petalinux-package --boot --force --u-boot ./images/linux/u-boot.elf --cpu r5-0 --add /path/to/firmware

This command illustrates a few things:

  •  The --cpu option specifies which RPU the data file (your firmware) targets: r5-0, r5-1, or r5-lockstep.
  •  The --add option's argument specifies the path to your firmware.
  •  The --force option overwrites any existing BOOT.BIN file in the current directory.
  •  The --u-boot option specifies the location of u-boot.elf.

Feature Changes

Module Name | Change | Link
OpenAMP echo test and matrix multiplication demos | Run without manual input from the user; instead, add a -n command line option to specify the number of iterations | Link to patch
Xen Dom0 and DomU support for OpenAMP running in RPMsg userspace | Support for these two configurations in 2018.1 |
RPMsg char driver support (experimental) | Able to run RPMsg OpenAMP demos using the rpmsg char driver; the patch enabling this is not yet upstream | Link to patch
libmetal MicroBlaze baremetal support | Add MicroBlaze baremetal support to libmetal | Link to source code
Fix for OpenAMP echo demo on RPU0 and RPU1 hanging for large payload numbers | Update the GIC driver in embeddedsw so that relevant interrupts can be mapped or unmapped to a CPU | Link to patch