Linux ZynqMP PS-PCIe Root Port Driver

This page gives an overview of the Root Port driver for the controller for PCI Express, which is available as part of the ZynqMP processing system.

Controller for PCI Express

PCI Express (abbreviated as PCIe) is a high-speed serial bus standard designed to replace the older PCI/PCI-X and AGP standards. PCIe is used in server, consumer, and industrial applications, either as a motherboard-level interconnect to link peripherals or as an expansion-card interface for add-on boards. The Zynq® UltraScale+™ MPSoC provides a controller for PCI Express consisting of a PCI Express® v2.1-compliant integrated block, an AXI-PCIe bridge, and DMA modules. The AXI-PCIe bridge provides high-performance bridging between PCIe and AXI.

The controller for PCIe supports both Endpoint and Root Port modes of operation and supports links up to x4 wide at Gen2 speed.

For more information about the controller for PCI Express, please refer to the Zynq UltraScale+ MPSoC TRM (UG1085).

Hardware Setup

The details here are targeted at the ZCU102 hardware platform.
The PetaLinux ZCU102 BSP provides an x2 Gen2 FSBL by default. For other link configurations, an appropriate FSBL should be generated via the PCW in Vivado.



Tested Endpoint cards:
1. Broadcom PCIe NIC card
2. Realtek NIC card
3. KC705, KCU105, VCU108 with PIO designs (Xilinx PCIe Endpoint Example designs)
4. Intel NVMe SSD
5. Intel NIC card
6. PCIe-SATA
7. PCIe-USB
8. PLX Switch with Endpoint

Root Port Driver Configuration

PCI/PCIe subsystem support and the Root Port driver are enabled by default in the ZynqMP kernel configuration, and the related code is always built into the kernel. The user therefore does not need to change anything in the configuration files to bring PCIe support into the ZynqMP kernel. The driver is built with Message Signaled Interrupts (MSI) support in the default configuration.

The driver is available at:
https://github.com/Xilinx/linux-xlnx/blob/master/drivers/pci/controller/pcie-xilinx-nwl.c
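
For reference, the relevant options in the kernel configuration are sketched below; exact symbol names can vary across kernel versions:

    # PCIe Root Port support for the ZynqMP NWL bridge, with MSI
    CONFIG_PCI=y
    CONFIG_PCI_MSI=y
    CONFIG_PCIE_XILINX_NWL=y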

End Point Driver Configuration


This page demonstrates the Root Port driver using a Broadcom NIC Endpoint, for which the corresponding NIC driver should be enabled in the kernel, as in the sketch below.
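
As an illustrative sketch, assuming a Broadcom Tigon3-based NIC (the exact driver depends on the specific card):

    # Device Drivers -> Network device support -> Ethernet driver support
    #   -> Broadcom devices -> Broadcom Tigon3 support
    CONFIG_TIGON3=y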
The following driver needs to be enabled to use an NVMe SSD as an Endpoint.
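
A sketch of the corresponding option, the NVM Express block driver:

    # Device Drivers -> NVME Support -> NVM Express block device
    CONFIG_BLK_DEV_NVME=y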
For other Endpoint cards, ensure the respective driver is loaded into the kernel.

Rootfs Configuration


Enable the lspci utility in the root file system under "Filesystem Packages -> console/utils -> pciutils -> libpci3/pciutils/pciutils-ids/pciutils-lic".
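
In a PetaLinux build this is done from the rootfs configuration menu, roughly as follows:

    petalinux-config -c rootfs
    # Navigate to: Filesystem Packages -> console/utils -> pciutils
    # and select the pciutils packages listed above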

Device Tree binding


The device tree node for the ZynqMP PCIe core is generated automatically by the Device Tree BSP when the core is configured in the hardware design.

Steps to generate the device tree are documented here:
http://www.wiki.xilinx.com/Build+Device+Tree+Blob

A sample binding is shown below; the individual DT properties are described in the kernel binding document (Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt).
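
Below is a sketch of a ZynqMP PS-PCIe Root Port node, modeled on the upstream binding example for pcie-xilinx-nwl; register addresses, interrupt numbers, and ranges are design-dependent and come from the generated device tree:

    pcie: pcie@fd0e0000 {
        compatible = "xlnx,nwl-pcie-2.11";
        status = "okay";
        #address-cells = <3>;
        #size-cells = <2>;
        #interrupt-cells = <1>;
        msi-controller;
        device_type = "pci";
        interrupt-parent = <&gic>;
        /* misc, dummy, intx, msi1, msi0 interrupt lines */
        interrupts = <0 118 4>, <0 117 4>, <0 116 4>, <0 115 4>, <0 114 4>;
        interrupt-names = "misc", "dummy", "intx", "msi1", "msi0";
        msi-parent = <&pcie>;
        /* bridge registers, controller registers, ECAM config space */
        reg = <0x0 0xfd0e0000 0x0 0x1000>,
              <0x0 0xfd480000 0x0 0x1000>,
              <0x80 0x00000000 0x0 0x1000000>;
        reg-names = "breg", "pcireg", "cfg";
        /* 32-bit non-prefetchable memory window */
        ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>;
        interrupt-map-mask = <0x0 0x0 0x0 0x7>;
        interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc 0x1>,
                        <0x0 0x0 0x0 0x2 &pcie_intc 0x2>,
                        <0x0 0x0 0x0 0x3 &pcie_intc 0x3>,
                        <0x0 0x0 0x0 0x4 &pcie_intc 0x4>;
        pcie_intc: legacy-interrupt-controller {
            interrupt-controller;
            #address-cells = <0>;
            #interrupt-cells = <1>;
        };
    };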


Test Procedure

  • Load Linux onto the ZCU102.
  • After Linux boots successfully, the Broadcom NIC Endpoint driver can be seen being probed. Run the command ‘lspci’ from the user prompt; it shows the device ID and vendor ID of the Broadcom NIC card, which confirms that the device enumerated correctly.
  • The Broadcom NIC shows up as the eth1 interface in Linux. Run the command ‘ifconfig eth1 up’ to bring the Ethernet interface up. This step ensures that memory transactions on the PCIe bus are working.
  • Assign an IP address, either static or via DHCP.
  • Ping a device with a known IP address.
  • With an MSI-enabled Linux image, an increase in the MSI interrupt count can be observed (cat /proc/interrupts). A command sketch for these steps follows this list.
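
A minimal sketch of the commands used in these steps, assuming the NIC enumerates as eth1 and using 192.168.1.1 as a placeholder peer address (udhcpc is the BusyBox DHCP client typically present in a PetaLinux rootfs):

    lspci                       # verify the Endpoint enumerated
    ifconfig eth1 up            # bring the interface up
    udhcpc -i eth1              # or assign a static address with ifconfig
    ping -c 4 192.168.1.1       # placeholder peer IP; use a known host
    cat /proc/interrupts        # MSI counts should increase with traffic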

Kernel Console Output

The console output at each stage confirms the setup: driver initialization for ZynqMP, Broadcom NIC card probing, lspci output, the Ethernet interface coming up, testing of the Ethernet interface, and MSI interrupts.

Note:
AER services with the PS PCIe as Root Port have not been tested yet.
