Versal Adaptive SoC CCIX-PCIe Module (CPM) Root port Linux driver

This page provides details for using the Root Port Linux Driver for the Versal Integrated Block for PCIe (CPM4/CPM5).

Introduction

The Versal Adaptive SoC devices include the CCIX-PCIe Module (CPM). The integrated block for CPM4/CPM5 PCIe, along with the integrated bridge, can function as a PCIe Root Port with up to a Gen4 x16 link configuration for CPM4 and a Gen5 x8 link for CPM5.

There are two integrated PCIe controllers (each capable of a maximum x8 link width), and only one of them has access to the integrated bridge required for root mode. This page discusses the use of the integrated PCIe controller with the integrated bridge in CPM4/CPM5 in root mode.

The integrated bridge has connectivity to memory via the NoC and to the PS via an AXI-MM interface.

The interrupts (legacy/MSI/MSI-X) received from endpoints are routed to the PS. The GIC Interrupt Translation Service (ITS) feature is enabled to process MSI/MSI-X interrupts.

The Linux CPM driver (pcie-xilinx-cpm.c) provides host controller support for Xilinx Versal ACAP devices. This driver manages PCIe connectivity and implements the Linux PCI host bridge infrastructure for CPM-based systems (i.e., CPM4 and CPM5), whereas CPM6 uses the pcie-amd-mdb.c driver. This driver does not support PL-based PCIe implementations such as QDMA or XDMA.

For more information, refer to PG346 and PG347.

Features Supported in HW

  • ECAM for enumeration

  • Non-Pre-fetchable memory (32-bit)

  • Pre-fetchable memory (64-bit)

  • Legacy interrupts

  • MSI/MSI-X via GIC ITS feature

Known Issues:

  1. For CPM4, refer to AR72972 and AR76652 (the latter applies to releases 2021.1 and later).

  2. For CPM5, refer to AR000033976: for 2022.1, both driver and device tree patches are needed; for 2022.2, only the device tree patch is needed because the driver is upstreamed.

Features Supported in Driver

  • Enumeration of PCIe hierarchy

  • Assign Non-Pre-fetchable memory

  • Assign Pre-fetchable memory

  • Handle legacy interrupt

Driver Use Cases and Applications

Primary Use Case

This driver should be used when:

  • Running Linux on a Xilinx Versal ACAP device.

  • The Versal device acts as a PCIe Root Complex/Host.

  • Users need to connect external PCIe devices (endpoints) to the Versal system.

  • The system uses CPM hard IP blocks for PCIe connectivity.

Target Deployment

  • This driver runs on the host OS (Linux on Versal) and is not intended for endpoint devices; it is specifically designed as a host/root complex driver.

  • Suitable for embedded systems.

  • Real-time error detection and reporting

Supported Hardware Configurations

This driver supports multiple CPM variants:

| CPM Version | Compatible String         | Description                            |
|-------------|---------------------------|----------------------------------------|
| CPM         | xlnx,versal-cpm-host-1.00 | Original CPM implementation            |
| CPM5        | xlnx,versal-cpm5-host     | Enhanced CPM5 with additional features |
| CPM5_HOST1  | xlnx,versal-cpm5-host1    | Secondary CPM5 host instance           |

Kernel Configuration

  1. For CPM4, enable the CONFIG_PCIE_XDMA_PL option up to the 2021.1 release. From 2021.2, use the CONFIG_PCIE_XILINX_CPM driver (this is the upstreamed driver).

  2. For CPM5, enable CONFIG_PCIE_XILINX_CPM.
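For example, a minimal kernel configuration fragment for testing a CPM5 root port with an NVMe endpoint (option names as referenced on this page; exact dependencies vary by release) could look like:

```
CONFIG_PCI=y
CONFIG_PCIE_XILINX_CPM=y
CONFIG_BLK_DEV_NVME=y
```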

Driver Features and Capabilities

Interrupt Management

The driver provides comprehensive interrupt support:

// Example interrupt types supported
static const struct {
    const char *sym;
    const char *str;
} intr_cause[32] = {
    _IC(LINK_DOWN,       "Link Down"),
    _IC(HOT_RESET,       "Hot reset"),
    _IC(CORRECTABLE,     "Correctable error message"),
    _IC(NONFATAL,        "Non fatal error message"),
    _IC(FATAL,           "Fatal error message"),
    _IC(CFG_TIMEOUT,     "ECAM access timeout"),
    // ... additional interrupt types
};

Interrupt Support Details

  • Legacy INTx: Full support with dedicated IRQ domain

  • MSI/MSI-X: Supported through standard Linux PCI MSI framework

  • Error Interrupts: Comprehensive error reporting and handling

  • Event Interrupts: Link state changes, power management events

Link Control

// Link status monitoring
static bool cpm_pcie_link_up(struct xilinx_cpm_pcie *port)
{
    return (pcie_read(port, XILINX_CPM_PCIE_REG_PSCR) &
            XILINX_CPM_PCIE_REG_PSCR_LNKUP);
}

Device Tree

cpm_pciea: pci@fca10000 {
    #address-cells = <3>;
    #interrupt-cells = <1>;
    #size-cells = <2>;
    compatible = "xlnx,versal-cpm-host-1.00";
    interrupt-map = <0 0 0 1 &pcie_intc_0 1>,
                    <0 0 0 2 &pcie_intc_0 2>,
                    <0 0 0 3 &pcie_intc_0 3>,
                    <0 0 0 4 &pcie_intc_0 4>;
    interrupt-map-mask = <0 0 0 7>;
    interrupt-parent = <&gic>;
    interrupt-names = "misc";
    interrupts = <0 72 4>;
    ranges = <0x02000000 0x00000000 0xE0000000 0x0 0xE0000000 0x00000000 0x10000000>,
             <0x43000000 0x00000080 0x00000000 0x00000080 0x00000000 0x00000000 0x80000000>;
    msi-map = <0x0 &its_gic 0x0 0x10000>;
    reg = <0x6 0x00000000 0x0 0x1000000>,
          <0x0 0xFCA10000 0x0 0x1000>;
    reg-names = "cfg", "cpm_slcr";

    pcie_intc_0: pci-interrupt-controller {
        #address-cells = <0>;
        #interrupt-cells = <1>;
        interrupt-controller;
    };
};

Key Device Tree Properties

This section describes the key device tree properties of the example node above.

  1. Device Tree Node: cpm_pciea: pci@fca10000

    1. This node describes the CPM PCIe Host Controller in a Versal ACAP device. It configures how the Linux kernel interacts with the PCIe hardware block.

  2. Compatible: "xlnx,versal-cpm-host-1.00";

    1. Matches with the driver that supports the Versal CPM PCIe Host Controller.

  3. reg and reg-names:

    reg = <0x6 0x00000000 0x0 0x1000000>,
          <0x0 0xFCA10000 0x0 0x1000>;
    reg-names = "cfg", "cpm_slcr";
    1. reg: defines memory-mapped regions:

      • cfg: PCIe configuration space.

      • cpm_slcr: System-level control and status registers for CPM.

    2. These are used by the driver to access hardware registers.

  4. #address-cells, #size-cells, #interrupt-cells:

    1. Define how the addresses and interrupt specifiers of this node's children (e.g., the interrupt controller) are encoded.

  5. ranges:

    ranges = <0x02000000 0x00000000 0xE0000000 0x0 0xE0000000 0x00000000 0x10000000>,
             <0x43000000 0x00000080 0x00000000 0x00000080 0x00000000 0x00000000 0x80000000>;
    1. Ranges property format represented below:

      1. For 1st entry:

        <PCIe address space type> <PCIe high addr> <PCIe low addr> <CPU high addr> <CPU low addr> <size high> <size low>
      2. 0x02000000: PCIe Memory space (Non-prefetchable)

      3. 0x00000000 0xE0000000: PCIe address = 0xE0000000

      4. 0x0 0xE0000000: CPU/system address = 0xE0000000

      5. 0x00000000 0x10000000: Size = 0x10000000 (256 MB)

    2. Means: This maps 256 MB of PCIe memory space starting at 0xE0000000 to the same address in the CPU's address space. This is typically used for BAR regions of PCIe devices (e.g., NVMe controller registers or buffers).

    3. For 2nd Entry:

      <PCIe address space type> <PCIe high addr> <PCIe low addr> <CPU high addr> <CPU low addr> <size high> <size low>
      1. 0x43000000: PCIe pre-fetchable memory space

      2. 0x00000080 0x00000000: PCIe address = 0x8000000000

      3. 0x00000080 0x00000000: CPU/system address = 0x8000000000

      4. 0x00000000 0x80000000: Size = 0x80000000 (2 GB)

    4. Means: This maps 2 GB of pre-fetchable PCIe memory starting at 0x8000000000 to the same address in the CPU's address space.

    5. Maps PCIe address space to CPU address space.

    6. Enables translation between PCIe and system memory for DMA and BAR access.

  6. interrupts and interrupt-names:

    interrupts = <0 72 4>; interrupt-names = "misc";
    1. Defines the interrupt line used by the CPM PCIe controller.

    2. "misc" is a label for this interrupt.

  7. interrupt-parent = <&gic>:

    1. Specifies Generic Interrupt Controller (GIC) as interrupt controller.

  8. interrupt-map and interrupt-map-mask:

    interrupt-map = <...>; interrupt-map-mask = <0 0 0 7>
    1. Maps PCIe device interrupts to system interrupt controllers

    2. Specifies which bits of the child address/interrupt are significant for mapping.

  9. msi-map = <0x0 &its_gic 0x0 0x10000>;

    1. Maps MSI (Message Signaled Interrupts) to the Interrupt Translation Service (ITS) in GICv3.

    2. Enables MSI support for PCIe devices.

  10. Child Node: pcie_intc_0

    pcie_intc_0: pci-interrupt-controller {
        #address-cells = <0>;
        #interrupt-cells = <1>;
        interrupt-controller;
    };
    1. Declares a PCIe interrupt controller.

    2. Used to handle legacy INTx interrupts from PCIe devices.

Tested Endpoints

  • Samsung 970 Evo Plus NVMe SSD

  • Intel NVMe 750 SSD

  • WD SN 750 NVMe SSD

  • Intel NIC card

Test Procedure

The exact test procedure depends on the endpoint's functionality and use case.

The following procedure shows NVMe device testing (enable CONFIG_BLK_DEV_NVME in the kernel configuration):

  • Boot Linux on Versal.

  • Run lspci to confirm the NVMe card is detected, then perform the following steps:

    • mount /dev/nvmeXX  /mnt

    • cd /mnt

    • dd if=/dev/zero of=tmp.txt bs=4096 count=200000

Kernel Console Output

Driver Initialization for Versal

[ 2.897050] NFS: Registering the id_resolver key type
[ 2.902165] Key type id_resolver registered
[ 2.906381] Key type id_legacy registered
[ 2.910435] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.917201] jffs2: version 2.2. (NAND) (SUMMARY) © 2001-2006 Red Hat, Inc.
[ 2.953047] NET: Registered protocol family 38
[ 2.957537] xor: measuring software checksum speed
[ 2.964107] 8regs : 5654 MB/sec
[ 2.970004] 32regs : 6520 MB/sec
[ 2.976080] arm64_neon : 5834 MB/sec
[ 2.980468] xor: using function: 32regs (6520 MB/sec)
[ 2.985557] Key type asymmetric registered
[ 2.989682] Asymmetric key parser 'x509' registered
[ 2.994612] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
[ 3.002069] io scheduler mq-deadline registered
[ 3.006633] io scheduler kyber registered
[ 3.011024] xilinx-cpm-pcie fca10000.pci: host bridge /axi/pci@fca10000 ranges:
[ 3.018401] xilinx-cpm-pcie fca10000.pci: No bus range found for /axi/pci@fca10000, using [bus 00-ff]
[ 3.027888] xilinx-cpm-pcie fca10000.pci: MEM 0x00e0010000..0x00f000ffff -> 0x00e0010000
[ 3.036491] xilinx-cpm-pcie fca10000.pci: MEM 0x8000000000..0x807fffffff -> 0x8000000000
[ 3.045128] xilinx-cpm-pcie fca10000.pci: ECAM area [mem 0x600000000-0x600ffffff] can only accommodate [bus 00-0f] (reduced from [bus 00-ff] desired)
[ 3.058640] xilinx-cpm-pcie fca10000.pci: ECAM at [mem 0x600000000-0x600ffffff] for [bus 00-0f]
[ 3.067411] xilinx-cpm-pcie fca10000.pci: PCIe Link is UP
[ 3.073350] xilinx-cpm-pcie fca10000.pci: PCI host bridge to bus 0000:00
[ 3.080105] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 3.085633] pci_bus 0000:00: root bus resource [mem 0xe0010000-0xf000ffff]
[ 3.092562] pci_bus 0000:00: root bus resource [mem 0x8000000000-0x807fffffff]
[ 3.099862] pci 0000:00:00.0: [10ee:b03f] type 01 class 0x060400
[ 3.105920] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x7fffffff]
[ 3.112269] pci 0000:00:00.0: supports D1
[ 3.116309] pci 0000:00:00.0: PME# supported from D0 D1 D3hot D3cold
[ 3.123156] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[ 3.131292] pci 0000:01:00.0: [144d:a80a] type 00 class 0x010802
[ 3.137360] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[ 3.144706] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[ 3.151383] pci 0000:00:00.0: BAR 0: no space for [mem size 0x80000000]
[ 3.158051] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x80000000]
[ 3.165068] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe01fffff]
[ 3.171912] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe0103fff 64bit]
[ 3.179288] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 3.184290] pci 0000:00:00.0: bridge window [mem 0xe0100000-0xe01fffff]
[ 3.191565] ps_pcie_dma init()
[ 3.210174] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 3.217174] Serial: AMBA driver
[ 3.221035] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 3.230203] brd: module loaded
[ 3.236245] loop: module loaded
[ 3.240072] nvme nvme0: pci function 0000:01:00.0
[ 3.244846] pci 0000:00:00.0: enabling device (0000 -> 0002)
[ 3.245005] mtdoops: mtd device (mtddev=name/number) must be supplied
[ 3.250559] nvme 0000:01:00.0: enabling device (0000 -> 0002)
[ 3.257851] libphy: Fixed MDIO Bus: probed
[ 3.265491] nvme nvme0: Shutdown timeout set to 10 seconds
[ 3.267748] tun: Universal TUN/TAP device driver, 1.6
[ 3.277712] CAN device driver interface
[ 3.278125] nvme nvme0: 2/0/0 default/read/poll queues
[ 3.281846] usbcore: registered new interface driver asix
[ 3.292233] usbcore: registered new interface driver ax88179_178a

Testing of NVMe Access and Interrupts

root@xilinx:~# lspci
00:00.0 PCI bridge: Xilinx Corporation Device a03f
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
root@xilinx:~# mount /dev/nvme0n1 /mnt/
root@xilinx:~# cd /mnt
root@xilinx:~# dd if=/dev/zero of=tmp.txt bs=8192 count=200000
200000+0 records in
200000+0 records out
root@xilinx:~# cat /proc/interrupts
           CPU0       CPU1
  3:       2741       8093     GICv3  30 Level     arch_timer
  9:          0          0     GICv3  92 Level     zynqmp-dma
 10:          0          0     GICv3  93 Level     zynqmp-dma
 11:          0          0     GICv3  94 Level     zynqmp-dma
 12:          0          0     GICv3  95 Level     zynqmp-dma
 13:          0          0     GICv3  96 Level     zynqmp-dma
 14:          0          0     GICv3  97 Level     zynqmp-dma
 15:          0          0     GICv3  98 Level     zynqmp-dma
 16:          0          0     GICv3  99 Level     zynqmp-dma
 21:          0          0     GICv3  47 Level     cdns-i2c
 22:          0          0     GICv3 174 Level     f12a0000.rtc
 23:          0          0     GICv3 175 Level     f12a0000.rtc
 24:          0          0     GICv3 160 Level     mmc0
 25:       1114          0     GICv3  50 Level     uart-pl011
 27:          3          0     GICv3 157 Level     f1030000.spi
 28:          0          0     GICv3 104 Level     xilinx-pcie
 29:          0          0     GICv3  62 Level     zynqmp_ipi
 31:         10          0   ITS-MSI 524288 Edge   nvme0q0
 32:        730          0   ITS-MSI 524289 Edge   nvme0q1
 33:          0       1741   ITS-MSI 524290 Edge   nvme0q2
IPI0:      1881       1446   Rescheduling interrupts
IPI1:      1289        465   Function call interrupts
IPI2:         0          0   CPU stop interrupts
IPI3:         0          0   CPU stop (for crash dump) interrupts
IPI4:         0          0   Timer broadcast interrupts
IPI5:         0          0   IRQ work interrupts
IPI6:         0          0   CPU wake-up interrupts
Err:          0

Debug Information

  • If Link is Down:

    1. Check that the EP is connected properly and that all components in the setup are correct.

    2. Try a different EP device.

    3. Confirm the EP is in working condition by connecting it to a different host (e.g., x86).

    4. If using a switch, check whether the switch settings for the slot are correct. Try different switch slots.

  • If endpoint not detected/enumerated:

    1. Try using a different endpoint device.

    2. If using a switch, check whether the switch settings for the slot are correct. Try different switch slots.

  • If the issue still persists, raise a support request with the boot log, lspci -vvv output, the design, EP details, and the release version used.

Why PL IPs (QDMA/XDMA) Are Not Supported in This Driver

  1. Different IP Cores: This driver manages the CPM hard IP, which provides basic PCIe connectivity. The PL IPs (QDMA/XDMA) are separate soft IP cores that provide advanced DMA capabilities.

  2. Driver Separation: QDMA has its own dedicated driver (pcie-xilinx-dma-pl.c) in the Xilinx ecosystem because:

    • XDMA/QDMA requires specialized register programming

    • Different interrupt handling mechanisms

    • Complex queue management not suitable for generic PCIe host drivers

  3. Use Case Distinction:

    • CPM Driver: Provides basic PCIe host bridge functionality

    • QDMA/ XDMA Driver: Provides high-performance DMA engines for data movement


Change Log

  • 2019.2

    • Add support for Versal CPM4 root port driver

  • 2020.1

    • None

  • 2020.2

    • None

  • 2021.1

    • None

  • 2021.2

    • Added support for Versal CPM4 root port mainline driver.

  • 2022.1

    • Added support for Versal CPM5 root port patch.

  • 2022.2

    • Added support for Versal CPM5 root port mainline driver.

 

© Copyright 2019 - 2022 Xilinx Inc.