Linux Soft DMA Driver

This page covers the Linux driver for the Xilinx Soft DMA IPs, including AXI DMA, AXI CDMA, AXI MCDMA, and AXI VDMA, on Zynq, Zynq UltraScale+ MPSoC, Versal, and MicroBlaze.


Introduction


The Soft IP DMA (AXI DMA/CDMA/MCDMA/VDMA) driver is available as part of the Xilinx Linux distribution and in the open source Linux kernel as drivers/dma/xilinx/xilinx_dma.c

HW IP features

AXI DMA

The AXI Direct Memory Access (AXI DMA) IP provides high-bandwidth direct memory access between the AXI4 memory mapped and AXI4-Stream-type target peripherals. Its optional scatter-gather capabilities also offload data movement tasks from the Central Processing Unit (CPU) in processor-based systems. Initialization, status, and management registers are accessed through an AXI4-Lite slave interface.
  • AXI4 and AXI4-Stream compliant
  • Optional Scatter/Gather (SG) DMA support; when SG mode is not selected, the IP operates in Simple DMA mode.
  • Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
  • Optional Data Re-Alignment Engine
  • Optional AXI Control and Status Streams
  • Optional Keyhole support
  • Optional Micro DMA mode support
  • Support for up to 64-bit Addressing

Features supported in the driver

  • Optional Scatter/Gather (SG) DMA support; when SG mode is not selected, the IP operates in Simple DMA mode.
  • Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
  • Optional Data Re-Alignment Engine
  • Optional AXI Control and Status Streams
  • 64-bit Addressing support

Note: Multi-channel mode is no longer a supported mode of operation for AXI DMA.

AXI CDMA

The AXI CDMA provides high-bandwidth direct memory access (DMA) between a memory mapped source address and a memory mapped destination address using the AXI4 protocol. An optional Scatter Gather (SG) feature can be used to offload control and sequencing tasks from the System CPU. Initialization, status, and control registers are accessed through an AXI4-Lite slave interface.
  • AXI4 Compliant
  • Primary AXI4 Memory Map data width support of 32, 64, 128, and 256 bits
  • Optional Data Re-Alignment Engine
  • Provides Simple DMA only mode and an optional hybrid mode supporting both Simple DMA and Scatter-Gather automation
  • Optional Store and Forward operation mode with internal Data FIFO (First In First Out)

Features supported in the driver

  • Simple DMA mode
  • Scatter-Gather (SG) DMA mode; when SG mode is not selected in HW, the IP operates in Simple DMA mode
  • Primary AXI4 Memory Map data width support of 32, 64, 128, and 256 bits
  • Optional Data Re-Alignment Engine
  • 64-bit Addressing support

AXI VDMA

The AXI Video Direct Memory Access (AXI VDMA) core is a soft Xilinx IP core that provides high-bandwidth direct memory access between memory and AXI4-Stream type video target peripherals. The core provides efficient two dimensional DMA operations with independent asynchronous read and write channel operation. Initialization, status, interrupt and management registers are accessed through an AXI4-Lite slave interface.
  • High-bandwidth direct memory access for video streams
  • Efficient two-dimensional DMA operations
  • Independent, asynchronous read and write channel operation
  • Gen-Lock frame buffer synchronization
  • Supports a maximum of 32 frame buffers
  • Supports dynamic video format changes
  • Configurable Burst Size and Line Buffer depth for efficient video streaming
  • Processor accessible initialization, status, interrupt and management registers
  • Primary AXI Stream data width support for multiples of 8-bits: 8, 16, 24, 32, etc. up to 1024 bits
  • 64-bit Addressing

Features supported in the driver

  • Support for a maximum of 32 frame buffers
  • 64-bit Addressing
  • Gen-lock frame buffer synchronization


AXI MCDMA

The AXI Multichannel Direct Memory Access (AXI MCDMA) core is a soft Xilinx IP core for use with the Xilinx Vivado Design Suite. The AXI MCDMA provides high-bandwidth direct memory access between memory and AXI4-Stream target peripherals. The core provides a Scatter/Gather (SG) interface with support for multiple channels, each independently configurable.

  • AXI4 data width support of 32, 64, 128, 256, 512, and 1,024 bits
  • AXI4-Stream data width support of 8, 16, 32, 64, 128, 256, 512, and 1,024 bits
  • Supports up to 16 independent channels
  • Supports per Channel Interrupt output
  • Supports data realignment engine (DRE) alignment for streaming data width of up to 512 bits
  • Supports up to 64 MB transfer per Buffer Descriptor (BD)
  • Optional AXI4-Stream Control and Status Streams

Features supported in the driver

  • Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
  • 64-bit Addressing Support
  • Scatter-Gather DMA mode on all 16 supported S2MM and MM2S channels.
  • 64 MB transfer per Buffer Descriptor (BD)
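The 64 MB-per-BD limit follows from the width of the buffer-length field each descriptor is configured with: the maximum transfer is 2^width - 1 bytes. A minimal illustrative sketch (plain Python, not part of the driver):

```python
# The maximum transfer per Buffer Descriptor (BD) is bounded by the
# configured buffer-length field width: 2**width - 1 bytes.
def max_transfer_per_bd(length_width_bits: int) -> int:
    return 2 ** length_width_bits - 1

# A 26-bit length field yields the 64 MB (minus one byte) limit quoted
# above; a node with xlnx,sg-length-width = <0xe> (14 bits) instead caps
# each BD at 16383 bytes.
print(max_transfer_per_bd(26))  # 67108863
print(max_transfer_per_bd(14))  # 16383
```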


Missing Features and Known Issues/Limitations in Driver

AXI DMA

  • No support for the Keyhole feature

AXI CDMA

  • None

AXI VDMA

  • No support for configurable Burst Size and Line Buffer depth

Kernel Configuration

The following config options must be enabled to build the Soft IP DMA (AXI DMA/CDMA/VDMA/MCDMA) driver:
CONFIG_DMADEVICES
CONFIG_XILINX_DMA




The driver is available at:
https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/xilinx_dma.c

Devicetree

The device tree node for AXI DMA/CDMA/MCDMA/VDMA is generated automatically by the Device Tree BSP when the core is present in the HW design.

The steps to generate a device tree are documented here:
http://www.wiki.xilinx.com/Build+Device+Tree+Blob

Sample bindings are shown below; the DT properties are described in the device tree binding documentation that ships with the driver.

AXI DMA
 axi_dma_1: dma@40400000 {
                        #dma-cells = <1>;
                        clock-names = "s_axi_lite_aclk", "m_axi_sg_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk";
                        clocks = <&clkc 15>, <&clkc 15>, <&clkc 15>, <&clkc 15>;
                        compatible = "xlnx,axi-dma-1.00.a";
                        interrupt-parent = <&intc>;
                        interrupts = <0 29 4 0 30 4>;
                        reg = <0x40400000 0x10000>;
                        xlnx,addrwidth = <0x20>;
                        xlnx,include-sg ;
                        dma-channel@40400000 {
                                compatible = "xlnx,axi-dma-mm2s-channel";
                                dma-channels = <0x1>;
                                interrupts = <0 29 4>;
                                xlnx,datawidth = <0x20>;
                                xlnx,device-id = <0x0>;
                                xlnx,include-dre ;
                        };
                        dma-channel@40400030 {
                                compatible = "xlnx,axi-dma-s2mm-channel";
                                dma-channels = <0x1>;
                                interrupts = <0 30 4>;
                                xlnx,datawidth = <0x20>;
                                xlnx,device-id = <0x0>;
                                xlnx,include-dre ;
                        };
                };
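The flat interrupts list in the node above packs one three-cell GIC specifier per interrupt (type, number, trigger). An illustrative Python sketch (not part of any tool) that splits it back out:

```python
# Split a flat 'interrupts' cell list into three-cell GIC specifiers:
# (type, number, trigger). In the ARM GIC binding, type 0 is a shared
# peripheral interrupt (SPI) and trigger 4 is active-high level-sensitive.
def decode_gic_interrupts(cells):
    if len(cells) % 3:
        raise ValueError("expected a multiple of 3 cells")
    return [tuple(cells[i:i + 3]) for i in range(0, len(cells), 3)]

# The AXI DMA node above: MM2S interrupt 29 and S2MM interrupt 30.
print(decode_gic_interrupts([0, 29, 4, 0, 30, 4]))
# -> [(0, 29, 4), (0, 30, 4)]
```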
 


AXI CDMA
axi_cdma_0: dma@4e200000 {
                        #dma-cells = <1>;
                        clock-names = "s_axi_lite_aclk", "m_axi_aclk";
                        clocks = <&clkc 15>, <&clkc 15>;
                        compatible = "xlnx,axi-cdma-1.00.a";
                        interrupt-parent = <&intc>;
                        interrupts = <0 31 4>;
                        reg = <0x4e200000 0x10000>;
                        xlnx,addrwidth = <0x20>;
                        xlnx,include-sg ;
                        dma-channel@4e200000 {
                                compatible = "xlnx,axi-cdma-channel";
                                interrupts = <0 31 4>;
                                xlnx,datawidth = <0x20>;
                                xlnx,device-id = <0x0>;
                                xlnx,include-dre ;
                                xlnx,max-burst-len = <0x10>;
                        };
                };
 

AXI VDMA
axi_vdma_0: dma@43000000 {
                        #dma-cells = <1>;
                        clock-names = "s_axi_lite_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk", "m_axis_mm2s_aclk", "s_axis_s2mm_aclk";
                        clocks = <&clkc 15>, <&clkc 15>, <&clkc 15>, <&clkc 15>, <&clkc 15>;
                        compatible = "xlnx,axi-vdma-1.00.a";
                        interrupt-parent = <&intc>;
                        interrupts = <0 32 4 0 33 4>;
                        reg = <0x43000000 0x10000>;
                        xlnx,addrwidth = <0x20>;
                        xlnx,flush-fsync = <0x1>;
                        xlnx,num-fstores = <0x1>;
                        dma-channel@43000000 {
                                compatible = "xlnx,axi-vdma-mm2s-channel";
                                interrupts = <0 32 4>;
                                xlnx,datawidth = <0x20>;
                                xlnx,device-id = <0x0>;
                                xlnx,genlock-mode ;
                                xlnx,include-dre ;
                        };
                        dma-channel@43000030 {
                                compatible = "xlnx,axi-vdma-s2mm-channel";
                                interrupts = <0 33 4>;
                                xlnx,datawidth = <0x20>;
                                xlnx,device-id = <0x0>;
                                xlnx,genlock-mode ;
                                xlnx,include-dre ;
                        };
                };
 
AXI MCDMA
axi_mcdma_0: axi_mcdma@a4040000 {
			#dma-cells = <1>;
			clock-names = "s_axi_aclk", "s_axi_lite_aclk";
			clocks = <&misc_clk_0>, <&misc_clk_0>;
			compatible = "xlnx,axi-mcdma-1.1", "xlnx,axi-mcdma-1.00.a";
			interrupt-names = "mm2s_ch1_introut", "mm2s_ch2_introut", "mm2s_ch3_introut", "mm2s_ch4_introut", "mm2s_ch5_introut", "mm2s_ch6_introut", "mm2s_ch7_introut", "mm2s_ch8_introut", "s2mm_ch1_introut", "s2mm_ch2_introut", "s2mm_ch3_introut", "s2mm_ch4_introut", "s2mm_ch5_introut", "s2mm_ch6_introut", "s2mm_ch7_introut", "s2mm_ch8_introut";
			interrupt-parent = <&gic>;
			interrupts = <0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4>;
			reg = <0x0 0xa4040000 0x0 0x10000>;
			xlnx,addrwidth = <0x28>;
			xlnx,dlytmr-resolution = <0x7d>;
			xlnx,enable-single-intr = <0x0>;
			xlnx,group1-mm2s = <0xa98ac7>;
			xlnx,group1-s2mm = <0xa98ac7>;
			xlnx,group2-mm2s = <0x0>;
			xlnx,group2-s2mm = <0x0>;
			xlnx,group3-mm2s = <0x0>;
			xlnx,group3-s2mm = <0x0>;
			xlnx,group4-mm2s = <0x0>;
			xlnx,group4-s2mm = <0x0>;
			xlnx,group5-mm2s = <0x0>;
			xlnx,group5-s2mm = <0x0>;
			xlnx,group6-mm2s = <0x0>;
			xlnx,group6-s2mm = <0x0>;
			xlnx,include-mm2s = <0x1>;
			xlnx,include-mm2s-dre = <0x1>;
			xlnx,include-mm2s-sf = <0x1>;
			xlnx,include-s2mm = <0x1>;
			xlnx,include-s2mm-dre = <0x1>;
			xlnx,include-s2mm-sf = <0x1>;
			xlnx,include-sg ;
			xlnx,mm2s-burst-size = <0x40>;
			xlnx,mm2s-scheduler = <0x2>;
			xlnx,num-mm2s-channels = <0x8>;
			xlnx,num-s2mm-channels = <0x8>;
			xlnx,prmry-is-aclk-async = <0x0>;
			xlnx,s2mm-burst-size = <0x40>;
			xlnx,sg-include-stscntrl-strm = <0x0>;
			xlnx,sg-length-width = <0xe>;
			xlnx,sg-use-stsapp-length = <0x0>;
			dma-channel@a4040000 {
				compatible = "xlnx,axi-dma-mm2s-channel";
				dma-channels = <0x8>;
				interrupt-parent = <&gic>;
				interrupts = <0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 84 4 0 84 4>;
				xlnx,datawidth = <0x40>;
				xlnx,device-id = <0x1>;
				xlnx,include-dre ;
			};
			dma-channel@a4040030 {
				compatible = "xlnx,axi-dma-s2mm-channel";
				dma-channels = <0x8>;
				interrupt-parent = <&gic>;
				interrupts = <0 85 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4 0 85 4>;
				xlnx,datawidth = <0x40>;
				xlnx,device-id = <0x1>;
				xlnx,include-dre ;
			};
};

Test Procedure

AXI DMA and MCDMA

A separate test client is provided to test the functionality of the IP; it assumes that the IP streaming interfaces are connected back-to-back in the HW design.
The test client transfers data out on the MM2S streaming interface and compares it against the data received on the S2MM interface.
The test client is available in the Linux source at: https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/axidmatest.c

NOTE: In a ZynqMP Vivado design it is mandatory to set high address = 1 (Zynq UltraScale+ MPSoC customization -> PS-PL configuration -> Address Fragmentation -> High Address)
and to set the AXI DMA/AXI MCDMA address width to 40 bits. For details, please refer to: http://www.wiki.xilinx.com/PL+Masters

The test client can be built as a loadable module or built into the kernel.



Device-tree Node for the axidma test client
 axidmatest_1: axidmatest@1 {
                      compatible ="xlnx,axi-dma-test-1.00.a";
                      dmas = <&axi_dma_1 0
                              &axi_dma_1 1>;
                      dma-names = "axidma0", "axidma1";
 } ;
Device-tree Node for the aximcdma test client
 axidmatest_1: axidmatest@1 {
                      compatible ="xlnx,axi-dma-test-1.00.a";
                      dmas = <&axi_mcdma_0 0
                              &axi_mcdma_0 16>;
                      dma-names = "axidma0", "axidma1";
 } ;
NOTE: For MCDMA, MM2S (write/TX) channel IDs start from 0 and lie in the range [0-15]; S2MM (read/RX) channel IDs start from 16 and lie in the range [16-31]. These channel IDs are fixed irrespective of the IP configuration.
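The fixed channel-ID scheme described in the note above can be captured in a small helper (illustrative Python, not driver code) that computes the specifier cell to place in a client's dmas property:

```python
# Compute the <dmas> specifier cell for an AXI MCDMA channel.
# MM2S (TX) channels occupy IDs 0-15 and S2MM (RX) channels IDs 16-31,
# regardless of how many channels the IP is actually configured with.
def mcdma_chan_id(direction: str, channel: int) -> int:
    if not 0 <= channel <= 15:
        raise ValueError("channel index must be in 0..15")
    return {"mm2s": 0, "s2mm": 16}[direction] + channel

# The aximcdma test client node above uses MM2S channel 0 (cell 0)
# and S2MM channel 0 (cell 16):
print(mcdma_chan_id("mm2s", 0), mcdma_chan_id("s2mm", 0))  # 0 16
```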
Running the test client displays the following messages when the test is successful:
dmatest: Started 1 threads using dma0chan0 dma0chan1
dma0chan0-dma0c: terminating after 5 tests, 0 failures (status 0)

AXI CDMA

The generic kernel dmatest client is used to test the functionality of the IP: it copies data from one memory location to another and then compares the destination against the source.
The test client is available in the Linux source at: https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/dmatest.c

The test client can be built as a loadable module or built into the kernel.
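The generic dmatest client has its own upstream config option, enabled (as a module or built-in) in addition to the driver options listed under Kernel Configuration:
CONFIG_DMATEST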

Run the dmatest client as shown below; the following messages are displayed when the test is successful:
echo 1 > /sys/module/dmatest/parameters/verbose
echo dma1chan0 > /sys/module/dmatest/parameters/channel
echo 2000 > /sys/module/dmatest/parameters/timeout
echo 1 > /sys/module/dmatest/parameters/iterations
echo 1 > /sys/module/dmatest/parameters/run
    
[ 359.611486] dmatest: Started 1 threads using dma1chan0
[ 359.617245] dmatest: dma1chan0-copy0: result #1: 'test passed' with src_off=0x830 dst_off=0x368 len=0x3560 (0)
[ 359.629924] dmatest: dma1chan0-copy0: summary 1 tests, 0 failures 77 iops 1002 KB/s (0)

AXI VDMA

A separate test client is provided to test the functionality of the IP; it assumes that the IP streaming interfaces are connected back-to-back in the HW design.
The test client transfers data out on the MM2S streaming interface and compares it against the data received on the S2MM interface.
The test client is available in the Linux source at: https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/vdmatest.c

The test client can be built as a loadable module or built into the kernel.



Device-tree Node for the test client
vdmatest_1: vdmatest@1 {
               compatible ="xlnx,axi-vdma-test-1.00.a";
               xlnx,num-fstores = <0x3>;
               dmas = <&axi_vdma_0 0
                       &axi_vdma_0 1>;
               dma-names = "vdma0", "vdma1";
 } ;

Running the test client displays the following messages when the test is successful:
vdmatest: Started 1 threads using dma0chan0 dma0chan1
dma0chan0-dma0c: terminating after 1 tests, 0 failures (status 0)

Mainline Status

The current driver available in the Xilinx Linux git is in sync with the open source kernel driver except for the following:
  • DMA client drivers (axidmatest and vdmatest - these are Xilinx-specific DMA client drivers and are not upstreamable)

Change Log

2023.2

  • Mainline fix pulled in to handle child node error path

https://github.com/Xilinx/linux-xlnx/commits/xilinx-v2023.2/drivers/dma/xilinx/xilinx_dma.c


2023.1

  • Error path handling and kernel doc fix

https://github.com/Xilinx/linux-xlnx/commits/xilinx-v2023.1/drivers/dma/xilinx/xilinx_dma.c

2022.2

No changes

2022.1

  • Bugfix for IRQ mapping errors to allow deferred probe.

https://github.com/Xilinx/linux-xlnx/commits/master/drivers/dma/xilinx/xilinx_dma.c

2021.2

  • Update DMA mask for high memory access.
  • Documentation and smatch warning fixes.

https://github.com/Xilinx/linux-xlnx/commits/xlnx_rebase_v5.10_2021.2/drivers/dma/xilinx/xilinx_dma.c

2021.1

  • 5.10 kernel rebase
  • Trivial coverity fixes.

cf3db08cef1b dmaengine: xilinx_dma: Typecast with enum to fix the coverity warning
e7007bc989ff dmaengine: xilinx_dma: Modify variable type to fix the incompatible warning
c1d6cd7cde88 dmaengine: xilinx_dma: Add condition to check return value


2020.2

  • MCDMA fixes (SG capability, usage of xilinx_aximcdma_tx_segment) 
  • Add missing check for empty list in xilinx_dma_tx_status
  • use readl_poll_timeout_atomic variant

7a34a475cf62 dmaengine: xilinx_dma: Fix SG capability check for MCDMA
a44799713f85 dmaengine: xilinx_dma: Fix usage of xilinx_aximcdma_tx_segment
2aefe65f2861 dmaengine: xilinx_dma: Add missing check for empty list
d2798ec3512e dmaengine: xilinx_dma: use readl_poll_timeout_atomic variant

2020.1

  • Fix dma channel node order dependency.
  • MCDMA IP support.
  • 5.4 kernel upgrade (Merge tag 'dmaengine-5.3-rc1' of slave tree + xilinx tree rebase patches)

Commits:

b6848e6 dmaengine: xilinx_dma: In dma channel probe fix node order dependency
87e34b2 dmaengine: xilinx_dma: Extend dma_config structure to store max channel count
9fef941 dmaengine: xilinx_dma: Add Xilinx AXI MCDMA Engine driver support
<snip>
47ebe00 Merge tag 'dmaengine-5.3-rc1' of slave tree

2019.2

  • Fix SG internal error in cdma prep_dma_sg mode.
  • Clear desc_pendingcount in xilinx_dma_reset.
  • Check for both idle and halted state in axidma stop.
  • Residue calculation and reporting.
  • Removed the cdmatest client; going forward, the generic dmatest client must be used for CDMA validation.

Commits:

552d3f1 dmaengine: xilinx_dma: Fix SG internal error in cdma prep_dma_sg mode
e4a9ef8 dmaengine: xilinx: Clear desc_pendingcount in xilinx_dma_reset
136cd70 dmaengine: xilinx: Check for both idle and halted state in axidma stop_transfer
478500b dmaengine: xilinx: Print debug message when no free tx segments
8eab5a1 dmaengine: xilinx: Remove residue from channel data
0f7b82f dmaengine: xilinx: Add callback_result support
bc6a6ab dmaengine: xilinx: Introduce xilinx_dma_get_residue
41176b9 dmaengine: xilinx: Merge get_callback and _invoke
976bab6 dmaengine: xilinx_dma: Remove desc_callback_valid check

2019.1

  • Remove axidma multi-channel mode support
  • Fix 64-bit simple AXIDMA transfer
  • Fix control reg update in vdma_channel_set_config 

Commits:

8c8e3b1 dmaengine: xilinx_dma: Remove axidma multi-channel mode support
c3b6c45 dmaengine: xilinx_dma: Fix 64-bit simple AXIDMA transfer
965442b dmaengine: xilinx_dma: Introduce helper macro for preparing dma address
fbde9af dmaengine: xilinx_dma: Fix control reg update in vdma_channel_set_config 

2018.3

  • Reset DMA channel in dma_terminate_all.
  • Fix 64-bit simple CDMA transfer.
  • Code refactoring.

Commits:

1c8b3af dmaengine: xilinx_dma: Reset DMA channel in dma_terminate_all
cf9dfe6 dmaengine: xilinx_dma: Minor refactoring
44b796e dmaengine: xilinx_dma: Fix 64-bit simple CDMA transfer
113e03d dmaengine: xilinx_dma: Move enum xdma_ip_type to driver file
55ea663 dmaengine: xilinx_dma: Fix typos

2018.2
Summary:
  • Add support for 64MB data transfer.
Commits:
f479cb5 dmaengine: xilinx: dma: In axidma add support for 64MB data transfer
2018.1
Summary:
  • Upgrade to 4.14 kernel.
  • Trivial code cleanup, i.e., refactor axidma channel allocation.
  • Free BD consistent memory in channel free_chan_resources.
  • Fix DMA idle state on terminate_all.
  • Enable VDMA S2MM vertical flip support.
  • Add support for memory sg transactions for CDMA.
  • In AXIDMA program hardware supported buffer length.
Commits:
818f168 Merge tag 'v4.14' into master
62515d5 dma: xilinx: xilinx_dma: Refactor axidma channel allocation
b0d0ec6 dma: xilinx: xilinx_dma: Free BD consistent memory
a9aeecb dma: xilinx: making dma state as idle on terminating all
1eb7c59 dmaengine: xilinx: dma: Enable VDMA S2MM vertical flip support
d5b6e8d dma: xilinx: xilinx_dma: Move open brace '{' to function definition next line
2eee108 dma: xilinx: xilinx_dma: Document functions return value
ff238b0 dma: xilinx: Add support for memory sg transactions for cdma
86b2c03 vdmatest: xilinx: Add hsize and vsize module parameter
b78597b vdmatest: xilinx: Fix VDMA hang reported in certain resolutions
01a61a2 vdmatest: xilinx: Use octal permissions '0444'
dcee02c dmaengine: xilinx: dma: Program hardware supported buffer length

2017.4
Summary:
  • Added support for memory sg transactions for cdma
  • Fixed race conditions in the driver for cdma
  • Differentiate probe based on IP type.
  • Fix compiler warning.
Commits:
9e8f5fc dma: xilinx: Add support for memory sg transactions for cdma
b3fe111 dma: xilinx: Fix race conditions in the driver for cdma.
61a18fd dma: xilinx: Differentiate probe based on the IP type.
322bd63 dma: xilinx: xilinx_dma: Fix compilation warning.

2017.3
Summary:
  • Fix issues with dma_get_slave_caps API for AXI DMA configuration.
  • Fix issues with vdma multi fstore configuration.
Commits:
ed2ee32 dma: xilinx: Fix issues with vdma mulit fstore configuration
54c8b75 dma: xilinx: Fix dma_get_slave_caps gaps

2017.2
  • None

2017.1
Summary:
  • Add idle checks across the driver for all the DMAs (AXI DMA/CDMA/VDMA) before submitting the descriptor.
  • Fix bug in multiple frame stores scenario in vdma
  • Fix race condition in the driver for multiple descriptor scenario for axidma.
Commits:
d4df1d5 dma: xilinx_dma: check for channel idle state before submitting the dma descriptor.
05ce73d dma: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
3794829 dma: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario for axidma.

2016.4
  • None

2016.3
Summary:
  • Mainlined the driver
  • Fixed the issues as per the commit ID
  • Deleted the standalone AXI DMA and AXI CDMA drivers and merged their code into the VDMA driver
  • Merged all three DMA drivers into a single driver
Commits:
f4cd973 dma: xilinx: axidma: Fix race condition in the cyclic dma mode
853502d vdma: sync driver with mainline
52619f dma: xilinx: Delete AXI DMA driver
97833b1 dma: xilinx: Delete AXI CDMA driver
d78c414 dmaengine: vdma: Use dma_pool_zalloc
7531bdc dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
0cc811a dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
90b6146 dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
300f90b dmaengine: vdma: Add config structure to differentiate dmas
cc28fc1 dmaengine: vdma: Add clock support
0717493 dmaengine: vdma: don't crash when bad channel is requested
60c30ad dmaengine: vdma: Add support for cyclic dma mode
d0509b1 dmaengine: vdma: Use dma_pool_zalloc
d7cb73e dmaengine: vdma: Fix compilation warning in cyclic dma mode
18ef650 dmaengine: vdma: Add 64 bit addressing support for the axi dma
ab182b3 dmaengine: vdma: Add 64 bit addressing support for the axi cdma
aa32340 dmaengine: vdma: Add support for mulit-channel dma mode
fb5fb40 dmaengine: xilinx: Rename driver and config
c41a863 dmaengine: xilinx: Use different channel names for each dma
ebccb5e dmaengine: xilinx: Fix race condition in axi dma cyclic dma mode
6300822 dma: xilinx: Update test clients depends config option
8408c14 dma: xilinx: Check for channel idle state before submitting dma descr…

© Copyright 2019 - 2022 Xilinx Inc.