Video Framebuffer Read
Overview
The Video Framebuffer Read IP core is designed for video applications that require frame buffers, providing high-bandwidth access between the AXI4-Stream video interface and the memory-mapped AXI4 interface.
Location
The driver is currently located in a special branch of the standard Xilinx Linux kernel: https://github.com/Xilinx/linux-xlnx/tree/2017.3_video_ea
Supported IP Features
The following IP features are supported by the driver and have been verified within the context of the listed reference designs (see below):
1. Streaming Video Format Support: RGB, YUV 4:2:2, YUV 4:4:4, YUV 4:2:0
2. Memory Video Format Support: RGB8, BGRX8, RGBX8, YUYV8, YUVX8, RGBX10, YUVX10, Y_UV8, Y_UV8_420, UYVY8, YUV8, Y_UV10, Y_UV10_420, Y8, Y10
3. Programmable memory video format
4. Support for 8-bit or 10-bit per color component on stream or memory interface
5. Resolutions up to 3840x2160
Unsupported IP Features
The following IP features either have no driver support or have not yet been verified to work in any existing technical reference design:
1. Resolutions up to 8192x4320
Known Issues
When DMA operations are initiated by a client, the hardware is placed into "autorestart" mode. After the last buffer has been returned to the client as "completed", if the client neither supplies a new read buffer location nor halts the driver, the last buffer location programmed will continue to be used. In effect, the driver will "spin" on the last location programmed.
Kernel Configuration
The driver must be enabled in the kernel by selecting the option CONFIG_XILINX_FRMBUF.
Device Tree Configuration
Comprehensive documentation regarding device tree configuration may be found at: <linux_root>/Documentation/devicetree/bindings/dma/xilinx/xilinx_frmbuf.txt
Below is a device tree example for a Framebuffer Read instance configured with 32-bit wide DMA descriptors and support for RGB8 as well as RGBX8 memory formats:
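A minimal sketch of such a node, assuming a typical register base address, interrupt number, and GPIO reset line; the xlnx,vid-formats strings shown here are assumed to be the binding's names for the RGB8 and RGBX8 memory formats — verify all values against the xilinx_frmbuf.txt binding document and your hardware design:

```dts
v_frmbuf_rd_0: v_frmbuf_rd@80000000 {
        #dma-cells = <1>;
        compatible = "xlnx,axi-frmbuf-rd-v2.1";
        interrupt-parent = <&gic>;
        interrupts = <0 92 4>;                 /* illustrative IRQ */
        reg = <0x0 0x80000000 0x0 0x10000>;    /* illustrative base address */
        reset-gpios = <&gpio 80 1>;            /* illustrative reset line */
        xlnx,dma-addr-width = <32>;            /* 32-bit wide DMA descriptors */
        xlnx,vid-formats = "bgr888", "xbgr8888"; /* assumed RGB8, RGBX8 names */
};
```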
Interfacing with the Video Framebuffer Driver from DMA Clients
The Linux driver for Framebuffer Read implements the Linux DMA Engine interface semantics for a single-channel DMA controller. Because the IP is video-format aware, it has capabilities that are not fully served by the DMA Engine interface. As such, the Video Framebuffer driver exports an API that DMA clients must use in addition to the Linux DMA Engine interface for proper programming (see <linux_root>/include/linux/dma/xilinx_frmbuf.h).
The general steps for preparing DMA to read from a specific memory buffer are:
1. Using the Video Framebuffer API, configure the DMA device with the expected memory format for read
2. Prepare an interleaved template describing the buffer location (note: see section DMA Interleaved Template Requirements below for more details)
3. Pass the interleaved template to the DMA device using the Linux DMA Engine interface
4. Using the DMA descriptor returned from step 3, add a callback and then submit it to the DMA device via the DMA Engine interface
5. Start the DMA read operation
6. Terminate the DMA read operation when the client deems frame processing complete
DMA Interleaved Template Requirements
The Video Framebuffer IP supports two DMA address pointers for semi-planar formats: one for luma and one for chroma. As such, data for the two planes need not be strictly contiguous, which permits alignment of plane data within a larger buffer. However, the driver currently requires that all frame data (luma and chroma) be contained within a contiguous frame buffer, with luma plane data arranged before chroma data. Note that this is a limitation imposed not by the IP but by the driver at this time. When preparing a struct dma_interleaved_template instance to describe a semi-planar format, the members must be filled out as follows:
struct dma_interleaved_template:
- src_start = <physical address from which to start reading frame data (any offsets should be added to this value)>
- src_sgl = true
- dst_sgl = false
- numf = <height of the frame in lines; for semi-planar formats, the height of the luma plane>
- frame_size = <1 or 2, depending on whether this describes a packed or a semi-planar format>
- sgl = <see struct data_chunk below>
struct data_chunk:
- sgl.size = <number of bytes devoted to image data in a row>
- sgl.icg = <number of non-data (padding) bytes within a row of image data>
- sgl.src_icg = <offset in bytes between the end of the luma plane data and the start of the chroma plane data; only needed for semi-planar formats>
Below is a code example for semi-planar YUV 4:2:2 (i.e. NV16):
Driver Operation
The Framebuffer driver manages buffer descriptors in software, keeping each descriptor in one of four possible states, in the following order: "pending", "staged", "active", and "done".
When a DMA client calls dmaengine_submit(), the buffer descriptor is placed on the driver's "pending" queue. Multiple buffers can be queued in this manner by the DMA client before proceeding to the next step (see step 4 of Interfacing with the Video Framebuffer Driver from DMA Clients).

When dma_async_issue_pending() is called (step 5), the driver begins processing all queued buffers on the "pending" list. A buffer is plucked from the pending list and stored as "staged". At this moment, the driver programs the registers with the data provided within the "staged" buffer descriptor. During normal processing (i.e. all frames except the first frame*), these values will not become active until the currently processed frame completes. As such, there is a one-frame delay between programming and the actual reading of data from memory, hence the term "staged" for this part of the buffer lifecycle.

When the currently active frame completes, the "staged" buffer descriptor is reclassified as "active" in the driver. At this point, a new descriptor is plucked from the pending list, marked as "staged", and its values are programmed into the IP registers as described earlier. The buffer marked "active" represents the data currently being read from memory; other than being held in the "active" state, no other action is taken with it.

When the active frame completes, the buffer is moved to the "done" list. The driver utilizes a tasklet which is invoked at the end of the frame interrupt handler. The tasklet processes any buffer descriptors on the done list by removing them from the list and calling any callback the client has linked to the descriptor.
This completes the lifecycle of a buffer descriptor. As can be seen, with four possible states, it is best to allocate at least four buffers to maintain consistent frame processing. Fewer buffers will result in gaps in the pipeline, causing frame data within a given buffer to be read more than once (depending on how few buffers are queued and the number of resulting gaps in the driver's buffer pipeline).
- Note: normally, registers programmed while the IP is running will not take effect until the next frame. The very first frame, however, is an exception: the IP is not yet running and, as such, the values take effect immediately. Nevertheless, there is no additional special treatment given the first frame buffer. As such, it will be read from, in effect, twice.
Test Approach
Testing the Framebuffer Read driver is best done by incorporating it into a larger design targeting display output. Refer to the test procedure for the Video Mixer.
In particular, run test #6 (change output resolution).
Additionally, run modetest with the -v argument to change the output resolution; this will result in page flipping on the primary plane.
The output frequency reported should be approximately 1/4 that of the current refresh rate. This is because modetest only creates a single framebuffer and the Video Framebuffer driver requires four (4) buffers for optimal operation.