Zynq Linux
This page provides general information about running Linux on the Zynq-7000 SoC.
Introduction
Many pieces come together to boot Linux successfully on Zynq. Information about the Zynq Processing System (PS), the design and layout of your board and the function of your custom hardware must be combined in order for your product to function properly. This section will help you understand the pieces and how they fit together.
The Zynq boot process begins with code running in the Boot ROM. The Boot ROM manages the early boot process by selecting the boot medium and quickly loading the First Stage Boot Loader (FSBL). The FSBL performs important early system initialization, such as configuring the DDR controller (accounting for the DDR trace lengths of the PCB), setting the PLL coefficients, and more. The FSBL is created by Xilinx tools using information from your hardware project.
The FSBL is the hand-off point between Xilinx's code and yours. What happens next depends on the type of software your system needs. If your device runs Linux, the next step is most likely loading U-Boot.
Complete information on the booting of Zynq can be found in the Technical Reference Manual.
The Xilinx Zynq Linux kernel is based on the Linux kernel from kernel.org together with Xilinx additions (BSP and drivers). It is updated regularly to stay close to the latest version from kernel.org. In general, the Xilinx Linux kernel for Zynq follows the normal ARM Linux processes for building and running.
Xilinx Zynq Linux Strategy
Xilinx strives to push code from the Xilinx Git server to the open source community (kernel.org, U-Boot, etc.). Because pushing code upstream takes time, the Xilinx Git server contains the latest code base and is recommended for customer use. A minimal BSP for Zynq has been pushed to the mainline (kernel.org) and is available there. At this time, Xilinx only supports Linux from the Xilinx Git server.
Xilinx Zynq Linux Support
Xilinx Zynq Linux is based on open source software (the kernel from kernel.org). Xilinx provides support for the Xilinx-specific parts of the Linux kernel (drivers and BSP). Xilinx also supports Linux through the Embedded Linux forum on http://forums.xilinx.com. As with many open source projects, Xilinx expects customers to also use the open source Linux mailing lists for areas that are not specific to Xilinx Zynq.
Using a Pre-Built Image/Release
Xilinx provides pre-built Linux releases which can be used in place of building the kernel and creating a boot image. These can be found here: Zynq Releases
Zynq Linux Kernel
Assumptions
Building the Linux kernel is only supported from a Linux-based host; it is not supported from Windows. The steps to build the kernel and boot image are described in Getting Started.
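As a minimal sketch of what that build typically looks like (this assumes the Xilinx kernel tree provides the xilinx_zynq_defconfig configuration and that an arm-xilinx-linux-gnueabi- cross toolchain is on the PATH; refer to Getting Started for the authoritative steps):
make ARCH=arm xilinx_zynq_defconfig
make ARCH=arm CROSS_COMPILE=arm-xilinx-linux-gnueabi- UIMAGE_LOADADDR=0x8000 uImage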
Kernel Details
The Board Support Package (BSP)
The primary code for the platform is in the arch/arm/mach-zynq directory of the kernel tree. The BSP contains some drivers and utilizes some existing drivers from arch/arm.
Hardware Block | Driver Name | Notes
GIC | gic.c | in arch/arm/common
PL330 | pl330.c | a different driver now exists in kernel.org
PS2 | ps2.c | support for keyboard and mouse in QEMU
SCU Global Timer | scu_gtimer.c | provides primitive abilities only
Device Drivers
The Xilinx Linux kernel includes drivers for each hardware block. The Linux Drivers page provides more details.
Device Tree
Device tree is a mechanism by which the Linux kernel initializes itself for the hardware platform it runs on. It allows a single kernel image to run on multiple hardware platforms. A device tree source file, named *.dts, is a text file that describes the hardware platform. It is compiled into a device tree blob, *.dtb, which is loaded into memory before the Linux kernel is started. The Linux kernel then uses that device tree blob to initialize itself at runtime. The process to create a device tree source (.dts) file, and to compile a device tree blob (.dtb) from it, is described in Build Device Tree Blob.
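As a small sketch of the compilation step (the source file name zynq-board.dts is only illustrative; the dtc tool is available as a standalone package and also ships with the kernel under scripts/dtc):
dtc -I dts -O dtb -o devicetree.dtb zynq-board.dts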
Device Tree In Memory
The industry-standard method is for the boot loader to pass the address of the device tree blob in register R2 when the kernel boots. Starting in 14.3 this is the default kernel configuration, but it was not previously.
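For example, when booting with U-Boot, the device tree blob address is given as the third argument to bootm, and U-Boot places it in R2 before jumping to the kernel (the prompt and the example load addresses for the kernel, ramdisk, and device tree blob are only illustrative):
u-boot> bootm 0x3000000 0x2000000 0x2A00000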
Device Tree Generator
Xilinx SDK supports the Device Tree Generator for Zynq. For Linux kernel 3.3 and later, the device tree syntax for denoting interrupts has changed. To calculate the correct value in these releases, use Table 7-3 in the Zynq-7000 AP SoC TRM to locate the SPI ID# for the desired peripheral, then subtract 32 from that value. The result is what should be entered into the device tree interrupt field. Example: from Table 7-3, UART 1 has an SPI ID# of 82. Subtracting 32 yields 50, which is the value found in the interrupt field for the UART 1 peripheral in the device tree.
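A sketch of what the corresponding node might look like in the device tree source (the node name, compatible string, and addresses follow common Zynq conventions but should be taken from your generated device tree; in the interrupts property the first cell selects a shared peripheral interrupt, the second is the SPI ID# minus 32, and the third is the trigger type):
uart1: serial@e0001000 {
        compatible = "xlnx,xuartps";
        reg = <0xe0001000 0x1000>;
        interrupts = <0 50 4>;
};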
Kernel Command Line
The kernel has two ways to get its command line. The default kernel configuration causes the kernel to get the command line from the device tree. There is also a kernel configuration option that causes the command line from the kernel configuration to always be used, ignoring the command line string in the device tree. From the main menu of the kernel configuration, select Boot Options, followed by Always use the default kernel command string.
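As an illustration, the command line normally lives in the chosen node of the device tree source; the exact arguments below are only an example and depend on your console and root file system setup:
chosen {
        bootargs = "console=ttyPS0,115200 root=/dev/ram rw earlyprintk";
};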
SMP
By default the kernel is configured to use both CPUs in SMP mode. The following command shows the CPU information and should indicate two CPUs, numbered 0 and 1. By default, the kernel handles all interrupts on the first CPU; they can be pushed to the second CPU (see the example after the command below), and there are user-space daemons that will do interrupt load balancing.
zynq> cat /proc/cpuinfo
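For example, a specific interrupt can be moved to the second CPU by writing a CPU mask to its smp_affinity file. The IRQ number 61 below is purely illustrative; check /proc/interrupts for the actual numbers on your system:
zynq> cat /proc/interrupts
zynq> echo 2 > /proc/irq/61/smp_affinity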
Using A Single CPU
A user may cause the kernel to use only one CPU by adding "maxcpus=1" to the kernel command line in the device tree.
Specifying A CPU
A user may cause an application to run on a specific CPU using the BusyBox utility taskset. The following command displays the help information for the taskset command.
zynq> taskset --help
The following command starts the top application running on the 2nd CPU (the argument is a CPU affinity mask, so 2 = binary 10 selects CPU 1).
zynq> taskset 2 top
Ramdisk Image and Pre-built images
The Build and Modify a Rootfs page details the process of building a RAM disk image for Zynq and includes pre-built images.
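A sketch of the final wrapping step, which gives the ramdisk a U-Boot header (this assumes a gzip-compressed ramdisk image named ramdisk.image.gz and the U-Boot mkimage tool; see Build and Modify a Rootfs for the full procedure):
mkimage -A arm -T ramdisk -C gzip -d ramdisk.image.gz uramdisk.image.gz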
Prebuilt Ramdisk Contents
The pre-built ramdisk image includes the following applications.
Application Name | Source | Details
Telnet | BusyBox | there is no login, a shell is started
SSH | Dropbear | user: root, password: root
Web server | BusyBox |
FTP | BusyBox |
GDB Server | GNU Tools |