Known Issues

This page contains known issues and bugs with QEMU, and their solutions.



First stage bootloader (FSBL) hangs on QEMU

There are a few situations where the FSBL can hang in QEMU.

One is during DDR controller initialization in xfsbl_initialization.c.
The DDR controller is not fully modeled in QEMU, so the FSBL can hang when the controller does not respond as expected.

Another arises because the FSBL uses psu_init.c, dynamically generated code that changes according to the design.
The psu_init functions mostly configure clocks for the SoC, which QEMU does not emulate, so psu_init calls can sometimes hang during FSBL boot.

For more information on building and customizing the FSBL, visit the Zynq UltraScale+ FSBL page.

Solution

Build a customized FSBL by commenting out the code that causes the hangs in psu_init.c or xfsbl_initialization.c, as shown in the excerpts below.

For PetaLinux 2018.3:

psu_init.c
unsigned long psu_ddr_phybringup_data(void)
{
    unsigned int regval = 0;
    unsigned int pll_retry = 10;
    unsigned int pll_locked = 0;

    while ((pll_retry > 0) && (!pll_locked)) {
        Xil_Out32(0xFD080004, 0x00040010); /* PIR */
        Xil_Out32(0xFD080004, 0x00040011); /* PIR */

        while ((Xil_In32(0xFD080030) & 0x1) != 1) {
            /* TODO: a timeout poll mechanism needs to be inserted in this block */
        }

        pll_locked = (Xil_In32(0xFD080030) & 0x80000000)
            >> 31; /* PGSR0 */
        /* The per-lane PHY lock checks below are commented out, since QEMU
         * does not model this DDR PHY status and waiting on it hangs the FSBL: */
        //pll_locked &= (Xil_In32(0xFD0807E0) & 0x10000)
        //>> 16; /* DX0GSR0 */
        //pll_locked &= (Xil_In32(0xFD0809E0) & 0x10000)
        //>> 16; /* DX2GSR0 */
        //pll_locked &= (Xil_In32(0xFD080BE0) & 0x10000)
        //>> 16; /* DX4GSR0 */
        //pll_locked &= (Xil_In32(0xFD080DE0) & 0x10000)
        //>> 16; /* DX6GSR0 */
        pll_retry--;
    }
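
The inner loop above polls PGSR0 indefinitely, and the TODO notes that a timeout is still missing; under QEMU an unbounded poll like this is exactly the kind of wait that can hang forever. The following is a minimal sketch of a bounded poll, assuming Xil_In32() from xil_io.h; the helper name, retry budget, and return convention are illustrative and are not part of the generated psu_init.c.

#include "xil_io.h"

/* Illustrative only: bounded poll of PGSR0 (0xFD080030) bit 0 so the FSBL
 * gives up instead of hanging. The retry budget is an arbitrary assumption. */
static int poll_pgsr0_with_timeout(void)
{
    unsigned int poll_retry = 100000U;

    while (((Xil_In32(0xFD080030) & 0x1U) != 1U) && (poll_retry > 0U)) {
        poll_retry--;
    }

    return (poll_retry == 0U) ? -1 : 0; /* -1 on timeout, 0 on success */
}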

For PetaLinux 2019.1 and later:

xfsbl_initialization.c
#ifdef XFSBL_PS_DDR
#ifdef XPAR_DYNAMIC_DDR_ENABLED
	/*
	 * This function is used for all the ZynqMP boards.
	 * This function initialize the DDR by fetching the SPD data from
	 * EEPROM. This function will determine the type of the DDR and decode
	 * the SPD structure accordingly. The SPD data is used to calculate the
	 * register values of DDR controller and DDR PHY.
	 */
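	/*
	 * On QEMU the DDR controller is not fully modeled, so the dynamic DDR
	 * initialization below is commented out to keep the FSBL from hanging.
	 */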
//	Status = XFsbl_DdrInit();
//	if (XFSBL_SUCCESS != Status) {
//		XFsbl_Printf(DEBUG_GENERAL,"XFSBL_DDR_INIT_FAILED\n\r");
//		goto END;
//	}
#endif
#endif

Unable to see ARM-R5 CPUs on Zynq UltraScale+ MPSoC and Versal Adaptive SoC platforms with XSDB on 2020.1 QEMU

The 2020.1 QEMU does not report processor information to XSDB, so XSDB does not know that these platforms have R5 cores.

Solution

Use the directions and patches found on this page to patch QEMU and XSDB.

TFTP Put Fails on QEMU

The TFTP put command is not supported in mainline or Xilinx QEMU for security reasons.

Solution

Use SCP or another file transfer protocol instead.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75613

When using XSDB, my watchpoint was hit, but XSDB doesn't say so and my program is stopped

Solution

Delete or disable the watchpoint that was hit, and then unlock the CPUs by using the con command.
If you're not sure which watchpoint was hit, delete or disable all of them.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75621

When using XSDB, my program is stopped in QEMU, but XSDB says my CPUs are running

Solution

This only happens when using watchpoints. If it does happen, exit QEMU with CTRL+A X and then restart it.
To avoid it, do not use watchpoints when debugging with XSDB.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75614

When using a GDB remote connection to debug my program on QEMU, my program segfaults and GDB does not catch it

QEMU's GDB server does not support catching the SIGSEGV signal at this time.
The GDB server can only catch SIGINT and SIGTRAP.

Solution

If possible, run GDB natively on the QEMU guest and debug your application there.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75615

Incorrect dummy cycle count with Quad IO Read (0xEB) command with Micron Flashes

On hardware, Micron flashes expect 10 dummy cycles for a Quad IO Read (QIOR) command, but QEMU only expects 8.

Solution

When running on QEMU, use 8 dummy cycles for the QIOR command on Micron flashes instead of 10.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75600

Incorrect dummy cycles sent when using GQSPI dual or quad mode byte transfer instead of CS hold time

When using GQSPI, it is possible to use a byte transfer to send dummy cycles instead of using CS hold time.

QEMU does not emulate the link state for GQSPI commands, so when 1 byte is sent in quad or dual mode, QEMU sends 1 cycle to the flash instead of 2 or 4 cycles, respectively.

Solution

Use CS hold time to generate the number of dummy cycles your command needs instead of transferring bytes.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75599

Versal Adaptive SoC LPD XPPU is not controlling APU accesses to TCM

On hardware, the LPD XPPU can control accesses to TCM; however, this behavior is not implemented in QEMU.
This is due to how the XPPU is modeled in QEMU and the possibility of the LPD XPPU blocking APU and RPU accesses to TCM.

Solution

There is no known workaround for this at the moment.

AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75684

