Known Issues
This page contains known issues and bugs with QEMU, and their solutions.
First stage bootloader (FSBL) hangs on QEMU
There are a few situations where the FSBL can hang in QEMU.
One occurs when initializing the DDR controller in xfsbl_initialization.c. The DDR controller is not fully modeled in QEMU, so the FSBL hangs when the controller does not behave as expected.
Another occurs because the FSBL uses psu_init.c, dynamically generated code that changes with the hardware design. psu_init functions generally configure clocks for the SoC, which QEMU does not emulate. Because of this missing emulation, some psu_init calls can hang during FSBL boot.
For more information on building and customizing the FSBL, visit the Zynq UltraScale+ FSBL page.
Solution
Build a customized FSBL by commenting out the functions that cause the hangs in psu_init.c or xfsbl_initialization.c.
The rebuild steps differ between PetaLinux 2018.3 and PetaLinux 2019.1 and later; see the Zynq UltraScale+ FSBL page for the version-specific instructions.
Unable to see ARM-R5 CPUs on Zynq UltraScale+ MPSoC and Versal Adaptive SoC platforms with XSDB on 2020.1 QEMU
2020.1 QEMU does not report processor information to XSDB, so XSDB does not know that these platforms have R5 cores.
Solution
Use the directions and patches found on this page to patch QEMU and XSDB.
TFTP Put Fails on QEMU
The TFTP put command is not supported in mainline or Xilinx QEMU for security reasons.
Solution
Use SCP or another protocol.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75613
When using XSDB, my watchpoint was hit, but XSDB doesn't say so and my program is stopped
Solution
Delete or disable the watchpoint that was hit, and then unlock the CPUs with the con command. If you're not sure which watchpoint was hit, delete or disable all of them.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75621
When using XSDB, my program is stopped in QEMU, but XSDB says my CPUs are running
Solution
This only happens when using watchpoints. If it does happen, exit QEMU with Ctrl+A, X and then restart it. To prevent it, avoid using watchpoints when debugging with XSDB.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75614
When using a GDB remote connection to debug my program on QEMU, my program segfaults and GDB does not catch it
QEMU's GDB server does not support catching the SIGSEGV signal at this time.
The GDB server can only catch SIGINT and SIGTRAP.
Solution
If possible, run GDB on the QEMU guest and debug your application using GDB on the guest.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75615
Incorrect dummy cycle count with Quad IO Read (0xEB) command with Micron Flashes
On hardware, Micron flashes expect 10 dummy cycles for a Quad IO Read (QIOR) command, but QEMU only expects 8.
Solution
Use 8 dummy cycles for a QIOR command on Micron flashes instead of 10.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75600
Incorrect dummy cycles sent when using GQSPI dual or quad mode byte transfer instead of CS hold time
When using GQSPI, it is possible to use a byte transfer to send dummy cycles instead of using CS hold time.
QEMU does not emulate link state for GQSPI commands, so when sending one byte in quad or dual mode, QEMU sends 1 cycle to the flash instead of 2 or 4, respectively.
Solution
Use CS hold time to generate the dummy cycles your command needs instead of transferring bytes.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75599
Versal Adaptive SoC LPD XPPU does not control APU accesses to TCM
On hardware, the LPD XPPU can control accesses to TCM; however, this behavior is not implemented in QEMU.
This is due to how the XPPU is implemented in QEMU, and the possibility of the LPD XPPU blocking APU and RPU accesses to TCM.
Solution
There is no known workaround for this at the moment.
AR link: https://xilinx.sharepoint.com/sites/XKB/SitePages/Articleviewer.aspx?ArticleNumber=75684
© Copyright 2019 - 2022 Xilinx Inc.