This page provides an introduction to the "Accelerated Image Classification via Binary Neural Network" (AIC) design example.
This design example demonstrates how neural networks implemented in software can be dramatically accelerated by moving them into Programmable Logic. In this design a Binary Neural Network (BNN) is implemented. Depending on the silicon platform, an acceleration of 6,000 to 8,000 times is demonstrated. Via the graphical user interface the user can view metrics, images, and classification results.
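The reason BNNs map so well onto Programmable Logic is that, with weights and activations constrained to ±1, a multiply-accumulate collapses into XNOR plus popcount, which are cheap bitwise operations in FPGA fabric. The following is a minimal illustrative sketch of that idea in Python (not the actual MLE/Xilinx implementation; the function names are made up for this example):

```python
# Illustrative sketch: a binarized dot product computed as XNOR + popcount.
# Encoding convention: bit 1 represents +1, bit 0 represents -1.

def binarize(vec):
    """Pack a +/-1 vector into an integer bitmask, one bit per element."""
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i
    return bits

def bnn_dot(a_bits, w_bits, n):
    """Binary dot product: popcount(XNOR(a, w)), mapped back to +/-1 math."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # bit set where signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agreements minus disagreements

a = [+1, -1, +1, +1]
w = [+1, +1, +1, -1]
print(bnn_dot(binarize(a), binarize(w), len(a)))  # equals sum(x*y), here 0
```

In hardware, the XNOR and popcount of an entire wide weight row happen in parallel every clock cycle, which is where the large speedup over a sequential software implementation comes from.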

This work builds on work by the Xilinx Research Lab. More information can be found in the following publications:
Inference of quantized neural networks on heterogeneous all-programmable devices (DATE 2018)
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference (FPGA 2017)
Scaling Binarized Neural Networks on Reconfigurable Logic (PARMA-DITAM 2017)
Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic (ICCD 2017)

You can also check out the following repositories:
https://github.com/Xilinx/BNN-PYNQ
https://github.com/Xilinx/QNN-MO-PYNQ


For any questions, please contact Missing Link Electronics (MLE).



The AIC Demo is available for the following platforms:

Board    Device    Revision
ZCU102   XCZU9EG   Rev D2, Rev 1.0, Rev 1.1
Ultra96  XCZU3EG   V1


Document History

Date        Version  Author                 Description of Revisions
2018-03-26  V0.1     Andreas Schuler (MLE)  Initial document
2018-04-30  V1.0     Andreas Schuler (MLE)  First release
2018-05-03  V1.1     Andreas Schuler (MLE)  Add reference to Xilinx Research Lab
2018-12-14  V1.1     Andreas Schuler (MLE)  Update document to latest changes

...