Invited: Algorithm-Software-Hardware Co-Design for Deep Learning Acceleration



Abstract:

With the rapid development of AI techniques, it is appealing but challenging to deploy deep neural networks efficiently on resource-constrained devices. This paper presents two novel algorithm-software-hardware co-designs that improve the performance of deep neural networks. The first introduces a hardware-efficient adaptive token-pruning framework for Vision Transformers (ViTs) on FPGA, which achieves significant speedup at similar model accuracy. The second introduces a design-automation flow for crossbar-based Binary Neural Network (BNN) accelerators built on the emerging Adiabatic Quantum-Flux-Parametron (AQFP) technology. By combining AQFP with BNNs, the proposed method achieves over 100× better energy efficiency than the previous representative AQFP-based framework. Both designs demonstrate superior performance compared with existing methods.
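To illustrate the first idea, here is a minimal sketch of attention-score-based token pruning for a ViT layer. This is a hypothetical scoring rule (keep the patch tokens that receive the most attention from the [CLS] token), not the paper's exact adaptive criterion; the function and parameter names are assumptions for illustration.

```python
import numpy as np

def prune_tokens(tokens, attn_probs, keep_ratio=0.7):
    """Token-pruning sketch (hypothetical rule, not the paper's exact method):
    score each patch token by the mean attention it receives from the [CLS]
    token across heads, and keep the top-scoring fraction."""
    # attn_probs: (num_heads, seq_len, seq_len); tokens: (seq_len, dim)
    cls_attn = attn_probs[:, 0, 1:].mean(axis=0)    # CLS -> patch attention, head-averaged
    num_keep = max(1, int(keep_ratio * cls_attn.size))
    keep = np.argsort(cls_attn)[::-1][:num_keep]    # indices of highest-scoring tokens
    keep = np.sort(keep) + 1                        # restore order; +1 skips the CLS slot
    return np.concatenate([tokens[:1], tokens[keep]], axis=0)

rng = np.random.default_rng(0)
seq, dim, heads = 197, 64, 4                        # ViT-Base-style sequence: CLS + 196 patches
tokens = rng.standard_normal((seq, dim))
attn = rng.random((heads, seq, seq))
attn /= attn.sum(axis=-1, keepdims=True)            # row-normalize like a softmax output
pruned = prune_tokens(tokens, attn, keep_ratio=0.5)
print(pruned.shape)  # (99, 64): CLS + 98 kept patch tokens
```

Dropping half the tokens shrinks the quadratic attention cost in later layers, which is the source of the speedup the abstract reports; the hardware-efficient part of the paper's framework concerns mapping this dynamic selection onto an FPGA.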
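For the second idea, the reason BNNs pair well with ultra-low-power logic such as AQFP is that a binarized dot product reduces to XNOR plus popcount. A minimal sketch of that identity (packing +1/-1 values as 1/0 bits; the helper name is an assumption, not from the paper):

```python
def bnn_dot(a_bits, w_bits, n):
    """Binary dot product via XNOR + popcount: with +1/-1 encoded as bits
    (1 -> +1, 0 -> -1), dot(a, w) = 2 * popcount(XNOR(a, w)) - n."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)   # 1 where the signs match
    return 2 * bin(xnor).count("1") - n

# a = (+1, -1, +1, +1) and w = (+1, +1, -1, +1), packed MSB-first
print(bnn_dot(0b1011, 0b1101, 4))  # 0, matching (+1) + (-1) + (-1) + (+1)
```

Each multiply-accumulate becomes a single-gate XNOR followed by a count, which is what lets a crossbar of such cells exploit AQFP's near-adiabatic switching energy.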
Date of Conference: 09-13 July 2023
Date Added to IEEE Xplore: 15 September 2023
Conference Location: San Francisco, CA, USA
