
PBN: Progressive Batch Normalization for DNN Training on Edge Device



Abstract:

Batch normalization (BN) plays a critical role in training deep neural networks (DNNs) on energy-limited edge devices, since it accelerates the convergence of DNN training. However, the statistical operations and data dependencies within BN complicate efficient BN hardware design, introducing complex computation and repeated data accesses. This paper presents PBN, a progressive batch normalization approach that decouples the statistical calculation from the normalization process within BN to address these challenges. PBN achieves competitive accuracy and convergence speed when evaluated on classical DNN models and datasets. Furthermore, a PBN hardware module that supports both forward and backward propagation in DNN training is designed and implemented on a Xilinx ZCU102 field-programmable gate array (FPGA). Experimental results indicate that PBN reduces external memory access (EMA) by an average of 80%, while achieving a 2.85× speedup over conventional BN.
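To make the decoupling idea concrete, the following is a minimal Python/NumPy sketch of one plausible reading of the abstract: the current batch is normalized with statistics accumulated from earlier batches, so normalization no longer waits on a reduction over the same batch. The class name ProgressiveBatchNorm, the momentum-based update rule, and all parameter values are illustrative assumptions; the paper's exact update scheme is not given in the abstract.

```python
import numpy as np

class ProgressiveBatchNorm:
    """Illustrative sketch (not the paper's exact algorithm): normalize
    the current batch with statistics accumulated from *previous* batches,
    decoupling the statistical calculation from the normalization step."""

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.gamma = np.ones(num_features)    # learnable scale
        self.beta = np.zeros(num_features)    # learnable shift
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.momentum = momentum              # assumed update rule
        self.eps = eps

    def forward(self, x):
        # Normalize with previously accumulated statistics, so this batch's
        # normalization has no data dependency on its own batch-wide
        # reduction -- the decoupling the abstract describes.
        x_hat = (x - self.running_mean) / np.sqrt(self.running_var + self.eps)
        out = self.gamma * x_hat + self.beta

        # The current batch's statistics are computed independently and
        # folded into the running estimates for use by *later* batches.
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        self.running_mean = (1 - self.momentum) * self.running_mean \
                            + self.momentum * batch_mean
        self.running_var = (1 - self.momentum) * self.running_var \
                           + self.momentum * batch_var
        return out

# Usage: each batch is normalized using statistics from earlier batches.
bn = ProgressiveBatchNorm(num_features=64)
x = np.random.randn(32, 64).astype(np.float32)
y = bn.forward(x)
```

Under this reading, a hardware pipeline can stream activations through the normalization datapath in a single pass instead of accessing them twice (once for the statistics reduction, once for normalization), which is consistent with the EMA reduction the abstract reports.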
Date of Conference: 19-22 May 2024
Date Added to IEEE Xplore: 02 July 2024
Conference Location: Singapore, Singapore

