
Accelerating Convolutional Neural Network-Based Hyperspectral Image Classification by Step Activation Quantization


Abstract:

Convolutional neural networks (CNNs) have achieved excellent feature extraction capabilities in remotely sensed hyperspectral image (HSI) classification, owing to their ability to learn representative spatial and spectral features. However, it is difficult for conventional computers to classify HSIs quickly enough for practical use in many applications, mainly because of the large number of calculations and parameters required by deep learning-based methods. Although several weight quantization methods have achieved remarkable results in network compression, the resulting network acceleration is still not significant, because the literature lacks a full exploration of the acceleration potential offered by network weight quantization. In this article, a new step activation quantization method is proposed to constrain the inputs of the CNN's layers so that the data can be represented by low-bit integers. As a result, floating-point operations can be replaced with integer operations, greatly accelerating the forward (inference) step of the network. Specifically, nonlinear uniform quantization is adopted in this work to restrain the input of the CNN in the forward inference of the step activation quantization layer, and two functions (constant and tanh-like) are used in the backpropagation step to avoid gradient vanishing and noise. Our newly proposed step activation quantization acceleration method is applied to a CNN for HSI classification on two well-known benchmark data sets, and the experimental results demonstrate that the proposed method is very effective in terms of both memory savings and computation acceleration, with only a slight decrease in classification accuracy. Specifically, our method reduces memory requirements by 13.6× and obtains around a 10× speedup with respect to the original real-valued network.
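The abstract only sketches the mechanism, so the following is a minimal PyTorch sketch of the general idea, not the authors' implementation: activations are clamped and rounded onto a k-bit uniform grid in the forward pass, while the backward pass substitutes a surrogate gradient, either constant (the straight-through estimator) or tanh-like. The [0, 1] clamping range, the particular tanh-like derivative, and all names here are illustrative assumptions; the paper's exact quantization range and surrogate functions may differ.

    import torch

    class StepActivationQuant(torch.autograd.Function):
        """Step activation quantization sketch: k-bit uniform quantization
        in the forward pass, surrogate gradient in the backward pass."""

        @staticmethod
        def forward(ctx, x, k, surrogate):
            ctx.save_for_backward(x)
            ctx.surrogate = surrogate
            levels = 2 ** k - 1
            x = torch.clamp(x, 0.0, 1.0)             # restrict inputs to [0, 1] (assumed range)
            return torch.round(x * levels) / levels  # snap to the k-bit uniform grid

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            if ctx.surrogate == "constant":
                # Constant surrogate (straight-through): gradient of 1 inside
                # the quantization range, 0 outside, avoiding the step
                # function's zero gradient almost everywhere.
                mask = ((x >= 0.0) & (x <= 1.0)).to(grad_output.dtype)
                grad_input = grad_output * mask
            else:
                # Tanh-like surrogate (illustrative choice): a smooth, bounded
                # derivative that decays away from the active range, damping
                # the gradient noise introduced by the hard steps.
                grad_input = grad_output * (1.0 - torch.tanh(2.0 * x - 1.0) ** 2)
            # One gradient per forward input; k and surrogate need none.
            return grad_input, None, None

    # Usage: quantize activations to 4 bits with the tanh-like backward.
    x = torch.randn(4, 8, requires_grad=True)
    y = StepActivationQuant.apply(x, 4, "tanh")
    y.sum().backward()

Once activations (and weights) are represented as low-bit integers, the multiply-accumulate operations inside the convolutions can be carried out with integer arithmetic instead of floating point, which is the source of the memory savings and inference speedup reported in the abstract.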
Article Sequence Number: 5502012
Date of Publication: 24 February 2021


