Abstract:
Convolutional neural network (CNN)-based methods have been widely used in remote sensing scene classification. However, the dense computation and large memory footprint of state-of-the-art models hinder their deployment on low-power embedded devices. In this letter, we propose a mixed-precision quantization method that compresses the model size without accuracy degradation. Within this method, we propose a symmetric nonlinear quantization scheme to reduce the quantization error, together with a three-step training strategy that improves the performance of the quantized network. Finally, building on the proposed scheme and training strategy, we propose a neural architecture search (NAS)-based quantization bit-width search (NQBS) method, which automatically selects a bit width for each quantized layer to obtain a mixed-precision network with an optimal model size. We apply the proposed method to the ResNet-34 and SqueezeNet networks and evaluate the quantized networks on the NWPU-RESISC45 data set. The experimental results show that the mixed-precision quantized networks obtained with the proposed method strike a satisfying tradeoff between classification accuracy and model size.
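To make the core idea concrete, the sketch below illustrates symmetric weight quantization at a configurable bit width. Note that this is a generic symmetric *uniform* quantizer for illustration only; the letter's actual scheme is nonlinear, and the function name and parameters here are hypothetical, not taken from the paper.

```python
import numpy as np

def symmetric_quantize(w, bits):
    """Symmetric uniform quantizer (illustrative stand-in).

    Maps weights onto 2^(bits-1) - 1 evenly spaced levels on each
    side of zero, so the quantization grid is symmetric about 0.
    The paper's scheme is nonlinear; this linear version only shows
    the bit-width / error tradeoff that motivates mixed precision.
    """
    qmax = 2 ** (bits - 1) - 1            # largest integer level
    scale = np.max(np.abs(w)) / qmax      # step size from weight range
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                      # dequantized values

np.random.seed(0)
w = np.random.randn(64).astype(np.float32)
w8 = symmetric_quantize(w, 8)  # fine grid, small error
w4 = symmetric_quantize(w, 4)  # coarse grid, larger error
```

Lowering the bit width shrinks the stored model but enlarges the quantization error, which is why a per-layer bit-width search such as NQBS is useful: layers that tolerate coarse quantization get few bits, sensitive layers keep more.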
Published in: IEEE Geoscience and Remote Sensing Letters (Volume: 18, Issue: 10, October 2021)