Abstract
We propose a compact and effective network layer, the Rotational Duplicate Layer (RDLayer), which takes the place of a regular convolution layer and yields up to 128\(\times \) memory savings. Along with network accuracy, memory and power constraints affect design choices for computer vision tasks performed on resource-limited devices such as FPGAs (Field Programmable Gate Arrays). To cope with this limited memory, RDLayers are trained such that the whole layer's parameters are obtained by duplicating and rotating a smaller learned kernel. Additionally, we speed up the forward pass via a partial decompression methodology for data compressed with JPEG 2000 (Joint Photographic Experts Group 2000). Our experiments on remote sensing scene classification show that our network achieves a \(\sim \)4\(\times \) reduction in model size at the cost of a \(\sim \)4.5\(\%\) drop in accuracy, and a \(\sim \)27\(\times \) reduction at the cost of a \(\sim \)10\(\%\) drop in accuracy, along with \(\sim \)2.6\(\times \) faster evaluation time on test samples.
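The parameter-sharing idea can be illustrated with a minimal sketch: only a small base kernel is stored and trained, and the full convolution weight is assembled at forward time from duplicated, 90\(^\circ \)-rotated copies of it. This is not the authors' implementation; the layer name RDConv2d, the PyTorch framing, and the specific choice of four rotation copies are assumptions made purely for illustration of the memory-saving principle.

```python
# Hypothetical sketch of rotational duplication (not the paper's exact RDLayer):
# a small base kernel is learned, and the full convolution weight is built
# from 90-degree rotated copies of it, so only 1/4 of the filters are stored.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RDConv2d(nn.Module):
    """Convolution whose weights are duplicated/rotated copies of a small learned kernel."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        assert out_channels % 4 == 0, "out_channels must be divisible by the 4 rotations"
        # Only out_channels // 4 filters are actually stored and trained.
        self.base = nn.Parameter(
            torch.randn(out_channels // 4, in_channels, kernel_size, kernel_size) * 0.01
        )
        self.padding = padding

    def forward(self, x):
        # Assemble the full weight tensor from the base kernel and its
        # 90/180/270-degree rotations in the spatial dimensions.
        weight = torch.cat(
            [torch.rot90(self.base, k=r, dims=(-2, -1)) for r in range(4)], dim=0
        )
        return F.conv2d(x, weight, padding=self.padding)


if __name__ == "__main__":
    layer = RDConv2d(in_channels=3, out_channels=64)
    y = layer(torch.randn(1, 3, 224, 224))
    print(y.shape)  # torch.Size([1, 64, 224, 224])
    # Stored parameters: 16*3*3*3 = 432 instead of 64*3*3*3 = 1728, i.e. a 4x saving.
```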
Acknowledgment
We would like to thank our colleagues from ASELSAN for their support.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Akkul, E.S., Arıcan, B., Töreyin, B.U. (2021). Bi-RDNet: Performance Enhancement for Remote Sensing Scene Classification with Rotational Duplicate Layers. In: Wojtkiewicz, K., Treur, J., Pimenidis, E., Maleszka, M. (eds) Advances in Computational Collective Intelligence. ICCCI 2021. Communications in Computer and Information Science, vol 1463. Springer, Cham. https://doi.org/10.1007/978-3-030-88113-9_54
DOI: https://doi.org/10.1007/978-3-030-88113-9_54
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88112-2
Online ISBN: 978-3-030-88113-9