Abstract
An emerging trend for improving the power efficiency of neural network computations is to dynamically adapt the network architecture or parameters to different inputs. In particular, many such dynamic network models can emit predictions for 'easy' samples at early exits if a confidence-based criterion is satisfied. Traditional methods for estimating the inference confidence of a monitored neural network, or of its intermediate predictions, include the maximum element of the softmax output (score) and the difference between the largest and second-largest score values (score margin). Such methods rely only on a small, position-agnostic subset of the information available at the output of the monitored neural network classifier. For the first time, this paper reports on the lessons learned while trying to extract confidence information from the whole distribution of the classifier outputs rather than from the top scores only. Our experimental campaign indicates that capturing specific patterns associated with misclassifications is nontrivial, owing to counterintuitive empirical evidence. Rather than disqualifying the approach, this paper calls for further fine-tuning to unlock its potential, and takes a first step toward a systematic assessment of confidence-based criteria for dynamically-adaptive neural network computations.
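The two baseline criteria named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def score(probs):
    # "Score" criterion: confidence is the largest softmax output.
    return float(np.max(probs))

def score_margin(probs):
    # "Score margin" criterion: gap between top-1 and top-2 softmax outputs.
    top2 = np.sort(probs)[-2:]
    return float(top2[1] - top2[0])

def should_exit_early(logits, criterion="score", threshold=0.9):
    # Early-exit decision for a dynamic network: stop inference at this
    # exit if the chosen confidence measure clears the threshold.
    # (threshold=0.9 is an illustrative value, not one from the paper.)
    p = softmax(np.asarray(logits, dtype=np.float64))
    conf = score(p) if criterion == "score" else score_margin(p)
    return conf >= threshold
```

Both criteria discard everything about the output distribution except its top one or two entries; the approach explored in this paper instead looks at the whole distribution.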
Cite this paper
Dall’Occo, F., Bueno-Crespo, A., Abellán, J.L., Bertozzi, D., Favalli, M. (2022). The Challenge of Classification Confidence Estimation in Dynamically-Adaptive Neural Networks. In: Orailoglu, A., Jung, M., Reichenbach, M. (eds) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2021. Lecture Notes in Computer Science, vol 13227. Springer, Cham. https://doi.org/10.1007/978-3-031-04580-6_34
Print ISBN: 978-3-031-04579-0
Online ISBN: 978-3-031-04580-6