Abstract
With the rapid development of deep learning, an increasing number of cloud and edge applications rely on large DNN (Deep Neural Network) models to improve both task execution efficiency and decision-making quality. Due to memory constraints, these models are commonly optimized with compression, pruning, and partitioning algorithms so that they can be deployed on resource-constrained devices. As conditions on the computational platform change dynamically, the deployed optimization algorithms must adapt their solutions accordingly. To evaluate these solutions frequently and in a timely fashion, RMs (Regression Models) are commonly trained to predict the relevant solution-quality metrics, such as the resulting DNN module inference latency, which is the focus of this paper. Existing prediction frameworks specify different RM training workflows, but none of them allows flexible configuration of the input parameters (e.g., batch size, device utilization rate) or of the RMs selected for different modules. This paper proposes a deep learning module inference latency prediction framework that i) hosts a set of customizable input parameters to train multiple different RMs per DNN module (e.g., convolutional layer) on self-generated datasets, and ii) automatically selects a set of trained RMs that yields the highest possible overall prediction accuracy while keeping prediction time/space consumption as low as possible. Furthermore, a new RM, namely MEDN (Multi-task Encoder-Decoder Network), is proposed as an alternative solution. Comprehensive experimental results show that MEDN is fast and lightweight, and capable of achieving the highest overall prediction accuracy and R-squared value. The Time/Space-efficient Auto-selection algorithm also manages to improve the overall accuracy by 2.5% and R-squared by 0.39%, compared to the MEDN single-selection scheme.
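To make the MEDN idea concrete, the following is a minimal, hypothetical PyTorch sketch of a multi-task encoder-decoder latency predictor. The abstract does not specify MEDN's architecture, so the layer sizes, the auxiliary reconstruction task, and the loss weighting below are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

class MEDNSketch(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, latent: int = 16):
        super().__init__()
        # Encoder: maps module/configuration features (e.g., batch size,
        # kernel size, device utilization rate) to a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        # Decoder head (auxiliary task): reconstructs the input features,
        # a common way to regularize the shared latent representation.
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )
        # Regression head (main task): predicts module inference latency.
        self.latency_head = nn.Linear(latent, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.latency_head(z), self.decoder(z)

# Joint multi-task loss: latency regression plus input reconstruction.
model = MEDNSketch(n_features=8)
x = torch.randn(32, 8)   # 32 sampled module configurations
y = torch.rand(32, 1)    # measured latencies for those configurations
pred, recon = model(x)
loss = nn.functional.mse_loss(pred, y) + 0.1 * nn.functional.mse_loss(recon, x)
loss.backward()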
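Similarly, the abstract does not describe the internals of the Time/Space-efficient Auto-selection algorithm; the sketch below only illustrates the general idea as stated: for each DNN module, pick the candidate RM with the best validation accuracy, preferring cheaper prediction time/space among near-ties. The tolerance parameter and candidate names are hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # e.g., "MEDN", "MLP" (illustrative labels)
    accuracy: float    # validation accuracy on the self-generated dataset
    cost: float        # combined prediction time/space cost (lower is better)

def select_rm(candidates: list[Candidate], tol: float = 0.005) -> Candidate:
    """Pick the most accurate RM; among candidates within `tol` of the
    best accuracy, prefer the one with the lowest time/space cost."""
    best_acc = max(c.accuracy for c in candidates)
    near_best = [c for c in candidates if best_acc - c.accuracy <= tol]
    return min(near_best, key=lambda c: c.cost)

# Example: one selection per DNN module (all values are made up).
per_module = {
    "conv1": [Candidate("MEDN", 0.93, 1.0), Candidate("MLP", 0.928, 0.4)],
    "fc1":   [Candidate("MEDN", 0.90, 1.0), Candidate("LinReg", 0.84, 0.1)],
}
chosen = {module: select_rm(cands) for module, cands in per_module.items()}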
Acknowledgements
This research was supported by: Shenzhen Science and Technology Program, China (No. GJHZ20210705141807022); Guangdong Province Innovative and Entrepreneurial Team Programme, China (No. 2017ZT07X386); SUSTech Research Institute for Trustworthy Autonomous Systems, China. Corresponding author: Georgios Theodoropoulos.
Copyright information
© 2024 IFIP International Federation for Information Processing
About this paper
Cite this paper
Shen, J., Tziritas, N., Theodoropoulos, G. (2024). Towards a Flexible Accuracy-Oriented Deep Learning Module Inference Latency Prediction Framework for Adaptive Optimization Algorithms. In: Shi, Z., Torresen, J., Yang, S. (eds) Intelligent Information Processing XII. IIP 2024. IFIP Advances in Information and Communication Technology, vol 703. Springer, Cham. https://doi.org/10.1007/978-3-031-57808-3_3
DOI: https://doi.org/10.1007/978-3-031-57808-3_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-57807-6
Online ISBN: 978-3-031-57808-3