Abstract
In deep neural networks, adding layers and parameters generally improves model accuracy, but it also makes the models larger. Such large models have high computational complexity and large memory requirements, which exceed the capacity of small devices for inference. Knowledge distillation is an efficient approach to compress a large deep model (the teacher model) into a compact model (the student model). Existing online knowledge distillation methods typically exploit an extra data storage layer to store the knowledge, or deploy the teacher model and the student model on the same computing resources, which hurts elasticity and fault tolerance. In this paper, we propose an elastic deep learning framework, EDL-Dist, for large-scale knowledge distillation that efficiently trains the student model while exploiting elastic computing resources. The advantages of EDL-Dist are three-fold. First, it decouples inference from training so that heterogeneous computing resources can be used. Second, it can exploit dynamically available computing resources. Third, it supports fault tolerance during both the training and inference processes of knowledge distillation. Our experimental validation, based on an industrial-strength implementation and real datasets, shows that the throughput of EDL-Dist is up to 181% higher than that of the baseline method (online knowledge distillation).
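To make the teacher-student setup described above concrete, the following is a minimal NumPy sketch of the classic soft-target distillation loss, not the paper's actual implementation. The function and parameter names (`distillation_loss`, temperature `T`, weight `alpha`) are illustrative assumptions; in EDL-Dist the teacher logits would arrive from a separate inference service rather than being computed inside the training process.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax along the class axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of (i) cross-entropy between the student's and the teacher's
    temperature-softened distributions and (ii) standard cross-entropy against
    the ground-truth labels."""
    p_teacher = softmax(teacher_logits, T)                       # soft targets
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    soft_loss = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * (T * T)

    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard_loss = -log_p_student[np.arange(len(labels)), labels].mean()

    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy usage with random logits; in a decoupled setting the teacher logits
# would be fetched from a remote teacher-serving endpoint.
rng = np.random.default_rng(0)
student_logits = rng.normal(size=(8, 10))
teacher_logits = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
print(distillation_loss(student_logits, teacher_logits, labels))
```

Because the loss only needs the teacher's logits, not the teacher's weights or gradients, the teacher's forward pass can run on separate, possibly heterogeneous hardware, which is the decoupling the framework exploits.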
D. Dong and J. Liu—Equal contribution.
Notes
2. Paddle Fleet: https://github.com/PaddlePaddle/FleetX.
4. Redis: https://redis.io/.
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Dong, D. et al. (2022). Elastic Deep Learning Using Knowledge Distillation with Heterogeneous Computing Resources. In: Chaves, R., et al. Euro-Par 2021: Parallel Processing Workshops. Euro-Par 2021. Lecture Notes in Computer Science, vol 13098. Springer, Cham. https://doi.org/10.1007/978-3-031-06156-1_10
DOI: https://doi.org/10.1007/978-3-031-06156-1_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06155-4
Online ISBN: 978-3-031-06156-1