Abstract:
Today, the adoption of Machine Learning (ML) techniques is widespread, encountered in almost every aspect of our everyday lives. The plethora of IoT devices and the enormous amounts of data being generated have led to the evolution of existing learning algorithms and the development of new ones that aim to leverage data for accurate inference and automated decision-making. In parallel, with the computing continuum paradigm, where heterogeneous and distributed resources stretch all the way from the Cloud to Edge and IoT devices, it is imperative that ML applications reap the performance and efficiency benefits of heterogeneity. However, ML application developers and data scientists are burdened with manually deploying their applications, often in a sub-optimal manner. In this paper, we design and implement an integrated MLOps framework that first enables developers to decompose an ML workflow into its functional steps, which correspond to distinct stages of the development and execution of an ML model. Our scheduler can then efficiently place these individual components by considering their specific requirements: computing capacity during the training stage, and low network latency during data ingestion and model serving. The proposed MLOps framework is evaluated through a proof-of-concept experiment conducted in a realistic testbed environment. Results show significant performance benefits compared with scheduling the whole ML workflow as a single unit.
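The per-stage placement idea described in the abstract can be illustrated with a minimal sketch. All names, data structures, and the scoring heuristic below are assumptions for illustration, not the paper's actual scheduler: compute-bound stages (training) are scored toward high-capacity nodes, while latency-sensitive stages (data ingestion, model serving) are scored toward low-latency edge nodes.

```python
# Hypothetical sketch of per-stage scheduling across the computing continuum.
# Node/Stage fields and the scoring function are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_cores: int       # available computing capacity
    latency_ms: float    # network latency to data sources / clients

@dataclass
class Stage:
    name: str
    needs_compute: bool      # e.g. training favors capacity
    needs_low_latency: bool  # e.g. ingestion and serving favor proximity

def place(stage: Stage, nodes: list[Node]) -> Node:
    # Score each node against the stage's requirements and pick the best.
    def score(n: Node) -> float:
        s = 0.0
        if stage.needs_compute:
            s += n.cpu_cores      # reward capacity for compute-bound stages
        if stage.needs_low_latency:
            s -= n.latency_ms     # penalize distance for latency-bound stages
        return s
    return max(nodes, key=score)

nodes = [Node("edge-1", cpu_cores=4, latency_ms=5),
         Node("cloud-1", cpu_cores=64, latency_ms=40)]
workflow = [Stage("ingest", needs_compute=False, needs_low_latency=True),
            Stage("train", needs_compute=True, needs_low_latency=False),
            Stage("serve", needs_compute=False, needs_low_latency=True)]

placement = {st.name: place(st, nodes).name for st in workflow}
# Ingestion and serving land on the low-latency edge node; training on the cloud node.
print(placement)
```

Scheduling each functional step independently, as above, is what lets the framework exploit heterogeneity: a monolithic deployment of the whole workflow would force a single node to satisfy all requirements at once.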
Date of Conference: 28-30 November 2022
Date Added to IEEE Xplore: 27 February 2023