DOI: 10.1145/3578356
EuroMLSys '23: Proceedings of the 3rd Workshop on Machine Learning and Systems
ACM 2023 Proceeding
Publisher: Association for Computing Machinery, New York, NY, United States
Conference:
EuroMLSys '23: 3rd Workshop on Machine Learning and Systems, Rome, Italy, 8 May 2023
ISBN:
979-8-4007-0084-2
Published:
08 May 2023
research-article
Open Access
Actionable Data Insights for Machine Learning

Artificial Intelligence (AI) and Machine Learning (ML) have made tremendous progress over the past decade and have become ubiquitous in almost all application domains. Many recent advancements in the ease-of-use of ML frameworks and the low-code model ...

research-article
Open Access
Towards A Platform and Benchmark Suite for Model Training on Dynamic Datasets

Machine learning (ML) is often applied in use cases where training data evolves and/or grows over time. Training must incorporate data changes to maintain high model quality; however, this is often challenging and expensive due to large datasets and models. In ...

research-article
Profiling and Monitoring Deep Learning Training Tasks

The embarrassingly parallel nature of deep learning training tasks makes CPU-GPU co-processors the primary commodity hardware for them. The computing and memory requirements of these tasks, however, do not always align well with the available GPU ...

research-article
Open Access
MCTS-GEB: Monte Carlo Tree Search is a Good E-graph Builder

Rewrite systems [11, 16, 18] have widely adopted equality saturation [15], an optimisation methodology that uses a saturated e-graph to represent all possible sequences of rewrites simultaneously, and then extracts the optimal one. As ...
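
Equality saturation is easiest to see at toy scale. The sketch below grows an explicit set of equivalent expressions under two illustrative rewrite rules until no rule adds anything (saturation), then extracts the cheapest term; the rules, cost table, and brute-force set representation are assumptions for illustration, since a real builder such as MCTS-GEB operates on a compact e-graph rather than an explicit set of terms.

```python
# Toy equality saturation (illustrative; a real builder such as MCTS-GEB
# works on a compact e-graph rather than an explicit set of terms).
# Expressions are nested tuples, e.g. ('*', ('+', 'x', 'x'), 2).

def rewrites(expr):
    """Yield expressions equal to `expr` under two toy rules."""
    if not isinstance(expr, tuple):
        return
    op, *args = expr
    if op == '*' and args[1] == 2:        # x * 2  ->  x << 1
        yield ('<<', args[0], 1)
    if op == '+' and args[0] == args[1]:  # x + x  ->  x * 2
        yield ('*', args[0], 2)
    for i, a in enumerate(args):          # rewrite inside subterms too
        for r in rewrites(a):
            new_args = list(args)
            new_args[i] = r
            yield (op, *new_args)

def saturate(expr, max_iters=10):
    """Grow the set of equivalent terms until no rule adds a new one."""
    seen, frontier = {expr}, {expr}
    for _ in range(max_iters):
        new = {r for e in frontier for r in rewrites(e)} - seen
        if not new:  # saturated
            break
        seen |= new
        frontier = new
    return seen

COST = {'+': 2, '<<': 1, '*': 4}  # assumed per-operator costs
def cost(e):
    return COST[e[0]] + sum(cost(a) for a in e[1:]) if isinstance(e, tuple) else 0

eqs = saturate(('*', ('+', 'x', 'x'), 2))
print(min(eqs, key=cost))  # ('<<', ('<<', 'x', 1), 1)
```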

research-article
Decentralized Learning Made Easy with DecentralizePy

Decentralized learning (DL) has gained prominence for its potential benefits in terms of scalability, privacy, and fault tolerance. It consists of many nodes that coordinate without a central server and exchange millions of parameters in the ...
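
As a minimal sketch of the serverless coordination described above (illustrative only, not DecentralizePy's API), each node below alternates a local update with gossip averaging over a ring topology:

```python
# Minimal decentralized learning loop (illustrative sketch, not
# DecentralizePy's API): nodes on a ring alternate a local update with
# gossip averaging of parameters with their two neighbours.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 8
params = [rng.normal(size=dim) for _ in range(n_nodes)]  # one model per node

# Ring topology: node i only ever talks to i-1 and i+1 (no central server).
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

def local_step(w):
    """Placeholder for a local SGD step on the node's private data."""
    return w - 0.1 * w  # pretend gradient of 0.5 * ||w||^2

for _ in range(20):
    params = [local_step(w) for w in params]
    # Gossip averaging: each node mixes its model with its neighbours'.
    params = [np.mean([params[i]] + [params[j] for j in neighbours[i]], axis=0)
              for i in range(n_nodes)]

# Node models agree (tiny spread) without any parameter server.
print(np.std([np.linalg.norm(w) for w in params]))
```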

research-article
Towards Practical Few-shot Federated NLP

Transformer-based pre-trained models have emerged as the predominant solution for natural language processing (NLP). Fine-tuning such pre-trained models for downstream tasks often requires a considerable amount of labeled private data. In practice, ...

research-article
Towards Robust and Bias-free Federated Learning

Federated learning (FL) is an exciting machine learning approach where multiple devices collaboratively train a model without sharing their raw data. The FL system is vulnerable to Byzantine clients that send arbitrary model updates, and ...
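
One classic defence against such Byzantine updates is robust aggregation; the sketch below uses coordinate-wise median, a standard choice in the literature and an assumption here, not necessarily this paper's defence:

```python
# Coordinate-wise median aggregation, a classic Byzantine-robust rule.
# Illustrative assumption: the paper may study a different defence.
import numpy as np

def robust_aggregate(updates):
    """updates: list of equally shaped 1-D client model updates."""
    return np.median(np.stack(updates), axis=0)

honest = [np.ones(4) + 0.01 * i for i in range(8)]
byzantine = [np.full(4, 1e6)] * 2                 # arbitrary malicious updates
print(robust_aggregate(honest + byzantine))       # stays near 1.0
# A plain mean would be dragged to ~2e5 by the two malicious clients.
```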

research-article
Open Access
Gradient-less Federated Gradient Boosting Tree with Learnable Learning Rates

The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the need to train XGBoost in the context of federated learning (FL). Existing works on federated XGBoost in the ...
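
A hedged sketch of the "gradient-less" idea: clients fit local tree ensembles on their own data, and the server learns only aggregation weights (a stand-in for learnable learning rates) on a shared validation set. The sklearn ensembles, synthetic data, and least-squares weight fit are illustrative assumptions, not the paper's exact pipeline:

```python
# Gradient-less federated boosting, sketched with assumed components:
# sklearn ensembles as the local XGBoost stand-in, and a least-squares fit
# of per-client weights as the "learnable learning rates". Not the paper's
# exact pipeline; no gradients or raw data leave the clients.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def make_data(n):  # synthetic private datasets
    X = rng.normal(size=(n, 5))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=n)
    return X, y

# Each client trains a tree ensemble locally.
clients = [GradientBoostingRegressor(n_estimators=30).fit(*make_data(200))
           for _ in range(3)]

# Server side: learn aggregation weights on a small shared validation set.
X_val, y_val = make_data(100)
P = np.stack([c.predict(X_val) for c in clients], axis=1)  # (n, n_clients)
w, *_ = np.linalg.lstsq(P, y_val, rcond=None)              # learned weights

X_test, y_test = make_data(100)
pred = np.stack([c.predict(X_test) for c in clients], axis=1) @ w
print("test MSE:", np.mean((pred - y_test) ** 2))
```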

research-article
Distributed Training for Speech Recognition using Local Knowledge Aggregation and Knowledge Distillation in Heterogeneous Systems

Data privacy and data protection are crucial issues for automatic speech recognition (ASR) systems that rely on client-generated data for training. The best protection is achieved when training is performed in a distributed fashion, close to the clients' local data, ...

research-article
FoldFormer: sequence folding and seasonal attention for fine-grained long-term FaaS forecasting

Fine-grained long-term (FGLT) time series forecasting is a fundamental challenge in Function as a Service (FaaS) platforms. The data that FaaS function requests produce are fine-grained (per-second/minute), often have daily periodicity, and are ...
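
The "sequence folding" in the title can be illustrated by reshaping a fine-grained series so its daily period becomes a separate axis; the folding below is a plausible reading of the idea, not FoldFormer's exact operator:

```python
# One plausible reading of "sequence folding" (illustrative, not FoldFormer's
# exact operator): reshape a fine-grained series so the daily period becomes
# its own axis, aligning the same time-of-day across days.
import numpy as np

steps_per_day = 24 * 60             # per-minute observations
t = np.arange(14 * steps_per_day)   # two weeks of data
series = np.sin(2 * np.pi * t / steps_per_day) + 0.05 * np.random.randn(t.size)

folded = series.reshape(-1, steps_per_day)  # shape: (days, time-of-day)
print(folded.shape)                         # (14, 1440)
# Column j now holds every day's value at minute j, so a model can attend
# to "same minute, previous days" directly when forecasting.
```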

research-article
Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems

The use of machine learning (ML) inference for various applications is growing rapidly. ML inference services engage with users directly, requiring fast and accurate responses. Moreover, these services face dynamic workloads of requests, imposing ...

research-article
Robust and Tiny Binary Neural Networks using Gradient-based Explainability Methods

Binary neural networks (BNNs) are a highly resource-efficient variant of neural networks. The efficiency of BNNs for tiny machine learning (TinyML) systems can be enhanced by structured pruning and by making BNNs robust to faults. When used with ...
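
A minimal sketch of gradient-based, explainability-style structured pruning follows; the |weight x gradient| saliency score is an assumed stand-in for the paper's attribution method, and BNN binarization is omitted:

```python
# Gradient-based saliency pruning, sketched in PyTorch. The |w * dL/dw|
# score is an assumed explainability-style attribution, and BNN binarization
# is omitted; the channel-ranking idea is what carries over.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 16, 3, padding=1)
x = torch.randn(8, 3, 32, 32)
loss = conv(x).pow(2).mean()  # stand-in for the task loss
loss.backward()

# Per-output-channel saliency: sum |w * grad| over each channel's weights.
saliency = (conv.weight * conv.weight.grad).abs().sum(dim=(1, 2, 3))
keep = saliency.argsort(descending=True)[:8]  # structured: keep 8 of 16
print("kept channels:", sorted(keep.tolist()))
```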

research-article
Open Access
Illuminating the hidden challenges of data-driven CDNs

While data-driven CDNs have the potential to provide unparalleled performance and availability improvements, they open up an intricate and exciting tapestry of previously unaddressed problems. This paper highlights these problems, explores existing ...

research-article
Best of both, Structured and Unstructured Sparsity in Neural Networks

Besides quantization, pruning has been shown to be one of the most effective methods to reduce the inference time and required energy of Deep Neural Networks (DNNs). In this work, we propose a sparsity definition that reflects the number of saved ...
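
The contrast between the two sparsity styles is easy to demonstrate; the 50% ratios and row-level granularity below are arbitrary illustrative choices:

```python
# Unstructured vs. structured sparsity on one weight matrix; the 50% ratios
# and row granularity are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))

# Unstructured: zero the 50% smallest-magnitude individual weights.
thresh = np.quantile(np.abs(W), 0.5)
W_unstruct = np.where(np.abs(W) >= thresh, W, 0.0)

# Structured: zero whole rows (e.g. neurons) with the smallest L1 norms.
drop = np.abs(W).sum(axis=1).argsort()[: W.shape[0] // 2]
W_struct = W.copy()
W_struct[drop] = 0.0

# Same zero count, different trade-off: structured sparsity maps to real
# speedups on dense hardware, unstructured usually preserves accuracy better.
print((W_unstruct == 0).mean(), (W_struct == 0).mean())  # 0.5 0.5
```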

research-article
TSMix: time series data augmentation by mixing sources

Data augmentation for time series is challenging because of the complex multi-scale relationships spanning ordered continuous sequences: one cannot easily alter a single datum and expect these relationships to be preserved. Time series data are not ...
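
A mixup-style blend of two whole series is one simple way to mix sources while preserving temporal structure; the Beta-weighted convex combination below is an assumption, not necessarily TSMix's exact formulation:

```python
# Mixup-style source mixing for time series; the Beta-weighted convex
# combination is an assumption, not necessarily TSMix's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def ts_mix(a, b, alpha=0.4):
    """Blend two equal-length series with a Beta(alpha, alpha) weight."""
    lam = rng.beta(alpha, alpha)
    return lam * a + (1 - lam) * b

t = np.linspace(0, 4 * np.pi, 200)
src1 = np.sin(t)                   # one source
src2 = np.sin(2 * t) + 0.1 * t     # another source
augmented = ts_mix(src1, src2)     # a new synthetic series, shape (200,)
print(augmented[:5])
```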

research-article
Toward Pattern-based Model Selection for Cloud Resource Forecasting

Cloud resource management solutions, such as autoscaling and overcommitment policies, often leverage robust prediction models to forecast future resource utilization at the task-, job- and machine-level. Such solutions maintain a collection of ...
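
A minimal sketch of pattern-based selection: keep a small pool of forecasters suited to different usage patterns and pick, per series, the one with the lowest backtest error. The pool, metric, and horizon are illustrative assumptions:

```python
# Per-pattern model selection sketch: backtest a small pool of forecasters
# on each series and keep the best. Pool, metric, and horizon are assumed.
import numpy as np

def naive_last(history, horizon):                  # flat usage pattern
    return np.repeat(history[-1], horizon)

def seasonal_naive(history, horizon, period=24):   # periodic pattern
    return np.tile(history[-period:], horizon // period + 1)[:horizon]

def drift(history, horizon):                       # trending pattern
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * np.arange(1, horizon + 1)

POOL = [naive_last, seasonal_naive, drift]

def select_model(history, horizon=24):
    """Backtest each candidate on the tail of the history, pick the best."""
    train, holdout = history[:-horizon], history[-horizon:]
    errs = [np.mean((m(train, horizon) - holdout) ** 2) for m in POOL]
    return POOL[int(np.argmin(errs))]

series = np.sin(2 * np.pi * np.arange(24 * 14) / 24)  # daily-periodic usage
print(select_model(series).__name__)                   # seasonal_naive
```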

research-article
Open Access
A First Look at the Impact of Distillation Hyper-Parameters in Federated Knowledge Distillation

Knowledge distillation is well known as a useful technique for model compression. It has recently been adopted in the distributed training domain, such as federated learning, as a way to transfer knowledge between already pre-trained models. Knowledge ...
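
For reference, the standard distillation loss and its two textbook hyper-parameters, temperature T and mixing weight alpha, look as follows; this is the classic formulation, shown here without any federated machinery:

```python
# Standard knowledge-distillation loss with its two main hyper-parameters,
# temperature T and mixing weight alpha (textbook formulation).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * soft (teacher) loss + (1 - alpha) * hard (label) loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps gradient magnitudes comparable across T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(16, 10, requires_grad=True)   # student logits
t = torch.randn(16, 10)                       # teacher logits
y = torch.randint(0, 10, (16,))               # ground-truth labels
print(kd_loss(s, t, y).item())
```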

research-article
Open Access
Can Fair Federated Learning Reduce the need for Personalisation?

Federated Learning (FL) enables training ML models on edge clients without sharing data. However, the federated model's performance on local data varies, disincentivising the participation of clients who benefit little from FL. Fair FL reduces ...

research-article
Open Access
Causal fault localisation in dataflow systems

Dataflow computing has been shown to bring significant benefits to multiple niches of systems engineering and has the potential to become a general-purpose paradigm of choice for data-driven application development. One of the characteristic features of ...

research-article
TinyMLOps for real-time ultra-low power MCUs applied to frame-based event classification

TinyML applications such as speech recognition, motion detection, or anomaly detection are attracting interest from many industries and researchers thanks to their innovative and cost-effective potential. Since tinyMLOps is at an even earlier stage than MLOps, the ...

research-article
Scalable High-Performance Architecture for Evolving Recommender System

Recommender systems are expected to scale to serve a large number of recommendations to customers while keeping recommendation latency within a stringent limit. Such requirements make architecting a recommender system a ...

research-article
Accelerating Model Training: Performance Antipatterns Eliminator Framework

In the realm of ML/DL training pipelines, the training-specific data preparation of complex models may consume up to 87% of the total training time. A data scientist may build training pipelines using Python data structures on GPU while being unaware ...
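
One antipattern such a framework could plausibly flag is per-element Python looping where a vectorized operation exists; the example below is illustrative, not taken from the paper:

```python
# Illustrative data-preparation antipattern: a per-element Python loop over a
# large array versus a single vectorized operation (not from the paper).
import time
import numpy as np

x = np.random.rand(1_000_000)

t0 = time.perf_counter()
out_loop = np.array([v * 2.0 + 1.0 for v in x])  # antipattern: Python loop
t1 = time.perf_counter()
out_vec = x * 2.0 + 1.0                          # vectorized equivalent
t2 = time.perf_counter()

assert np.allclose(out_loop, out_vec)
print(f"loop: {t1 - t0:.3f}s, vectorized: {t2 - t1:.4f}s")
```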

Contributors
  • University of Cambridge
  • Lund University


Acceptance Rates

Overall Acceptance Rate: 18 of 26 submissions, 69%

Year            Submitted   Accepted   Rate
EuroMLSys '21   26          18         69%
Overall         26          18         69%