This special issue collects extended versions of four of the papers that received the best scores during the review process of the 3rd IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS 2022). In this introduction, we frame the papers in the general ACSOS context.
The IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS) is the merger of two well-established IEEE conferences: the International Conference on Autonomic Computing (ICAC) and the International Conference on Self-Adaptive and Self-Organizing Systems (SASO). It continues the tradition of both conferences of bridging conceptual and applied research in the areas of Autonomic/Organic Computing and self-adaptive and self-organizing systems, which is reflected in the main technical program. Papers on the fundamentals of self-organization and self-adaptation are complemented by those on the topics of open systems, learning and modeling, scalability, and user-centered computing, as well as applications in cloud computing, cyber-physical systems, and resource management.
The ACSOS 2022 call for research papers, vision papers, and experience reports attracted 55 abstracts, resulting in 43 paper submissions from around the globe. The program committee for the main track accepted 14 full papers, 1 vision paper, and 8 short papers. The acceptance rate of 25% is comparable to that of previous editions of SASO and ICAC, demonstrating that ACSOS has successfully continued their tradition as a premier venue for the dissemination of research in this area.
For this special issue, we invited the authors of the 10 full papers that received only positive feedback during the ACSOS review process to submit substantial extensions of their work. Six submissions were received. Each went through two to three rounds of review, resulting in the four papers presented here.
As in the presentations and discussions of the latest ACSOS editions, this special issue places a strong emphasis on Artificial Intelligence (AI), which, from the viewpoint of adaptive systems, can play two different roles. On one side, AI-based components can provide the intelligence to make runtime decisions in some of the steps of the adaptation loop of software systems, thus acting as controllers of such systems. On the other side, AI-based systems themselves need to be managed in a control loop, especially when they are used in critical domains such as autonomous driving. The four papers in this special issue divide evenly between these two categories, as we elaborate in the following.
The use of AI and Machine Learning (ML) techniques and models for supporting different aspects of self-adaptive and self-organizing systems is on the rise. A particular benefit of such data-driven techniques is their ability to leverage runtime data (logs, sensors, alerts, feedback) in decision making, a powerful tool against design-time uncertainty. Two papers in our special issue stand as prime examples of how ML techniques can be embedded in the adaptation loop.
The paper titled “A User Study on Explainable Online Reinforcement Learning for Adaptive Systems,” by Metzger et al., starts from the observation that Reinforcement Learning (RL) is an ML approach increasingly used in self-adaptive systems to learn, on the fly, optimal adaptation actions or, in RL parlance, policies. In Deep RL algorithms, the learned knowledge is represented as an artificial neural network, a powerful model that scales and generalizes well but is essentially a black box, hence difficult to understand and maintain. To enhance the explainability of Deep RL for self-adaptive systems, in their original ACSOS 2022 paper the authors proposed XRL-DINE, an explainable RL technique. This special issue extension focuses on evaluating the applicability and potential usefulness of XRL-DINE through an empirical user study. In the study, two groups of participants, with and without access to XRL-DINE, answer questions about the runtime decisions made by Deep RL applied to the Simulator for Web Infrastructure and Management (SWIM), a popular self-adaptation exemplar. The results indicate that participants find XRL-DINE both useful and usable for understanding the decision making of Deep RL. Increased understanding, in turn, increases users' trust, which is necessary for the operation of the next generation of self-adaptive and self-organizing systems.
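To give a flavor of what explainable RL can mean in an adaptation loop, the sketch below illustrates reward decomposition, one common ingredient of explainable RL: the agent keeps a separate value estimate per reward concern, so that each decision can be traced back to the concern that dominated it. The adaptation actions and reward channels here are hypothetical stand-ins for a server-scaling scenario; this is a minimal illustration of the general idea, not the authors' XRL-DINE implementation.

```python
# Minimal sketch of reward decomposition for explainable RL.
# Hypothetical names and scenario; not the XRL-DINE implementation.
import random
from collections import defaultdict

ACTIONS = ["add_server", "remove_server", "no_op"]   # hypothetical adaptation actions
REWARD_CHANNELS = ["performance", "cost"]            # decomposed reward concerns

# One Q-table per reward channel: Q[channel][(state, action)] -> value
Q = {c: defaultdict(float) for c in REWARD_CHANNELS}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def total_q(state, action):
    return sum(Q[c][(state, action)] for c in REWARD_CHANNELS)

def select_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)                # occasional exploration
    return max(ACTIONS, key=lambda a: total_q(state, a))

def update(state, action, rewards, next_state):
    # rewards: dict mapping each channel to its sub-reward for this step
    for c in REWARD_CHANNELS:
        best_next = max(Q[c][(next_state, a)] for a in ACTIONS)
        td_error = rewards[c] + GAMMA * best_next - Q[c][(state, action)]
        Q[c][(state, action)] += ALPHA * td_error

def explain(state, action):
    # Per-channel Q-values reveal which concern drove the decision.
    return {c: Q[c][(state, action)] for c in REWARD_CHANNELS}
```

An operator inspecting `explain(state, action)` can see, for example, that a server was added because the performance channel outweighed the cost channel, which is precisely the kind of insight the user study probes.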
The second paper in this category focuses on self-protecting, a self-* property that, like self-healing, self-optimizing, and self-configuring, indicates self-adaptive behavior. In their paper titled “Self-Supervised Machine Learning Framework for Online Container Security Attack Detection,” Tunde-Onadele et al. contribute a generic self-supervised hybrid learning (SHIL) framework for detecting security attacks in containerized applications. As containers become ever more widely used in industry, their security grows in importance. Online approaches for detection (the focus of this paper) and, eventually, adaptation in the form of counteractions are necessary for intrusion detection in container environments due to the inherent dynamicity of such environments, which quickly renders static detection rules outdated. With SHIL, the authors combine unsupervised approaches, which suffer from high false-alarm rates, with supervised approaches, which require too many labels, aiming to strike a balance between the two. SHIL is evaluated against several alternative approaches in a comprehensive experimental setup comprising real-world security attacks.
AI-based systems are nowadays an essential part of many critical systems. As industrial interest in these systems becomes more and more prominent, the need to keep them under control and to adapt them when needed is of paramount importance. In this context, self-adaptation mechanisms and their instantiation in the AI/ML context are very interesting tools. Two of the special issue papers focus on this particular aspect.
The paper titled “Anunnaki: A Modular Framework for Developing Trusted Artificial Intelligence,” by Langford et al., starts from the recognition that AI systems are increasingly used to perform safety-critical tasks. As such, proper mechanisms are needed to guarantee their trustworthiness and, in particular, their resilience and robustness. In this context, Anunnaki is a model-driven framework that controls Learning-Enabled Components (LECs) and ensures their trustworthiness even when different sources of uncertainty are present. Anunnaki is designed as a composable system and includes services focusing both on making the learning process more robust and on monitoring and controlling the runtime execution of LECs. The framework is evaluated on two case studies focusing, respectively, on autonomous rovers and on unmanned aerial vehicles. The evaluation assesses the ability of Anunnaki to work properly in different application domains and the effectiveness of the modular approach.
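Monitoring and overriding an LEC at runtime typically follows a runtime-assurance pattern, sketched below: a monitor checks each LEC output against a safety predicate and substitutes a conservative fallback action on violation. All names, the rover scenario, and the speed bound are hypothetical; this illustrates the general pattern, not Anunnaki's actual interfaces.

```python
# Generic runtime-assurance sketch: a monitor wraps a learning-enabled
# component (LEC) and falls back to a conservative controller whenever the
# LEC's output violates a safety predicate. Hypothetical names; a common
# pattern, not Anunnaki's actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonitoredLEC:
    lec: Callable[[dict], dict]            # learned controller
    fallback: Callable[[dict], dict]       # conservative, verified controller
    is_safe: Callable[[dict, dict], bool]  # safety predicate over (state, action)

    def act(self, state: dict) -> dict:
        action = self.lec(state)
        if self.is_safe(state, action):
            return action
        return self.fallback(state)        # override the unsafe LEC output

# Toy example: a rover speed controller that must never exceed a speed bound.
SPEED_LIMIT = 2.0  # m/s, assumed bound

controller = MonitoredLEC(
    lec=lambda s: {"speed": s["suggested_speed"]},
    fallback=lambda s: {"speed": 0.5},
    is_safe=lambda s, a: a["speed"] <= SPEED_LIMIT,
)
print(controller.act({"suggested_speed": 3.5}))  # -> {'speed': 0.5}
```

Because the monitor and fallback are independent of how the LEC was trained, they can be composed with any learned component, which is the spirit of Anunnaki's modular design.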
A more prominent emphasis on ML models, and on optimizing the utility of the systems such models are immersed in, is given in the paper titled “Self-Adapting Machine Learning-based Systems via a Probabilistic Model Checking Framework,” by Casimiro et al. Here, the main idea is to define a probabilistic model-checking-based approach for synthesizing optimal adaptation strategies. Modeling ML components allows model checkers to reason about the impact of mispredictions on system utility. Mispredictions, in fact, are a common problem for ML systems: information acquired from a continuously changing external environment leads to so-called out-of-distribution samples that ML components are unable to treat properly. To prevent mispredictions, ML models can be retrained, but it is important to predict the overall benefits that such retraining would offer. In the proposed approach, the authors distinguish between assessing the impact of an adaptation tactic on an ML model and estimating the impact of predictions and mispredictions on the whole system's utility. While the first problem can be tackled with a black-box predictor based on historical data from previous retrainings, the second depends on the overall structure of the ML-based system and can be addressed with model checking. The proposed approach is instantiated in the context of a fraud detection system.
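As a back-of-the-envelope illustration of the trade-off being optimized, the snippet below compares the expected utility of retraining versus doing nothing, given a predicted accuracy gain, a retraining cost, and asymmetric utilities for correct and incorrect predictions. All numbers are invented; the paper encodes this reasoning in a probabilistic model checker rather than in a closed-form formula.

```python
# Toy version of the retrain-or-not decision: weigh the predicted accuracy
# gain of retraining against its cost, given how much mispredictions hurt
# system utility. Hypothetical numbers; not the paper's formal models.

def expected_utility(accuracy: float, u_correct: float, u_misprediction: float,
                     horizon: int, one_off_cost: float = 0.0) -> float:
    """Expected utility over `horizon` predictions, minus any tactic cost."""
    per_step = accuracy * u_correct + (1 - accuracy) * u_misprediction
    return horizon * per_step - one_off_cost

current_acc = 0.80           # accuracy degraded by out-of-distribution samples
predicted_acc = 0.92         # black-box estimate of post-retraining accuracy
U_OK, U_MISS = 1.0, -5.0     # mispredictions (e.g., missed fraud) are costly
HORIZON, RETRAIN_COST = 1000, 150.0

keep = expected_utility(current_acc, U_OK, U_MISS, HORIZON)
retrain = expected_utility(predicted_acc, U_OK, U_MISS, HORIZON, RETRAIN_COST)
print(f"no-op: {keep:.0f}, retrain: {retrain:.0f}")  # here retraining wins
```

A model checker generalizes this arithmetic to sequences of tactics and probabilistic environment behavior, which is what makes it possible to synthesize whole adaptation strategies rather than evaluate one decision at a time.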
We thank all ACSOS 2022 authors and reviewers who have been instrumental in the definition of the conference program and, in particular, those who have participated in the development of this special issue.
Politecnico di Milano, Milano, Italy
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Topcy House Consulting, Thousand Oaks, CA, USA