Research Article
Modeling and evaluating a complex edge computing based system: An emergency management support system case study
Introduction
Cloud infrastructures introduced a significant paradigm shift in computing architectures, as they offer access to large amounts of computing power and storage space with a pay-per-use, on-demand approach. However, although autonomous access to computational resources allows unprecedented flexibility (one can get what is needed when it is needed), the cloud-based approach suffers from an inherent limitation: a heavy dependency on the network. One can obtain as many resources as needed, thanks to the strongly centralized architecture and its scale, but centralization implies remoteness. This makes it difficult to turn the on-demand access to resources into on-demand performance. While for batch-like workloads or common user-oriented interactive applications the access to (potentially unbounded) pay-per-use, on-demand resources may allow performance to scale up in parallel, mission-critical applications, which require both computing power and very short communication turnaround times (or high data throughput, connection continuity, or data security), can hardly exploit this opportunity. For this latter class of applications, traditional local computing architectures outperform cloud-based architectures in terms of design, development and maintenance costs. Having local, always available, high performance computing resources ensures controllable performance levels; yet it requires that all functionalities be executed locally, and that the system be able to handle the (usually infrequent) peaks of workload. On the other hand, local resources may be unaffordable, as their purchase and maintenance costs are much higher than those of cloud-based solutions.
To prevent the Total Cost of Ownership (TCO) from becoming a substantial limitation, a compromise is offered by the edge and fog computing paradigms, which overcome the limitations of cloud-based architectures by adding resources (such as fixed or mobile computing nodes, local servers, or even small on-site data centers) at the users' edge. The advantage of such approaches is the possibility to execute in the cloud the non-critical workloads, and at the edge those tasks that may be negatively influenced by network performance or that have strict availability and continuity constraints. Basically, the idea behind the edge and fog computing paradigms is to pre-process large amounts of data locally before sending them to the cloud, with the ultimate goal of lowering bandwidth requirements. This is the typical case of emergency situations, where local decision makers need access to locally collected and rapidly processed data to face the emergency, while remote decision makers need to be kept up to date on the evolution of the accident so as to create the conditions for the local (first) responders to act efficiently (e.g., by sending them further resources, or by preventing traffic from moving towards the emergency area). In this situation, the sensors locally produce large amounts of data (e.g., collected through drones, wearable sensors, or network sensors). These data need to be rapidly and locally processed to provide results that are useful to decide how to respond to the dynamic evolution of the scenario that is taking place. For instance, edge resources can process locally collected data to compute the dynamic evolution of a toxic cloud in relation to the weather conditions.
On the other hand, remote decision makers need access to the same results with much lower urgency, as their decision timings are much longer than those of local decision makers (e.g., on the order of hours instead of minutes). In this context, the edge computing paradigm perfectly suits the purpose, as data can be collected and processed locally (for local decision makers) and then shared remotely via a cloud infrastructure (for remote decision makers). This scheme is optimal because, in the specific example of the evolution of a toxic cloud, the data to be sent remotely to update the shape of the iso-concentration curves displayed to remote decision makers are much smaller than the data needed to compute those curves, so network-related constraints are not an impediment. From the point of view of complexity, depending on the computational needs, the collected data volumes and the non-functional requirements, the edge subsystem may itself be a heterogeneous distributed system, possibly partially implemented with smart and dumb mobile nodes.
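The bandwidth saving described above can be sketched with a toy aggregation step. This is a minimal illustration, not the paper's actual pipeline: sensor IDs, sampling rates and the mean/max summary are all hypothetical, chosen only to show how an edge node can reduce raw sensor samples to a small uplink payload before contacting the cloud.

```python
import json
import random

def collect_raw_readings(n_sensors: int, samples_per_sensor: int) -> list:
    """Simulate raw concentration samples gathered at the edge (hypothetical data)."""
    return [
        {"sensor_id": s, "sample": i, "ppm": random.uniform(0.0, 50.0)}
        for s in range(n_sensors)
        for i in range(samples_per_sensor)
    ]

def summarize_at_edge(readings: list) -> list:
    """Reduce raw samples to one aggregate per sensor; only this summary
    (enough to redraw coarse iso-concentration curves remotely) is uplinked."""
    per_sensor = {}
    for r in readings:
        per_sensor.setdefault(r["sensor_id"], []).append(r["ppm"])
    return [
        {"sensor_id": s, "mean_ppm": sum(v) / len(v), "max_ppm": max(v)}
        for s, v in sorted(per_sensor.items())
    ]

raw = collect_raw_readings(n_sensors=20, samples_per_sensor=500)
summary = summarize_at_edge(raw)

raw_bytes = len(json.dumps(raw))
summary_bytes = len(json.dumps(summary))
print(f"raw payload: {raw_bytes} B, uplink payload: {summary_bytes} B")
```

The ratio between the two payload sizes grows with the local sampling rate, which is precisely why the network link stops being the bottleneck for the remote decision makers' view.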
In this context, there is a need for the definition of relevant system-level and subsystem-level performance metrics, and for proper models for design and performance prediction. As edge subsystems may be complex and a system may include many of them, performance metrics should meaningfully describe the actual service level of the system and its subsystems, and must encompass a number of different needs that depend on the application. System scalability is also a relevant metric and must consider the nature of the application, in order to match the ways in which the system may be scaled up beyond the bare workload. In this paper, we propose a performance evaluation approach to support the design of a support system for complex emergencies. The system consists of a high-performance, highly available critical edge frontend, including a local server, mobile nodes, sensor networks and real-time augmented reality (AR) devices, and a high-workload cloud backend, which continuously interact with each other to implement an emergency management information system. We are interested in defining and evaluating performance metrics and related design and evaluation models for each component of each subsystem and for the system as a whole. The system architecture implemented in this work is borrowed from a recent study conducted by Italian fire fighters [1], which proposed an IoT module to manage devices in complex emergency scenarios. We analyzed its behavior by using multiclass queuing models, and obtained results that are not trivial, notwithstanding the ordinary modeling approach, and that justify the need for performance prediction models.
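To give a flavor of the multiclass queuing analysis mentioned above, the sketch below evaluates a single multiclass station. It is not the paper's model: the processor-sharing discipline, the two job classes and all rates are illustrative assumptions, using the standard closed-form result for an open M/G/1-PS station, where each class c sees mean response time R_c = (1/mu_c) / (1 - rho) with rho = sum over classes of lambda_c / mu_c.

```python
def multiclass_ps_response_times(arrival_rates, service_rates):
    """Per-class mean response time of one M/G/1 processor-sharing station
    with multiple open job classes: R_c = (1/mu_c) / (1 - rho)."""
    rho = sum(l / m for l, m in zip(arrival_rates, service_rates))
    if rho >= 1.0:
        raise ValueError(f"unstable station: utilization {rho:.2f} >= 1")
    return [(1.0 / m) / (1.0 - rho) for m in service_rates]

# Hypothetical edge server workload: frequent, light "local decision" jobs
# and rare, heavy "cloud sync" jobs. All rates are illustrative only.
lam = [8.0, 0.5]    # arrivals/s per class
mu = [20.0, 2.0]    # service completions/s per class
R = multiclass_ps_response_times(lam, mu)
for name, r in zip(["local", "sync"], R):
    print(f"{name}: mean response time {r * 1000:.1f} ms")
```

Even this toy model exposes the key multiclass effect: the heavy class inflates the shared utilization (here rho = 0.65), degrading the response time of the latency-critical class, which is the kind of interaction the paper's prediction models are meant to quantify.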
This paper extends a previously published short paper [2] by providing a more detailed description of the proposed system, an assessment of the security and dependability of the architecture, and a more extensive performance evaluation analysis.
The paper is organized as follows: Section 2 discusses related work, Section 3 introduces the main topics related to the case study, Section 4 describes the reference architecture of the edge system to be evaluated, Section 5 analyzes the security aspects of the reference architecture, Section 6 discusses reliability considerations, Section 7 details the modeling approach and presents the results, and conclusions follow in Section 8.
Section snippets
Related works
Edge computing allows joining together the potential offered by three important domains: cloud computing, mobile computing and Internet of Things (IoT). The goal is to find an equilibrium between the centripetal force towards huge, remote computing systems generated by the cloud, the centrifugal force towards localization of computing on private, powerful and inexpensive mobile devices and the fragmentation and dispersion typical of IoT. This has to be implemented by means of flexible and
Timely response communication systems during CBRN emergencies
In order to cope with medium-large scale accidents, whether these were originated by a perpetrator (e.g., terrorists) or occurred naturally (e.g., natural disasters), in a fast and highly coordinated manner, first responders need to count on updated information on the ongoing situation. Indeed, to be effective (and efficient) first responders have to act coordinately both “internally”, i.e., within the organization (intra-coordination), and “externally”, i.e., amongst organizations
A reference architecture
We propose a system to support the action of fire teams in case of large-scale complex emergencies, such as CBRN scenarios. The system aims at supporting multiple squads cooperating in large critical scenarios, in loose coordination with other authorities (e.g., national security, police, medical teams, anti-terrorism, special forces) and with the assistance of technological devices (e.g., AR supports, IoT sensors, drones, smart glasses, but also standard security cameras usually positioned
Computer security considerations
Due to the critical aspects of the proposed architecture, which may impact the success of a mission and the lives of operators, an analysis of computer security aspects is important to ensure that introducing a more complex mission support, which replaces human communication to increase situation awareness and the volume and kinds of available data, will not significantly introduce security threats. Other aspects of security, including physical security of the operators, such as all provisions
Reliability considerations
The proposed system (and its architecture) has been conceptualized based on the users' needs (i.e., those of the first responders) and, as such, it has to be tested against and, most of all, improved according to a sound risk engineering activity (a paper will follow in this respect). Nevertheless, some thought has already been given in that direction to start highlighting potential criticalities that ought to be designed out in order to make the proposed system inherently safe and secure. First of
The case study
The proposed case study is a system based on the architecture reported in Fig. 2. The infrastructure enables the mapping of various aspects of a large-scale emergency by collecting and transferring data from sensors carried by first responders, remotely operated vehicles, or network sensors. These data are used to monitor phenomena that cannot be easily measured by a single individual or unit. The resulting information, processed and assembled in real time, is then shared among all professionals
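The assembly step described above, where readings from heterogeneous sources are merged into one shared picture, can be sketched as a simple latest-value fusion. This is a hypothetical simplification of the case study's pipeline: the source identifiers, the grid-cell keying and the "most recent wins" rule are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    source: str   # hypothetical IDs, e.g. "responder-3", "drone-1"
    cell: tuple   # grid cell on the shared emergency map
    ppm: float    # measured concentration
    ts: float     # timestamp of the sample

def fuse(readings):
    """Assemble a shared situation map: keep only the most recent reading
    per map cell, regardless of which source produced it."""
    latest = {}
    for r in readings:
        if r.cell not in latest or r.ts > latest[r.cell].ts:
            latest[r.cell] = r
    return latest

readings = [
    Reading("responder-3", (4, 7), 12.5, 100.0),
    Reading("drone-1", (4, 7), 14.2, 101.5),   # newer sample for same cell
    Reading("responder-1", (2, 3), 3.1, 99.0),
]
situation = fuse(readings)
print(situation[(4, 7)].source)  # → drone-1
```

Keying the map by cell rather than by source is what lets a drone overflight transparently refresh a value first reported by a responder on foot.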
Conclusions
In this paper we have presented an interesting example of an edge computing application modeled on an IoT architecture of great interest for the management of complex emergencies. Due to its nature as a critical system, a first discussion on the security and reliability of the proposed solution has been presented, to provide the elements that justify and support the viability of further exploration and a more detailed analysis of the proposed design. Even if the system does not pose new
Conflict of interest
None.
References (32)
- et al., A cloud-based architecture for emergency management and first responders localization in smart city environments, Comput. Electr. Eng. (2016)
- Space Fly Multiagent Project. Call ITT 8729 Space-Based Services in Support of CBRNe Operations, ...
- et al., Modeling and evaluating performances of complex edge computing based systems: A firefighting support system case study, Proceedings of the Eleventh EAI International Conference on Performance Evaluation Methodologies and Tools, VALUETOOLS 2017 (2017)
- et al., Edge computing: vision and challenges, IEEE Internet Things J. (2016)
- et al., Pseudo-dynamic testing of realistic edge-fog cloud ecosystems, IEEE Commun. Mag. (2017)
- The emergence of edge computing, Computer (2017)
- et al., The promise of edge computing, Computer (2016)
- et al., Extending cloud resources to the edge: possible scenarios, challenges, and experiments, Proceedings of the 2016 International Conference on Cloud Computing Research and Innovations (ICCCRI) (2016)
- et al., Fog computing: helping the internet of things realize its potential, Computer (2016)
- et al., Fog and IoT: an overview of research opportunities, IEEE Internet Things J. (2016)
- Challenges and software architecture for fog computing, IEEE Internet Comput.
- Towards an autonomic approach for edge computing: Research articles, Concurr. Comput. Pract. Exp.
- Osmotic computing: a new paradigm for edge/cloud integration, IEEE Cloud Comput.
- Fog computing: platform and applications, Proceedings of the Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb)
- Scalable architecture for an automated surveillance system using edge computing, J. Supercomput.
- Bringing the cloud to the edge, Proceedings of the 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
Cited by (17)
- Performance evaluation of a fog WSN infrastructure for emergency management, Simulation Modelling Practice and Theory (2020)
  Citation excerpt: "The Edge server is in turn connected to the Cloud to provide information to the backoffice of the information management system of the general command, that integrates mission data with existing knowledge and interacts with other authorities in joint missions or coordinates all the squads involved in the operation, and to request the execution of complex computations that may be needed to extract additional knowledge from field data. For a more complete description of the system and of the personal equipment, the reader can refer to [13]. For the purposes of this paper, the Fog may be described in terms of three classes of nodes, namely Personal Support (PS), Simple Sensor (SS) and Intelligent Sensor (IS)."
- Internet-of-Things and fog-computing as enablers of new security and privacy threats, Internet of Things (Netherlands) (2019)
- Analytical approaches to QoS analysis and performance modelling in fog computing, Multi-Disciplinary Applications of Fog Computing: Responsiveness in Real-Time (2023)
- Evaluating defense services performance on military cloud continuum systems, 2023 IEEE International Workshop on Technologies for Defense and Security, TechDefense 2023 - Proceedings (2023)