Abstract:
In the last decade, Cloud computing has seen a surge of popularity. Clouds, with their scale and rich functionality, are used to outsource various infrastructure, platform, and software services. However, relying solely on distant Cloud Data Centers (DCs) can be inefficient for many applications involving mobile devices and the Internet of Things (IoT) in general. The more decentralized Fog computing paradigm has been proposed to augment Cloud availability and execution. This work addresses latency and power consumption in Fog computing networks. Models for power consumption and delay are proposed, and the performance of Fog computing is estimated using parameter settings based on real-world equipment and traffic. Our results characterize the balance between Fog and Cloud: applications requiring heavy computations (relative to the size of the offloaded data) are best served by Cloud DCs, while it is faster (and more power-efficient) to compute "lighter" requests in the Fog Nodes (FNs). But where does the trade-off between power consumption and delay lie in the context of Fog and Cloud? We answer this question by modeling multiple architectures and evaluating various network scenarios.
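The Fog-versus-Cloud trade-off described in the abstract can be illustrated with a minimal sketch. This is not the model proposed in the paper; it is a purely hypothetical linear delay/energy comparison, and all parameter values (RTTs, link rates, CPU capacities, power figures) are placeholder assumptions chosen only to show how "heavy" requests favor the Cloud DC while "light" requests favor the Fog Node.

```python
# Illustrative sketch only (not the paper's model): compare the total delay and
# energy of offloading one request to a Fog Node (FN) vs. a Cloud Data Center (DC)
# under a simple linear model. All numeric parameters are hypothetical.

from dataclasses import dataclass


@dataclass
class Site:
    name: str
    rtt_s: float        # round-trip propagation delay to the site (seconds)
    link_bps: float     # bit rate of the path towards the site (bit/s)
    cpu_hz: float       # processing capacity available to the request (cycles/s)
    power_net_w: float  # network power attributed to moving the data (W)
    power_cpu_w: float  # processing power while the request executes (W)


def delay_and_energy(site: Site, data_bits: float, cycles: float):
    """Return (total delay [s], total energy [J]) for one offloaded request."""
    t_net = data_bits / site.link_bps + site.rtt_s  # transfer + propagation
    t_cpu = cycles / site.cpu_hz                    # execution time
    energy = site.power_net_w * t_net + site.power_cpu_w * t_cpu
    return t_net + t_cpu, energy


# Hypothetical Fog Node: nearby (low RTT) but modest compute capacity.
fog = Site("Fog Node", rtt_s=0.002, link_bps=100e6, cpu_hz=5e9,
           power_net_w=2.0, power_cpu_w=20.0)
# Hypothetical Cloud DC: distant (high RTT) but much larger compute capacity.
cloud = Site("Cloud DC", rtt_s=0.060, link_bps=100e6, cpu_hz=50e9,
             power_net_w=6.0, power_cpu_w=20.0)

data_bits = 8e6  # 1 MB of offloaded data per request
for label, cycles in [("light request (1e8 cycles)", 1e8),
                      ("heavy request (1e11 cycles)", 1e11)]:
    print(label)
    for site in (fog, cloud):
        d, e = delay_and_energy(site, data_bits, cycles)
        print(f"  {site.name}: delay = {d * 1e3:.1f} ms, energy = {e:.2f} J")
```

With these placeholder numbers, the light request finishes sooner and uses less energy at the Fog Node, while the heavy request is both faster and more energy-efficient at the Cloud DC, mirroring the qualitative conclusion stated in the abstract.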
Published in: 2019 IEEE Sustainability through ICT Summit (StICT)
Date of Conference: 18-19 June 2019
Date Added to IEEE Xplore: 08 August 2019