Low-latency edge cooperation caching based on base station cooperation in SDN based MEC

https://doi.org/10.1016/j.eswa.2021.116252

Highlights

  • A low-latency edge caching method based on multi-base station cooperation is proposed.

  • A migration method that balances communication overhead and migration overhead is proposed.

  • The problem of minimizing latency is transformed into a problem of maximizing cache reward.

  • A reinforcement learning method is used to obtain the service migration scheme.

  • The performances of the proposed algorithms are evaluated through experiments.

Abstract

With the growth of mobile terminal devices and massive network data, users place higher requirements on delay and service quality. To reduce user access latency and cache diverse content in the edge network more effectively, a low-latency edge caching method is proposed. A cache model based on base station cooperation is established, and the delay under different transmission modes is considered. The problem of minimizing latency is then transformed into a problem of maximizing cache reward, and a greedy algorithm based on the primal-dual interior-point method is used to obtain a strategy for the original problem. Meanwhile, to improve service quality and balance communication overhead against migration overhead, a migration method based on balancing these two overheads is proposed. A model that balances communication overhead and migration overhead is established, and a reinforcement learning method is used to obtain a migration scheme that maximizes accumulated revenue. Comparison results show that the proposed caching method enhances the cache reward and reduces delay, while the migration algorithm increases service migration revenue and reduces communication overhead.

Introduction

With the explosive growth of mobile Internet traffic, extending network virtualization to wireless networks is an effective approach (Han, Gopalakrishnan, Ji, & Lee, 2015). Software-Defined Networking (SDN) is an effective technology for network virtualization. By separating the data plane from the control plane, SDN makes application upgrades and equipment upgrades independent of each other, accelerating the deployment of new applications. Meanwhile, SDN abstracts the network, which simplifies the network model and makes network control more flexible. Moreover, SDN centralizes the control logic so that users can obtain global network information through the controller; thus, the network can be optimized and its performance improved (Mahmood, Butler, & Jennings, 2018). On this basis, Mobile Edge Computing (MEC) is introduced, and tasks that would otherwise be uploaded to the cloud are processed and stored at the edge, which reduces latency.

Combining MEC and SDN makes it possible to deploy computing, storage, and network resources in edge networks closer to users (Liu et al., 2017). Long-distance data transmission incurs a large transmission delay, and a large amount of diversified cache content cannot be stored on a server with limited buffer space. Therefore, edge caching technology is worth studying. At the same time, due to user mobility, the migration strategy is also worth studying in order to maintain service quality.

There are some existing works on edge caching. To reduce delay and backhaul load, Ndikumana, Ullah, LeAnh, Tran, and Hong (2017) proposed a data caching method based on MEC server cooperation, which improves the utilization of system resources. To reduce the user's access delay, Zhang et al. (2018) proposed a data placement algorithm that optimizes bandwidth allocation, which reduces network delay. Mehrabi, Siekkinen, and Yla-Jaaski (2018) proposed a cache replacement method based on edge server collaboration, which effectively improves the cache hit rate. Jiang, Ma, Bennis, Zheng, and You (2019) proposed a content popularity prediction algorithm based on user access behavior, which has low complexity and accurate prediction. Saputra et al. (2019) proposed a framework based on distributed deep learning, which predicts content more accurately and protects user information, making it more secure. Jiang et al. (2019) proposed a method based on multi-agent reinforcement learning to obtain a caching strategy, which effectively reduces user access latency. Chen, Liu, Zhao, and Zhu (2020) proposed a machine-learning-based edge collaborative caching method, which effectively reduces latency and enhances resource utilization. Zhang et al. (2020) proposed a collaborative edge caching method based on deep reinforcement learning, which improves the cache hit rate and the utilization of system resources.

Although the above studies achieve a good caching effect, they do not consider cooperation between base stations. Since base stations are usually interconnected by high-speed optical fiber, such cooperation can improve the efficiency of the caching strategy, and it is therefore considered in this paper. If the local base station has not cached the content requested by a local user, a neighboring base station in the network that has cached the requested content transmits it to the local base station. Therefore, the number of fetches from the remote edge data center can be reduced through cooperative caching between base stations; meanwhile, the transmission delay is reduced, improving the user experience.

There are some existing works on service migration. To allow users to communicate normally while moving, Machen, Wang, Leung, Ko, and Salonidis (2018) designed a layered framework for service migration, which is highly flexible and stable. Wu, Chen, Zhou, and Chen (2019) proposed a migration method that balances migration overhead and non-migration delay, which reduces migration cost and improves service quality. Gao et al. (2019) used reinforcement learning to obtain service migration strategies; this method considers the impact of each decision on the overall situation and optimizes the migration strategy. Wang, Ge, and Zhou (2020) proposed a service migration method that protects user location information, which preserves location privacy while still obtaining an effective migration strategy. Shi and Wang (2013) proposed a dynamic migration method, which carries out efficient service migration at a lower cost and flexibly responds to changes in user mobility patterns. Considering that most existing works focus on a single migration algorithm without applying Long Short-Term Memory (LSTM) to migration decision making, Jing et al. (2018) proposed an LSTM-based service migration method, which effectively predicts the available memory and CPU resources.

Although the above works guarantee service continuity to a certain extent, they do not consider balancing communication overhead against migration overhead. When the communication distance becomes too long, service quality degrades and communication overhead grows; meanwhile, frequent service migration incurs a huge migration overhead and puts greater pressure on servers and the network. The service migration method proposed here, based on balancing communication and migration costs, addresses these problems: communication overhead and migration overhead are reduced, and service continuity is guaranteed to a certain extent, thereby improving the user experience.
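To make this trade-off concrete, the following minimal sketch (in Python) casts the per-slot migration decision as a small Q-learning problem: the state is the (user location, service location) pair, the action is the server that hosts the service in the next slot, and the reward is the negative sum of a distance-based communication cost and a one-off migration cost. The linear server topology, cost coefficients, and random-walk mobility model are illustrative assumptions, not the paper's model.

import random

N_SERVERS = 5            # edge servers on a line (hypothetical topology)
COMM_COST_PER_HOP = 1.0  # communication overhead per hop between user and service
MIGRATION_COST = 3.0     # one-off overhead of moving the service
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {}  # Q[(user, service)] -> list of estimated action values

def q_values(state):
    return Q.setdefault(state, [0.0] * N_SERVERS)

def step(user, service, action):
    """Apply a migration decision; return (next_state, reward)."""
    migration = MIGRATION_COST if action != service else 0.0
    communication = COMM_COST_PER_HOP * abs(user - action)
    reward = -(communication + migration)  # maximizing reward balances both overheads
    user = max(0, min(N_SERVERS - 1, user + random.choice((-1, 0, 1))))  # random-walk mobility
    return (user, action), reward

state = (0, 0)
for _ in range(20000):
    values = q_values(state)
    action = random.randrange(N_SERVERS) if random.random() < EPS \
        else max(range(N_SERVERS), key=values.__getitem__)
    next_state, reward = step(*state, action)
    values[action] += ALPHA * (reward + GAMMA * max(q_values(next_state)) - values[action])
    state = next_state

# Learned policy: for each state, migrate only when the communication saving
# outweighs the one-off migration cost.
print({s: max(range(N_SERVERS), key=Q[s].__getitem__) for s in sorted(Q)})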

To effectively cache diverse content in the edge network and reduce latency, a low-latency edge caching method is proposed. In the proposed method, the edge collaborative caching model is first established. Secondly, the delay under different transmission modes is considered and the corresponding formulas are established. Finally, the problem of minimizing latency is transformed into a problem of maximizing cache reward, and a greedy algorithm based on the primal-dual interior-point method is used to obtain a strategy for the original problem. Meanwhile, to improve service quality and balance communication overhead against migration overhead, a service migration method based on this balance is proposed: the balancing model is established, and a reinforcement learning method is used to obtain a migration scheme. The highlights of this paper are as follows.

  • (1)

    To effectively cache diverse content in the edge network and reduce user access latency, this paper proposes a low-latency edge caching method based on multi-base station cooperation. In this method, the problem of minimizing latency is transformed into a problem of maximizing cache reward, thereby effectively reducing user access latency.

  • (2)

    As a user moves, the communication distance between the user and the server providing the ongoing service grows, degrading the quality of service. To solve this problem, a migration method based on balancing communication and migration overhead is proposed. A balanced overhead model of communication and migration is established, and a reinforcement learning method is used to obtain the migration scheme, so that a more reasonable migration strategy can be obtained.

The rest of this paper is structured as follows. Section 2 presents the low-latency edge caching method and the service migration method. Section 3 describes the proposed algorithms. Section 4 describes the experimental verification. Finally, conclusions and future work are given in Section 5.


The low-latency edge caching based on multi-base station cooperation

Aiming at the problem of high latency in responding to user requests due to the limited storage space of base stations, a low-latency edge caching method is proposed in this paper. When the local base station has not cached the file requested by a user, a neighboring base station that has cached the file transmits it to the local base station where the user is located. If none of the neighboring base stations has cached the requested file, the file is obtained from the remote edge data center.
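As a minimal illustration of the three transmission modes just described (local hit, neighboring-base-station hit, and remote fetch from the edge data center), the Python sketch below computes the delay of a single request; the delay constants and cache contents are illustrative assumptions, not values from the paper.

D_LOCAL, D_NEIGHBOR, D_REMOTE = 1.0, 3.0, 10.0  # assumed per-request delays (ms)

def request_delay(file_id, local_cache, neighbor_caches):
    """Delay to serve one request under cooperative caching."""
    if file_id in local_cache:                      # served by the local base station
        return D_LOCAL
    if any(file_id in c for c in neighbor_caches):  # fetched over inter-base-station fiber
        return D_NEIGHBOR
    return D_REMOTE                                 # fallback: remote edge data center

# File 7 is cached at a neighboring base station, so the remote fetch is avoided.
print(request_delay(7, local_cache={1, 2}, neighbor_caches=[{3, 7}, {4}]))  # -> 3.0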

The low-latency edge caching algorithm based on multi-base station cooperation

The pseudo-code of the low-latency edge caching algorithm is shown in Algorithm 1. Firstly, the static popularity $\theta_k^b$ of each file is calculated according to formula (1) (Algorithm 1, line 1). Secondly, the optimal constraint variables $\bar{e}_{b,k,t}$ and $Y_k$ of the continuous relaxation are obtained by the primal-dual interior-point algorithm (Algorithm 1, lines 2-7). Finally, the maximum cache reward is calculated from the continuous solution $\bar{e}_{b,k,t}$, and the best cache strategy is obtained.
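The relax-then-round structure of Algorithm 1 can be sketched as follows: solve the continuous relaxation of the cache-placement problem with a linear-programming solver, then greedily fix the placement variables to 1 in order of their fractional values until each base station's capacity is exhausted. The popularities, file sizes, and capacities below are illustrative assumptions, and SciPy's linprog (method="highs") stands in for the paper's primal-dual interior-point solver.

import numpy as np
from scipy.optimize import linprog

K, B = 6, 2                                              # files, base stations (assumed)
theta = np.array([0.35, 0.25, 0.15, 0.12, 0.08, 0.05])   # static popularities (Zipf-like)
size = np.array([2.0, 1.0, 3.0, 1.0, 2.0, 1.0])          # file sizes
cap = np.array([4.0, 3.0])                               # per-base-station cache capacity

# Variables e[b, k] flattened row by row; maximize sum_{b,k} theta_k * e[b, k].
c = -np.tile(theta, B)                                   # linprog minimizes, so negate
A_ub = np.zeros((B, B * K))
for b in range(B):
    A_ub[b, b * K:(b + 1) * K] = size                    # capacity constraint per base station
res = linprog(c, A_ub=A_ub, b_ub=cap, bounds=[(0, 1)] * (B * K), method="highs")

e_frac = res.x.reshape(B, K)                             # continuous (relaxed) solution
placement = [set() for _ in range(B)]
for b in range(B):                                       # greedy rounding guided by e_frac
    remaining = cap[b]
    for k in sorted(range(K), key=lambda k: -e_frac[b, k]):
        if e_frac[b, k] > 0 and size[k] <= remaining:
            placement[b].add(k)
            remaining -= size[k]
print(placement)                                         # files cached at each base station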

Experiment environment

This evaluation environment includes cloud data centers, edge cloud data centers, controllers, node devices, and mobile devices; the architecture of the evaluation environment is shown in Fig. 3. A VMware virtual machine running the 64-bit Ubuntu 18.04 LTS operating system is used. Open vSwitch 2.9.0 was selected as the SDN switch in this paper, and these switches follow the OpenFlow protocol. Meanwhile, the Floodlight controller was selected as the SDN controller to manage the switches.
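The paper does not spell out how the topology was emulated; one common way to wire Open vSwitch switches to a Floodlight controller is through Mininet, as in the hypothetical Python sketch below (run as root, assuming Floodlight listens on its default OpenFlow port 6653 on localhost). The two-switch, two-host topology is purely illustrative.

from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch

net = Mininet(controller=RemoteController, switch=OVSSwitch)
net.addController('c0', ip='127.0.0.1', port=6653)   # Floodlight's OpenFlow endpoint
s1, s2 = net.addSwitch('s1'), net.addSwitch('s2')    # stand-ins for two base stations
h1, h2 = net.addHost('h1'), net.addHost('h2')        # stand-ins for mobile users
net.addLink(h1, s1); net.addLink(h2, s2); net.addLink(s1, s2)
net.start()
net.pingAll()                                        # basic reachability check via the controller
net.stop()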

Conclusions

To effectively cache diverse content in the edge network and reduce latency, a low-latency edge caching method is proposed. The cache model based on base station cooperation is established, and the delay under different transmission modes is considered. The problem of minimizing latency is then transformed into a problem of maximizing cache reward. Meanwhile, to improve service quality and reduce migration overhead, a service migration method based on balancing communication overhead and migration overhead is proposed.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (NSFC) under grants No. 62171330 and 61873341, the Key Research and Development Plan of Hubei Province, China under grant No. 2020BAB102, the Open Project of the Hubei Province Key Laboratory of Systems Science in Metallurgical Process (Z202002), and the Open Project of the Wuhan University of Technology Chongqing Research Institute (ZL2021-4). Any opinions, findings, and conclusions are those of the authors and do not necessarily reflect the views of the funding agencies.

References (42)

  • K. Avrachenkov et al.

    Optimization of caching devices with geometric constraints

    Performance Evaluation

    (2017)
  • N. Carlsson et al.

    Caching and optimized request routing in cloud-based content delivery systems

    Performance Evaluation

    (2014)
  • S.A. Krashakov et al.

    On the universality of rank distributions of website popularity

    Computer Networks

    (2006)
  • D. Navarro Guevara

    Primitive transcendental functions and symbolic computation

    Journal of Symbolic Computation

    (2018)
  • A. Arbel

    An Interior Multiple Objective Primal-Dual Linear Programming Algorithm Using Efficient Anchoring Points

    Journal of The Operational Research Society

    (1995)
  • B.N. Bharath et al.

    Caching With Time-Varying Popularity Profiles: A Learning-Theoretic Perspective

    IEEE Transactions on Communications

    (2018)
  • Y. Chen et al.

    Mobile Edge Cache Strategy Based on Neural Collaborative Filtering

    IEEE Access

    (2020)
  • B.J. Claessens et al.

    Convolutional Neural Networks for Automatic State-Time Feature Extraction in Reinforcement Learning Applied to Residential Load Control

    IEEE Transactions on Smart Grid

    (2018)
  • V. Eramo et al.

    An Approach for Service Function Chain Routing and Virtual Function Network Instance Migration in Network Function Virtualization Architectures

    IEEE ACM Transactions on Networking

    (2017)
  • Z.P. Gao et al.

    Deep Reinforcement Learning Based Service Migration Strategy for Edge Computing

    IEEE International Conference on Service-Oriented System Engineering (SOSE)

    (2019)
  • K. Guo et al.

    Caching in Base Station with Recommendation via Q-Learning

    IEEE Wireless Communications and Networking Conference (WCNC)

    (2017)
  • B. Han et al.

    Network function virtualization: Challenges and opportunities for innovations

    IEEE Communications Magazine

    (2015)
  • G. Hasslinger et al.

    Comparing Web Cache Implementations for Fast O(1) Updates Based on LRU, LFU and Score Gated Strategies

  • W. Jiang et al.

    Multi-Agent Reinforcement Learning Based Cooperative Content Caching for Mobile Edge Networks

    IEEE Access

    (2019)
  • Y. Jiang et al.

    User Preference Learning-Based Edge Caching for Fog Radio Access Network

    IEEE Transactions on Communications

    (2019)
  • H.F. Jing et al.

    LSTM-Based Service Migration for Pervasive Cloud Computing

  • T. Kashiwagi et al.

    Flexible and Efficient Partial Migration of Split-memory VMs

  • N. Ketkar

    Stochastic Gradient Descent

    (2017)
  • M. Lee et al.

    Learning to Branch: Accelerating Resource Allocation in Wireless Networks

    IEEE Transactions on Vehicular Technology

    (2020)
  • C. Liang et al.

    Enhancing QoE-Aware Wireless Edge Caching With Software-Defined Wireless Networks

    IEEE Transactions on Wireless Communications

    (2017)
  • J. Liu et al.

    A Scalable and Quick-Response Software Defined Vehicular Network Assisted by Mobile Edge Computing

    IEEE Communications Magazine

    (2017)