MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing

https://doi.org/10.1016/j.jpdc.2022.04.013

Highlights

  • A task offloading approach in the Internet of Vehicles (IoV) is proposed.

  • An MDP-based Intelligent Big Data Processing (MIDP) Scheme is proposed.

  • Simulations demonstrate that the MIDP scheme achieves better performance.

Abstract

The number of Vehicle Equipment (VE) devices connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires substantial computing resources. Offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing, and computation, is therefore a promising approach. In a network, VEC servers are deployed by multiple providers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to a VEC server so as to minimize the task completion time and the total big data processing time. As a vehicle passes VEC servers, it can decide to offload its task to the VEC server within its current communication range, or continue driving until it reaches the communication range of the next server. This problem can be viewed as an asset selling problem. Making a smart decision is challenging for a vehicle with only a local view, because the vehicle does not know when the next VEC server will be available or how much computing capacity it will have. First, this paper formulates the problem as a Markov Decision Process (MDP), and defines and analyzes the state set, action set, reward model, and state transition probability distribution. Second, it uses the Asynchronous Advantage Actor-Critic (A3C) algorithm to solve this MDP: it builds the elements of the A3C algorithm and uses the Actor (the policy function) to generate the vehicle's two actions, offloading and moving without offloading. Third, it uses the Critic (the value function) to evaluate the Actor's behavior and guide the Actor's actions in subsequent stages. The Actor starts from the initial state in the state space and proceeds until it enters the termination state, forming a complete decision-making process. Through learning, the scheme minimizes the task offloading completion time and thereby reduces the delay of big data processing. Compared to the Immediate Offload (IO) scheme and the Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99%, respectively; it is close to the EO scheme in terms of task completion rate and achieves up to a 66.6% improvement over the IO scheme.

Introduction

There is explosive growth in devices, with more than 20 billion devices connected to the Internet of Things (IoT) in 2020 [38], and the rate of growth is accelerating. The amount of data generated every day reaches 2.5 trillion bytes. The massive data generated by these devices has promoted the development of big data [7], [21], and current IoT devices have improved significantly in terms of computing power and storage capacity [45], [47], [53]. At the same time, mobile applications are becoming increasingly diverse and complex, causing a growing demand for computing, storage, and network resources [61], [54], [36]. Many computing-resource-intensive applications, such as virtual reality [57], augmented reality [28], [6], pattern recognition [29], [50], and mobile health, generate large amounts of data. Big data processing exceeds the processing capacity of IoT devices, causing users to wait for a long time [32], [56] and degrading the user experience [40], [59]. With the wide application of computing-resource-intensive technologies such as machine learning and artificial intelligence in various applications, the gap between the computing power of user terminal equipment and application requirements remains large [61], [60], [33]. Task offloading in Mobile Edge Computing (MEC) offloads tasks containing large amounts of data to servers or the cloud, and this style of big data processing provides an excellent solution to such problems [16], [11], [31]. On the one hand, MEC shifts big-data-related computing from the cloud to the edge, where a large number of network devices are deployed [39], [17], [26]. Data is sensed and acquired at the edge of the network [30], [27], [15], so placing computation at the edge eliminates the long paths, high energy consumption, and increased cloud load required to upload big data to the cloud [34], [51], [23]. Users who need big data processing results are also at the edge of the network [8], [46], [42], which allows them to obtain results without network latency and jitter, improving the user experience [20], [18], [5]. On the other hand, MEC addresses the weak big data processing capacity of IoT devices by offloading computationally intensive tasks to surrounding edge servers [1], [55], [37]. Therefore, task offloading in MEC has become a hot issue in current big data processing research [2], [48], [41].

The Internet of Vehicles (IoV) has been widely adopted as a typical application scenario for MEC in big data networks [61], [31], [4]. In IoV, when Moving Vehicles (MVs) pass through the communication range of Road Side Units (RSUs), they can offload their tasks via Vehicle-to-Infrastructure (V2I) communication to the edge servers connected to the RSUs, thus completing the task offloading [31]. Task offloading can effectively reduce the computational load of moving vehicles, minimizing energy consumption and the time required for task execution, which is crucial for big data processing related to time-critical tasks [61], [31].

There is already some research on task offloading in IoV [61], [31]. These studies focus on how mobile vehicles can make intelligent task offloading decisions to optimize system performance and the efficiency of big data processing. One of the most classic problems is how a mobile vehicle chooses between local task execution and offloading to an edge server [61], [31]. With local execution, the vehicle avoids the time required to upload a large amount of data to the edge servers (the transmission delay) and the associated energy consumption; still, the computation takes a long time due to the weak computational power of MVs [61], [31]. Offloading to an edge server, in contrast, incurs the cost and time of uploading, but the computation time (the computing delay) is much shorter thanks to the high computational power of edge servers; so although the uploading time increases, the total completion time, and hence the total big data processing time, may be smaller [61], [31]. Thus, current research focuses on deciding whether to offload when MVs have tasks. There are two task offloading modes. In the 0/1 mode, the task is treated as a whole and is either offloaded entirely or executed locally [2]. In the non-0/1 mode, the task can be split in any proportion, with part offloaded to edge servers and the rest executed locally [2], as illustrated by the sketch below.
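
As a minimal illustration of this trade-off (not a model taken from the paper), the following Python sketch compares the completion time of local execution, 0/1 offloading, and partial offloading; the parameter names, the values in the example, and the assumption that the local and offloaded parts run in parallel are all illustrative assumptions.

```python
def offloading_delays(data_bits, cpu_cycles, f_local, f_edge, rate_v2i, alpha):
    """Illustrative completion times for the two offloading modes above.
    data_bits: task input size (bit); cpu_cycles: required CPU cycles;
    f_local / f_edge: CPU frequencies (cycles/s); rate_v2i: uplink rate (bit/s);
    alpha: fraction offloaded in the non-0/1 (partial) mode."""
    t_local = cpu_cycles / f_local                           # local execution: no upload, slow compute
    t_full = data_bits / rate_v2i + cpu_cycles / f_edge      # 0/1 mode: upload the whole task, fast compute
    # Partial mode: the local part and the offloaded part are assumed to run in parallel.
    t_partial = max((1 - alpha) * cpu_cycles / f_local,
                    alpha * data_bits / rate_v2i + alpha * cpu_cycles / f_edge)
    return {"local": t_local, "0/1 offload": t_full, "partial": t_partial}

# Example: a 4 MB task needing 1e9 cycles, 1 GHz vehicle CPU, 10 GHz edge CPU, 20 Mbit/s uplink.
print(offloading_delays(4 * 8e6, 1e9, f_local=1e9, f_edge=10e9, rate_v2i=20e6, alpha=0.7))
```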

Subsequent research has further extended the scope and the selectable destinations of task offloading. For example, Liu et al. [31] proposed an Adaptive Task Offloading Algorithm (ATOA) for time-critical tasks in the heterogeneous Internet of Vehicles (IoV). In their task offloading model, three different entities can compute tasks: local MVs, edge servers, and the cloud [31]. Thus, there are more candidate offloading destinations than in the previously discussed research [31]. The first option is to execute tasks locally; in this case, the local MVs are computationally weak and thus take a long time to compute tasks. The second option is to offload tasks to edge servers; since the computational power of edge servers is much stronger than that of MVs, the computing delay is significantly reduced compared to local computation [31]. Although offloading increases the task uploading time, offloading to edge servers uses the faster V2I communication method, so the transmission delay required to upload the task is small, and this option can potentially reduce the task completion time. The third option is to offload tasks to the cloud [31]. The cloud has abundant computational power, so the computing delay for executing tasks is minimal; however, uploading the task over Vehicle-to-Cloud (V2C) communication is slower than V2I and thus incurs a higher transmission delay. Therefore, the problem solved by the ATOA algorithm is to intelligently decide where to offload the tasks generated by MVs.
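
A simple way to picture this kind of destination choice is to compare the total (transmission plus computing) delay of the three options and pick the smallest. The sketch below does exactly that; the additive delay model and all parameter values are chosen purely for illustration and are not taken from Ref. [31].

```python
def destination_delays(data_bits, cpu_cycles, f_local, f_edge, f_cloud,
                       rate_v2i, rate_v2c):
    """Total delay for each of the three offloading destinations discussed above."""
    return {
        "local": cpu_cycles / f_local,                          # no upload, slowest compute
        "edge":  data_bits / rate_v2i + cpu_cycles / f_edge,    # fast V2I upload, fast compute
        "cloud": data_bits / rate_v2c + cpu_cycles / f_cloud,   # slower V2C upload, fastest compute
    }

delays = destination_delays(4 * 8e6, 1e9, f_local=1e9, f_edge=10e9, f_cloud=50e9,
                            rate_v2i=20e6, rate_v2c=5e6)
best = min(delays, key=delays.get)   # pick the destination with the smallest total delay
print(best, delays)
```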

Previous research has mainly focused on networks in which the destination of task offloading is assumed to be fixed, so the remaining decision is where to offload in order to maximize the efficiency of big data processing. However, in practice, the destinations to which vehicles can offload tasks change constantly, especially in IoV. In IoV, tasks of mobile vehicles are generated randomly during movement. Because of this mobility, the edge servers that MVs can offload to keep changing, and the MVs are initially uncertain about where the edge servers are and what computing power and storage capacity they have. Task offloading therefore becomes more complicated in this uncertain situation. First, when an MV moves into the communication range of the current edge server, the offloading decision at this stage is equivalent to that of the previous study [31], i.e., deciding between offloading and local execution. However, in an actual IoV, this decision is only part of the overall decision. In an actual network, numerous RSUs are deployed along the road and connected to edge servers, which are in turn connected to the network. However, since the edge servers may be deployed by different service providers, the servers do not share resources due to the competitive relationship between the providers. Thus, vehicles transmit tasks to edge servers, and the edge servers execute big data processing and computation without sharing computational resources, similarly to Ref. [16]. What makes the situation even worse is that vehicles do not know in advance which edge servers are available along their route, nor the computing power of these edge servers [16], [31]. Thus, when an MV decides to offload a task to the current edge server, the current computing capacity of that server may be relatively small, resulting in a large computing delay and a large total data processing delay. If instead the MV decides not to offload to the current edge server but to move forward and look for a better one, it incurs an additional opportunity cost (the searching delay). There are then two possibilities. In the favorable case, the MV finds that the next edge server has a large computing capacity; if the task is offloaded there, the completion time can be shorter than that of offloading to the current edge server, even though the searching delay has increased. In this case, it is wise for the MV not to offload to the current edge server but to seek the next, better edge server, which improves the efficiency of big data processing. It is also possible, however, that the next edge server has even less computing power, or that its additional computing power does not reduce the computing delay enough to compensate for the increased searching delay; in that case, offloading to the current edge server is the wise choice. Therefore, at each edge server, an MV has two options: offload the task to the current edge server, or skip it and keep looking for a stronger edge server. Since the MV does not know when and where the next edge server will be available, the searching delay (i.e., the opportunity cost) keeps increasing. This problem can thus be likened to the asset selling problem.
The basic asset selling problem [4], [43] can be described as follows. A seller has an asset that it hopes to sell at a high price. Multiple suppliers arrive, independently and identically distributed (i.i.d.) over discrete time slots, each with a bid [4], [43]. When a supplier arrives and bids, the seller has to decide whether to accept this bid or wait for future ones [4], [43]. The seller must pay a cost to observe the next bid. Previous bids cannot be recalled, just as MVs cannot offload tasks to edge servers they have already passed. The decision process ends when the seller accepts a bid or a deadline is reached, as in Refs. [22], [9].
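
To make the analogy concrete, the sketch below applies a simple one-step-lookahead stopping rule to the offload-now-or-keep-driving decision: offload to the current server unless the expected completion time at the next server, plus the searching delay, is smaller. This heuristic, the empirical capacity samples, and all parameter names are illustrative assumptions, not the MIDP policy.

```python
def offload_now(current_capacity, next_capacity_samples, search_delay,
                data_bits, cpu_cycles, rate_v2i):
    """Return True if the MV should offload to the current edge server.
    next_capacity_samples: previously observed capacities (cycles/s) of servers
    on this road segment, used as an empirical distribution of the next server."""
    t_now = data_bits / rate_v2i + cpu_cycles / current_capacity
    # Expected completion time if the MV skips this server and waits for the next one.
    t_next = sum(data_bits / rate_v2i + cpu_cycles / c
                 for c in next_capacity_samples) / len(next_capacity_samples)
    return t_now <= search_delay + t_next

# Example: current server at 2 GHz; capacities seen before on this road (Hz); 3 s search delay.
print(offload_now(2e9, [1e9, 4e9, 8e9], search_delay=3.0,
                  data_bits=4 * 8e6, cpu_cycles=1e9, rate_v2i=20e6))
# Busy city: many servers ahead, small search delay -> the MV can afford to be picky.
# Remote area: large search delay -> offloading to the current server is usually better.
```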

The task offloading problem in IoV is similar to the asset selling problem [4], [43]. An MV with a task corresponds to the seller with an asset; the task is generated at time 0, which is also when the decision-making process begins [4], [43]. The edge servers that the MV can offload to during its movement correspond to the suppliers, and the computing capacity of an edge server corresponds to a supplier's bid. Every time the MV encounters an edge server it can offload to, it has to decide whether to offload the task to this edge server or wait for the next one, and it must pay the searching delay as the cost of reaching the next edge server. The entire decision process ends when the MV chooses an edge server to offload the task to, or when the deadline expires. Since MVs do not know whether there are edge servers ahead of them to offload tasks to while driving, it is challenging for MVs that only have a local view to make smart decisions, and previous work on task offloading did not take this type of problem into account. We observe that, although the number of available edge servers and their computing power are uncertain at any specific moment, the distribution of edge servers in a given network follows regular patterns. For example, the number of roadside edge servers in a busy city is much greater than in a remote area. In a busy city, there are many edge servers for data processing to choose from while MVs are moving, so the cost of being selective is relatively low and MVs can afford to be more demanding in choosing edge servers. In a remote area, by contrast, the number of roadside edge servers is very small, so skipping an edge server means a very high cost (searching delay) to reach the next one, and the criterion for accepting an edge server should be relaxed. Exploiting this knowledge allows MVs to efficiently select better edge servers for data processing in different networks. In this way, even when MVs do not know which edge servers are available on the roadside, knowing the overall situation can still effectively guide MVs in choosing better edge servers. Thus, this paper proposes, for the first time, to learn such environmental knowledge by Machine Learning (ML), so that statistical regularities guide the task offloading of MVs. Even though MVs have only a local view, optimized results can be achieved based on the experience of previous MVs, thereby improving the overall efficiency of big data processing. To summarize, the main innovations of this work are as follows:

(1) A task offloading approach in IoV based on the combination of global environmental awareness and the local view is proposed. The actual task offloading problem in IoV is not just the decision to execute locally or offload to an edge server, which has been much studied in previous research. More important is the decision of whether to offload to the current edge server or to continue and try to offload to the next one. Thus, the complexity of this type of task offloading is much greater than in previous studies. MVs have only a local view and do not know when the next edge server will appear or how much computing capacity it will have, so it is difficult to make a smart offloading decision. To solve this issue, this paper proposes an intelligent learning approach that learns the roadside environment to obtain a global picture of the task offloading environment and then combines it with the local view of MVs to make smart offloading decisions. This scheme gives better results than previous task offloading strategies that rely only on the MVs' local view.

(2) An MDP-based Intelligent Big Data Processing (MIDP) scheme for vehicular edge computing is proposed to help MVs make smart decisions. We first formulate the task offloading problem as an MDP, and define and analyze the state set, action set, reward model, and state transition probability distribution. Then, the Asynchronous Advantage Actor-Critic (A3C) algorithm is used to solve the MDP problem. We construct the elements of the A3C algorithm: the Actor (policy function) generates the vehicle's two actions, offloading and moving without offloading, while the Critic (value function) evaluates the Actor's behavior and guides its actions in subsequent stages. The Actor starts from the initial state in the state space and proceeds until it enters the termination state, forming a complete decision-making process. The completion time of task offloading is minimized through learning (a simplified actor-critic sketch is given after this list).

(3) Finally, extensive experiments show that the proposed MIDP scheme performs better than existing schemes. Compared to the Immediate Offload (IO) scheme and the Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99%, respectively; it is close to the EO scheme in terms of task completion rate and achieves up to a 66.6% improvement over the IO scheme.
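
As a rough sketch of how the Actor and Critic described in contribution (2) interact, the following PyTorch snippet implements a single-worker (synchronous) simplification of A3C with the two actions, offloading and moving without offloading. The state encoding, network sizes, loss weights, and the assumption that the per-step reward is the negative incurred delay are all illustrative; a full A3C would additionally run several such workers asynchronously against a shared model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """Shared trunk with an Actor (policy) head and a Critic (value) head."""
    def __init__(self, state_dim, n_actions=2, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)   # action 0: offload, action 1: keep moving
        self.critic = nn.Linear(hidden, 1)

    def forward(self, state):
        h = self.trunk(state)
        return F.softmax(self.actor(h), dim=-1), self.critic(h).squeeze(-1)

def update(model, optimizer, states, actions, returns):
    """One actor-critic update over a finished decision episode.
    states: [T, state_dim]; actions: [T] (long); returns: [T] discounted returns,
    where each per-step reward is assumed to be the negative delay incurred."""
    probs, values = model(states)
    advantage = returns - values.detach()                      # Critic evaluates the Actor
    log_prob = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1) + 1e-8)
    actor_loss = -(log_prob * advantage).mean()                # policy-gradient term
    critic_loss = F.mse_loss(values, returns)                  # value regression term
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    loss = actor_loss + 0.5 * critic_loss - 0.01 * entropy     # entropy bonus encourages exploration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Assumed 4-dimensional state, e.g. (elapsed delay, remaining deadline, server capacity, data size).
model = ActorCritic(state_dim=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```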

The rest of this paper is organized as follows. Related works are reviewed in Section 2. The system model and problem statement are presented in Section 3. In Section 4, the MDP-based Intelligent Big Data Processing (MIDP) scheme for vehicular edge computing is proposed. The experimental results are given in Section 5. We conclude in Section 6.

Section snippets

Related works

Task offloading has become an important research issue in current edge computing and big data [60], [6], [16], [11], [31], [37], [2], [48]. The essence of task offloading lies in the difference in computing power and storage capacity between different devices in the network, and in the mismatch between the handling capacity of a device and the capacity required by its tasks. Because different tasks need to be processed on different devices [12], a load imbalance arises between the network

Network model

Vehicle Equipment (VE) usually has some local computing power. Still, when computationally intensive tasks arrive, it is difficult for vehicles with limited local computing resources to complete the computation of these tasks quickly. Thus, offloading these computationally intensive tasks to Mobile Edge Computing (MEC) servers needs to be considered. MEC servers usually have higher computing power, so they have many computing resources that are well suited to handle computationally

Research motivation

In a traditional vehicular edge computing network, if a vehicle has computationally intensive tasks to process, it usually offloads them to a nearby MEC server for processing. However, when a task is generated, if the vehicle offloads the task to the nearest MEC server, it cannot offload the task to another MEC server. These other servers may be able to provide more computing resources to the vehicle, since the vehicle cannot make a comprehensive

Experiment setup

To evaluate the proposed MIDP scheme, it is compared with two baseline schemes, the Immediate Offload (IO) scheme and the Expect Offload (EO) scheme. In the IO scheme, the vehicle immediately offloads a generated task to the MEC server in its communication range as soon as the task is generated. In the EO scheme, the tasks are given an expectation value, and the vehicle predicts the total delay before offloading tasks to the MEC server in the current communication

Conclusion and future work

In this paper, the MIDP scheme is proposed to guide vehicles in task offloading. It models the task offloading problem involving MEC servers in vehicular networks as a complete MDP and solves the optimal policy problem of the constructed MDP with the A3C deep reinforcement learning algorithm. In the MIDP scheme, the vehicle that first generates a task performs exploratory learning, and subsequent vehicles can offload their tasks under the optimal policy based on their existing

Declaration of Competing Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "MIDP: An MDP-based Intelligent Big Data Processing Scheme for Vehicular Edge Computing".

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 62072475, No. 61772554).

References (61)

  • Y. Liu et al., Artificial intelligence aware and security-enhanced trace-back technique in mobile edge computing, Comput. Commun. (2020)
  • J. Luo et al., QoE-driven computation offloading for edge computing, J. Syst. Archit. (2019)
  • Y. Ouyang et al., A verifiable trust evaluation mechanism for ultra-reliable applications in 5G and beyond networks, Comput. Stand. Interfaces (2021)
  • M. Sakaguchi, Dynamic programming of some sequential sampling design, J. Math. Anal. Appl. (1961)
  • X. Chen, Decentralized computation offloading game for mobile cloud computing, IEEE Trans. Parallel Distrib. Syst. (2015)
  • M. Chen et al., Edge intelligence computing for mobile augmented reality with deep reinforcement learning approach, Comput. Netw. (2021)
  • N. Cheng et al., Opportunistic WiFi offloading in vehicular environment: a game-theory approach, IEEE Trans. Intell. Transp. Syst. (2016)
  • N. Cheng et al., Space/aerial-assisted computing offloading for IoT applications: a learning-based approach, IEEE J. Sel. Areas Commun. (2019)
  • J. Gui et al., Stabilizing transmission capacity in millimeter wave links by Q-learning-based scheme, Mob. Inf. Syst. (2020)
  • H. Guo et al., UAV-enhanced intelligent offloading for Internet of things at the edge, IEEE Trans. Ind. Inform. (2019)
  • H. Harb et al., Energy-efficient sensor data collection approach for industrial processing monitoring, IEEE Trans. Ind. Inform. (2018)
  • A. Hekmati et al., Optimal mobile computation offloading with hard deadline constraints, IEEE Trans. Mob. Comput. (2020)
  • K. Huang et al., An efficient intrusion detection approach for visual sensor networks based on traffic pattern learning, IEEE Trans. Syst. Man Cybern. (2017)
  • S. Huang et al., Joint mobile vehicle-UAV scheme for secure data collection in a smart city, Ann. Telecommun. (2020)
  • M. Huang et al., An AUV-assisted data gathering scheme based on clustering and matrix completion for smart ocean, IEEE Internet Things J. (2020)
  • M. Huang et al., An UAV-assisted ubiquitous trust communication system in 5G and beyond networks, IEEE J. Sel. Areas Commun. (2021)
  • S. Huang et al., BD-VTE: a novel baseline data based verifiable trust evaluation scheme for smart network systems, IEEE Trans. Netw. Sci. Eng. (2021)
  • S. Huang et al., An intelligent collaboration trust interconnections system for mobile information control in ubiquitous 5G networks, IEEE Trans. Netw. Sci. Eng. (2021)
  • S. Karlin, Stochastic models and optimal policy for selling an asset, Stud. Appl. Probab. Manag. Sci. (1962)

Shun Liu is currently pursuing the master's degree with the School of Computer Science and Engineering, Central South University, China. His research interests include the Internet of Things, edge computing, and wireless sensor networks. E-mail: [email protected]

Qiang Yang received his master's degree from the School of Software, Central South University, China, in 2018. He is currently pursuing his Ph.D. degree in the School of Computer Science and Engineering, Central South University, China. His research interests include edge computing. E-mail: [email protected]

Shaobo Zhang received the B.Sc. and M.Sc. degrees in computer science from the Hunan University of Science and Technology, Xiangtan, China, in 2003 and 2009 respectively, and the Ph.D. degree in computer science from Central South University, Changsha, China, in 2017. He is currently an associate professor at the School of Computer Science and Engineering of the Hunan University of Science and Technology, China. His research interests include privacy and security issues in social networks and cloud computing. E-mail: [email protected].

Tian Wang received his B.Sc. and M.Sc. degrees in Computer Science from Central South University in 2004 and 2007, and his Ph.D. degree from the City University of Hong Kong in 2011. Currently, he is a professor with Artificial Intelligence and Future Networks, Beijing Normal University & UIC, China. His research interests include the Internet of Things, edge computing, and mobile computing. E-mail: [email protected]

Neal N. Xiong is currently a Distinguished Professor at the National Engineering Research Center for E-Learning, Central China Normal University (CCNU), Wuhan, Hubei Province, 430079, China. He is also with the Department of Computer Science, Georgia State University, Atlanta, GA 30302, USA. He received his Ph.D. degree from the School of Information Science, Japan Advanced Institute of Science and Technology (JAIST) on March 1, 2008. His research interests include deep learning, reliable networks, software engineering, and big data analytics.

Dr. Xiong has worked at CCNU for many years and has obtained substantial research funding and many industrial projects. He has also created a company for the design and analysis of complex reliable software systems and holds over 10 patents. E-mail: [email protected].
