Deep Reinforcement Learning-Based Offloading Scheduling for Vehicular Edge Computing

2020 
Vehicular edge computing (VEC) is a new computing paradigm with great potential to enhance the capability of vehicle terminals (VTs) to support resource-hungry in-vehicle applications with low latency and high energy efficiency. In this paper, we investigate an important computation offloading scheduling problem in a typical VEC scenario, where a VT traveling along an expressway seeks to schedule the tasks waiting in its queue so as to minimize the long-term cost, defined as a trade-off between task latency and energy consumption. Owing to diverse task characteristics, a dynamic wireless environment, and frequent handover events caused by vehicle movement, an optimal solution must account for both where to schedule each task (i.e., local computation or offloading) and when to schedule it (i.e., the order and time of execution). To solve this complicated stochastic optimization problem, we model it as a carefully designed Markov decision process (MDP) and resort to deep reinforcement learning (DRL) to cope with the enormous state space. Our DRL implementation builds on the state-of-the-art proximal policy optimization (PPO) algorithm. A parameter-shared network architecture combined with a convolutional neural network (CNN) is used to approximate both the policy and the value function, effectively extracting representative features. A series of adjustments to the state and reward representations is made to further improve training efficiency. Extensive simulation experiments and comprehensive comparisons with six baseline algorithms and their heuristic combinations clearly demonstrate the advantages of the proposed DRL-based offloading scheduling method.
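To make the parameter-shared architecture described above concrete, the following is a minimal sketch (not the authors' code) of a PPO-style actor-critic network in which a CNN backbone is shared between the policy head and the value head. The state shape (a tasks-by-features encoding of the queue), the layer sizes, and the number of scheduling actions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a parameter-shared actor-critic network for PPO.
# Assumption: the queued tasks are encoded as a 2-D tensor (num_tasks x num_features)
# that a small CNN can summarize; all dimensions below are hypothetical.
import torch
import torch.nn as nn


class SharedActorCritic(nn.Module):
    def __init__(self, num_tasks=10, num_features=6, num_actions=11):
        super().__init__()
        # Shared CNN backbone: extracts features from the task-queue state.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * num_tasks * num_features, 128),
            nn.ReLU(),
        )
        # Policy head: logits over scheduling actions (which task, local vs. offload).
        self.policy_head = nn.Linear(128, num_actions)
        # Value head: scalar estimate of the expected long-term (cost-based) return.
        self.value_head = nn.Linear(128, 1)

    def forward(self, state):
        # state: (batch, 1, num_tasks, num_features)
        h = self.backbone(state)
        return self.policy_head(h), self.value_head(h).squeeze(-1)


if __name__ == "__main__":
    net = SharedActorCritic()
    state = torch.randn(4, 1, 10, 6)            # batch of 4 example states
    logits, value = net(state)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                      # sampled scheduling action per state
    print(action.shape, value.shape)            # torch.Size([4]) torch.Size([4])
```

Sharing the backbone means the policy and value losses in PPO update the same feature extractor, which is the usual motivation for a parameter-shared design; the exact heads, losses, and state/reward shaping used in the paper are not reproduced here.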