An Actor-Critic Deep Reinforcement Learning Based Computation Offloading for Three-Tier Mobile Computing Networks

2019 
In this paper, we consider a three-tier mobile computing network architecture consisting of a user equipment (UE), edge computing servers, and a cloud server, where computing tasks can be executed locally, on an edge computing server, or on the cloud server. To achieve lower average task latency and energy consumption, we minimize the weighted sum of the average task delay and energy consumption by optimizing the tasks' offloading decisions. Since the number and attributes of tasks, as well as the environment states, are stochastic, it is difficult to obtain an effective policy in such a dynamic networked system. Reinforcement learning can optimize long-term reward by interacting with a dynamic environment, so we propose an optimization framework based on deep reinforcement learning (DRL) to solve the computation offloading problem. Specifically, because DRL methods can be unstable and difficult to converge, an actor-critic DRL framework is employed to solve the problem. Simulation results demonstrate that the proposed method improves performance and efficiency compared with other baselines.
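The abstract does not give the exact formulation, but the objective it describes can be sketched as follows (the notation here is our assumption): with offloading decision $a_i \in \{\text{local}, \text{edge}, \text{cloud}\}$ for each task $i$, minimize the weighted sum of average delay and average energy consumption,

\[
\min_{\{a_i\}} \; \frac{1}{N} \sum_{i=1}^{N} \Big( \omega_T \, T_i(a_i) + \omega_E \, E_i(a_i) \Big),
\]

where $T_i(a_i)$ and $E_i(a_i)$ denote the delay and energy consumption of task $i$ under decision $a_i$, and the weights $\omega_T, \omega_E \ge 0$ trade latency off against energy.

Below is a minimal sketch of how an actor-critic agent could learn such offloading decisions, assuming PyTorch and a toy three-tier environment; the environment class (OffloadEnv), its step() signature, and the cost model are hypothetical stand-ins for illustration, not the paper's implementation.

```python
import random
import torch
import torch.nn as nn
import torch.optim as optim

class OffloadEnv:
    """Toy stand-in for the three-tier environment (hypothetical, illustration only).
    step() returns (next_state, delay, energy, done) for action 0=local, 1=edge, 2=cloud."""
    def __init__(self, state_dim=4, horizon=50):
        self.state_dim, self.horizon, self.t = state_dim, horizon, 0

    def reset(self):
        self.t = 0
        return torch.rand(self.state_dim)  # random task/channel features

    def step(self, action):
        # Toy cost model: each tier trades delay against energy differently.
        delay = [1.0, 0.5, 0.3][action] * random.uniform(0.5, 1.5)
        energy = [0.3, 0.5, 1.0][action] * random.uniform(0.5, 1.5)
        self.t += 1
        return torch.rand(self.state_dim), delay, energy, self.t >= self.horizon

class ActorCritic(nn.Module):
    """Shared trunk with a policy head over the three tiers and a state-value head."""
    def __init__(self, state_dim, n_actions=3, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)  # logits over offloading decisions
        self.critic = nn.Linear(hidden, 1)         # state-value estimate V(s)

    def forward(self, state):
        h = self.trunk(state)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

def train_step(model, opt, env, state, w_delay=0.5, w_energy=0.5, gamma=0.99):
    """One-step actor-critic update; reward is the negative weighted cost,
    so maximizing return minimizes the delay/energy objective."""
    dist, value = model(state)
    action = dist.sample()
    next_state, delay, energy, done = env.step(action.item())
    reward = -(w_delay * delay + w_energy * energy)
    with torch.no_grad():
        _, next_value = model(next_state)
        target = reward + gamma * (0.0 if done else next_value.item())
    advantage = target - value.squeeze()           # TD error as advantage estimate
    actor_loss = -dist.log_prob(action) * advantage.detach()
    critic_loss = advantage.pow(2)
    opt.zero_grad()
    (actor_loss + 0.5 * critic_loss).backward()
    opt.step()
    return next_state, done

env, model = OffloadEnv(), ActorCritic(state_dim=4)
opt = optim.Adam(model.parameters(), lr=1e-3)
state = env.reset()
for _ in range(2000):
    state, done = train_step(model, opt, env, state)
    if done:
        state = env.reset()
```

The critic's TD error serves as the advantage estimate, which reduces the variance of the policy gradient; this variance reduction is the usual reason actor-critic methods converge more stably than plain policy-gradient or value-based DRL, matching the motivation stated in the abstract.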