Reinforcement Learning Based Mobile Offloading for Edge Computing Against Jamming And Interference

2020 
Mobile edge computing systems improve the performance of computation-intensive applications on mobile devices and must resist jamming attacks and heavy interference. In this paper, we present a reinforcement learning based mobile offloading scheme for edge computing against jamming attacks and interference, which uses safe reinforcement learning to avoid risky offloading policies that fail to meet the computational latency requirements of the tasks. This scheme enables the mobile device to choose the edge device, the transmit power, and the offloading rate to improve its utility, which accounts for the sharing gain, the computational latency, the energy consumption, and the signal-to-interference-plus-noise ratio (SINR) of the offloading signals, without knowing the task generation model, the edge computing model, or the jamming/interference model. We also design a deep reinforcement learning based mobile offloading scheme for edge computing that uses an actor network to choose the offloading policy and a critic network to update the actor network weights to improve the computational performance. We analyze the computational complexity and provide a performance bound, consisting of the computational latency and the energy consumption, based on the Nash equilibrium of the mobile offloading game. Simulation results show that this scheme reduces both the computational latency and the energy consumption.
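The abstract's safe RL idea can be illustrated with a minimal tabular sketch: an agent picks an action (edge device, transmit power, offloading rate), receives a utility built from sharing gain, latency, energy, and SINR, and maintains a risk estimate used to mask actions likely to violate the latency requirement. All names, action sets, weights, and thresholds below are hypothetical stand-ins, not the paper's actual formulation (the paper's deep variant replaces these tables with actor and critic networks).

```python
import random

# Hypothetical discrete action space: (edge device, transmit power, offloading rate).
EDGE_DEVICES = [0, 1]              # candidate edge devices
POWER_LEVELS = [0.1, 0.5, 1.0]     # transmit power choices (W, illustrative)
RATE_LEVELS = [0.0, 0.5, 1.0]      # fraction of the task offloaded

ACTIONS = [(d, p, r) for d in EDGE_DEVICES
           for p in POWER_LEVELS for r in RATE_LEVELS]


def utility(share_gain, latency, energy, sinr, w=(0.5, 1.0, 1.0, 0.5)):
    """Weighted utility (weights are assumed): reward sharing gain and the
    SINR of the offloading signal, penalize latency and energy consumption."""
    return w[0] * share_gain - w[1] * latency - w[2] * energy + w[3] * sinr


class SafeQOffloader:
    """Model-free Q-learning with a per-(state, action) risk table that
    tracks the observed frequency of latency-requirement violations;
    high-risk actions are masked during selection (the 'safe RL' step)."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1, risk_max=0.5):
        self.alpha, self.gamma = alpha, gamma
        self.eps, self.risk_max = eps, risk_max
        self.q = {}     # (state, action) -> estimated long-term utility
        self.risk = {}  # (state, action) -> estimated violation probability

    def choose(self, state):
        # Safe exploration: only consider actions below the risk threshold.
        safe = [a for a in ACTIONS
                if self.risk.get((state, a), 0.0) < self.risk_max]
        if not safe:                      # fall back if all actions look risky
            safe = ACTIONS
        if random.random() < self.eps:    # epsilon-greedy over safe actions
            return random.choice(safe)
        return max(safe, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state, violated):
        key = (state, action)
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        td = reward + self.gamma * best_next - self.q.get(key, 0.0)
        self.q[key] = self.q.get(key, 0.0) + self.alpha * td
        # Exponential moving average of observed latency violations.
        self.risk[key] = self.risk.get(key, 0.0) + \
            self.alpha * (float(violated) - self.risk.get(key, 0.0))
```

The tabular form works only for small discrete state/action sets; the paper's deep version handles larger spaces by having the critic network estimate the value used in the temporal-difference update and the actor network output the offloading policy directly.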