Collaborative Computation Offloading and Resource Allocation in Multi-UAV Assisted IoT Networks: A Deep Reinforcement Learning Approach

2021 
In fifth-generation (5G) wireless networks, Edge Internet of Things (EIoT) devices are envisioned to generate huge amounts of data. Because of the limited computation capacity and battery life of these devices, not all tasks can be processed locally. Mobile Edge Computing (MEC) is a promising solution that enables offloading tasks to nearby MEC servers to improve quality of service (QoS). Moreover, during emergencies in areas where the network has failed, Unmanned Aerial Vehicles (UAVs) can be deployed to restore connectivity by acting as aerial base stations and computational nodes for the edge network. In this paper, we consider a central network controller (CNC) that trains on observations and broadcasts the trained data to a multi-UAV cluster network. Each UAV cluster head (UCH) acts as an agent and autonomously allocates resources to EIoT devices in a decentralized fashion. We propose a model-free deep reinforcement learning (DRL) based collaborative computation offloading and resource allocation (CCORA-DRL) scheme for an air-to-ground (A2G) network in emergency situations, which can handle a continuous action space. Each agent independently learns efficient computation offloading policies and monitors the statuses of the UAVs through Jain's fairness index. The objective is to minimize task execution delay and energy consumption, and to obtain an efficient solution by adaptively learning from the dynamic A2G network. Simulation results reveal that our scheme, based on the deep deterministic policy gradient (DDPG), effectively learns the optimal policy and outperforms A3C, DQN, and greedy-based offloading with local computation in stochastic dynamic environments.
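The abstract states that each agent checks UAV statuses via Jain's fairness index. As context for readers unfamiliar with it, the following is a minimal Python sketch of the standard Jain's index formula, J = (Σx)² / (n · Σx²); the function name and the use of resource allocations as inputs are illustrative assumptions, not the paper's implementation.

```python
def jains_fairness(allocations):
    """Jain's fairness index over a list of non-negative allocations.

    Returns a value in (0, 1]; 1.0 indicates a perfectly even allocation,
    while 1/n indicates all resources concentrated on a single node.
    """
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(x * x for x in allocations)
    # Guard against an all-zero allocation vector.
    return (total * total) / (n * sum_sq) if sum_sq > 0 else 0.0
```

For example, four UAVs with equal loads yield an index of 1.0, whereas a single loaded UAV among four yields 0.25, signaling an unbalanced cluster.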