DRL-based Low-Latency Content Delivery for 6G Massive Vehicular IoT

2021 
Vehicle-to-everything communication is an indispensable component of 6G networks and will help enable future transportation systems. However, the massive number of vehicles and unstable vehicle-to-vehicle (V2V) links may become a bottleneck for low-latency content delivery, such as safety-critical emergency messages and multimedia. Instead of resolving the problem in a centralized way, we propose a massive vehicular Internet of Things (IoT) system and investigate an approach that enables each vehicle to select both its transmission mode, from among vehicle-to-network (V2N), vehicle-to-infrastructure (V2I), and V2V sidelinks, and its wireless resources. Specifically, a multi-agent deep reinforcement learning framework is formulated by combining the multi-agent reinforcement learning algorithm WoLF-PHC with techniques from deep Q-networks (DQN), so that the framework can capture the effects of interactions between learning agents and the states of a complex environment. The framework is designed to maximize the throughput of the vehicles while satisfying the latency and reliability constraints of the vehicular communication links, but it can easily be extended to other objectives. The simulation results demonstrate that the proposed approach outperforms the baseline approaches in terms of total traffic capacity and the satisfaction rate of the communicating vehicles.
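To make the per-vehicle decision structure concrete, the following is a minimal sketch of a WoLF-PHC agent that jointly picks a transmission mode (V2N, V2I, or V2V sidelink) and a resource block. The numbers of resource blocks and discretised states, the state encoding, and the reward shape are illustrative assumptions, not values from the paper; the paper additionally combines WoLF-PHC with DQN techniques such as replay memory and neural Q-function approximation, whereas this sketch keeps a tabular Q-function for brevity.

```python
# Minimal sketch of a per-vehicle WoLF-PHC agent for joint transmission-mode
# and resource-block selection. N_RBS, N_STATES, and the state/reward encoding
# are illustrative assumptions; the paper's framework replaces the tabular
# Q-function below with a DQN-style learned approximation.
import numpy as np

N_MODES = 3          # V2N, V2I, V2V sidelink
N_RBS = 4            # assumed number of selectable resource blocks
N_ACTIONS = N_MODES * N_RBS
N_STATES = 16        # assumed discretised local-observation space

class WolfPhcAgent:
    def __init__(self, alpha=0.1, gamma=0.9, delta_win=0.01, delta_lose=0.04):
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose
        self.q = np.zeros((N_STATES, N_ACTIONS))
        self.pi = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)      # current policy
        self.avg_pi = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)  # average policy
        self.visits = np.zeros(N_STATES)

    def act(self, state):
        """Sample a joint (mode, resource-block) action from the mixed policy."""
        a = np.random.choice(N_ACTIONS, p=self.pi[state])
        return a // N_RBS, a % N_RBS   # decode into (mode index, RB index)

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup on the joint action.
        td_target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (td_target - self.q[state, action])

        # Track the long-run average policy for the win/lose test.
        self.visits[state] += 1
        self.avg_pi[state] += (self.pi[state] - self.avg_pi[state]) / self.visits[state]

        # "Win or Learn Fast": small step when winning, large step when losing.
        winning = self.pi[state] @ self.q[state] > self.avg_pi[state] @ self.q[state]
        delta = self.delta_win if winning else self.delta_lose

        # Policy hill-climbing toward the greedy action, then re-normalise.
        greedy = self.q[state].argmax()
        self.pi[state] -= delta / (N_ACTIONS - 1)
        self.pi[state, greedy] += delta + delta / (N_ACTIONS - 1)
        self.pi[state] = np.clip(self.pi[state], 1e-6, None)
        self.pi[state] /= self.pi[state].sum()
```

In a multi-agent setting, each vehicle would run one such agent, compute its reward locally (for example, achieved throughput minus a penalty when latency or reliability constraints are violated), and observe the next state shaped by the other vehicles' choices; the variable learning rate is what lets the policies remain stable as the agents adapt to one another.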