Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control

2021 
The goal of traffic signal control is to coordinate multiple traffic signals to improve the traffic efficiency of a district or a city. In this work, we propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method, which aims to learn a decentralized policy for each traffic signal conditioned only on its local observation. MetaVIM makes three novel contributions. Firstly, to make the model applicable to new, unseen target scenarios, we formulate traffic signal control as a meta-learning problem over a set of related tasks. The training scenario is divided into multiple partially observable Markov decision process (POMDP) tasks, where each task corresponds to a traffic light and the neighbours are regarded as an unobserved part of the state. Secondly, we assume that the reward, transition, and policy functions vary across tasks but share a common structure; a learned latent variable conditioned on past trajectories is introduced for each task to represent task-specific information in these functions, and is further incorporated into the policy to automatically trade off exploration and exploitation, inducing the RL agent to choose reasonable actions. In addition, to stabilize policy learning, four decoders are introduced to predict the received observations and rewards of the current agent with and without the neighbour agents' policies, and a novel intrinsic reward is designed to encourage the received observations and rewards to be invariant to the neighbour agents. Empirically, extensive experiments conducted on CityFlow demonstrate that the proposed method substantially outperforms existing methods and shows superior generalizability.
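The abstract describes an intrinsic reward built from decoder predictions made with and without the neighbour agents' policies. Below is a minimal, hypothetical sketch of how such a reward could be computed; the decoder architecture, input composition (latent variable, observation, actions), and the squared-error form of the invariance penalty are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Hypothetical decoder: maps a concatenated input (latent z, observation,
    action, and optionally neighbour actions) to a predicted observation or reward."""
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def intrinsic_reward(obs_dec: Decoder, obs_dec_nb: Decoder,
                     rew_dec: Decoder, rew_dec_nb: Decoder,
                     z: torch.Tensor, obs: torch.Tensor,
                     act: torch.Tensor, neighbour_acts: torch.Tensor) -> torch.Tensor:
    """Assumed invariance-style intrinsic reward: penalize the gap between
    predictions that condition on neighbour actions and those that do not,
    so the agent's received observation/reward is less sensitive to neighbours."""
    x = torch.cat([z, obs, act], dim=-1)                     # without neighbours
    x_nb = torch.cat([z, obs, act, neighbour_acts], dim=-1)  # with neighbours
    obs_gap = (obs_dec_nb(x_nb) - obs_dec(x)).pow(2).mean(dim=-1)
    rew_gap = (rew_dec_nb(x_nb) - rew_dec(x)).pow(2).mean(dim=-1)
    return -(obs_gap + rew_gap)  # higher reward when predictions are invariant
```

In this sketch the intrinsic reward would be added to the environment reward during training; the exact weighting and decoder inputs would follow the paper's implementation details.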