Adaptive Coordination Offsets for Signalized Arterial Intersections using Deep Reinforcement Learning

2020 
One of the most critical components of an urban transportation system is the coordination of intersections in arterial networks. With the advent of data-driven approaches to traffic control, deep reinforcement learning (RL) has gained significant traction in traffic control research. Most proposed deep RL solutions directly modify either phase order or phase timings; such approaches can lead to unfair situations, such as bypassing low-volume links for several cycles, in the name of optimizing traffic flow. To address these issues, we propose a deep RL framework that dynamically adjusts signal offsets based on traffic states while preserving the planned phase timings and order derived from model-based methods. This framework improves arterial coordination while preserving fairness for competing streams of traffic at an intersection. Using a validated and calibrated traffic model, we trained the policy of a deep RL agent to reduce travel delays in the network. We evaluated the resulting policy against the phase offsets obtained by a state-of-the-practice baseline, SYNCHRO. The learned policy dynamically readjusts phase offsets in response to changes in traffic demand. Simulation results show that the proposed deep RL agent outperformed SYNCHRO on average, reducing delay by 13.21% in the AM scenario, 2.42% in the noon scenario, and 6.2% in the PM scenario. Finally, we show the robustness of the agent to extreme traffic conditions, such as demand surges and localized traffic incidents.
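The core idea, adjusting only offsets while keeping the planned phase order and timings fixed, can be sketched as a toy RL-style environment. This is a hedged illustration, not the paper's implementation: the class name `ArterialEnv`, the delay proxy, and the greedy search standing in for the trained deep RL policy are all illustrative assumptions.

```python
# Illustrative sketch (not from the paper): an agent nudges only the
# per-intersection signal offsets; the phase plan itself never changes.
import random

CYCLE = 90             # hypothetical fixed cycle length (s) from a model-based plan
PHASE_PLAN = [40, 50]  # hypothetical fixed green splits -- preserved, never modified

class ArterialEnv:
    """Toy arterial: delay shrinks when offset differences match travel time."""

    def __init__(self, n_intersections=3, travel_time=30):
        self.n = n_intersections
        self.travel_time = travel_time          # link travel time between signals (s)
        self.offsets = [0] * n_intersections    # the only quantity the agent controls

    def step(self, offset_deltas):
        # Apply the agent's action: shift each offset, wrapping at the cycle length.
        for i, d in enumerate(offset_deltas):
            self.offsets[i] = (self.offsets[i] + d) % CYCLE
        # Delay proxy: circular mismatch between consecutive offsets and travel time
        # (zero when the green wave lines up with platoon arrival).
        delay = 0
        for i in range(self.n - 1):
            m = (self.offsets[i + 1] - self.offsets[i] - self.travel_time) % CYCLE
            delay += min(m, CYCLE - m)
        return self.offsets, -delay             # reward = negative delay

env = ArterialEnv()
# Random search stands in for the trained deep RL policy from the paper.
best = float("inf")
for _ in range(500):
    deltas = [random.choice([-5, 0, 5]) for _ in range(env.n)]
    _, reward = env.step(deltas)
    best = min(best, -reward)
print("best delay proxy found:", best)
```

A perfect progression, where each offset lags the previous one by exactly the link travel time, yields zero delay under this proxy, mirroring the classic green-wave objective that the learned policy re-tunes as demand shifts.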