Active and Reactive Power Coordinated Control of Active Distribution Networks Based on Prioritized Reinforcement Learning

2021 
With the increasing penetration of renewable energy, voltage violations and network losses hinder the secure and efficient operation of active distribution networks (ADNs). Properly utilizing the reactive power capability of inverter-based energy resources is therefore essential. However, in ADNs, active and reactive power (P&Q) are coupled both in inverter capacity and in the power flow. Moreover, accurate models of complex ADNs are hard to maintain, since operating budgets are limited and the environment changes constantly. Hence, we develop an active and reactive power coordinated control method for ADNs based on reinforcement learning (RL), which learns the control policy from interactions between the controller and the ADN. Because the RL-based method is designed for online application, it uses prioritized experience replay to improve sample efficiency and optimality. Compared with traditional P&Q coordinated control methods, our method does not require an accurate ADN model, yet reaches a near-optimal state with high sample efficiency. Numerical simulations not only demonstrate its superiority in eliminating voltage violations, reducing network losses, and maximizing system economy, but also show its improvement over existing RL-based methods.
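The abstract highlights prioritized experience replay as the mechanism for improving sample efficiency. Below is a minimal, illustrative sketch of a proportional prioritized replay buffer in the spirit of Schaul et al.; the paper's actual buffer design, priority definition, and RL agent are not specified in the abstract, so all class and parameter names here are assumptions for illustration only.

```python
# Minimal sketch of a proportional prioritized experience replay buffer.
# Illustrative only; not the authors' implementation.
import numpy as np


class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity      # maximum number of stored transitions
        self.alpha = alpha            # how strongly priorities skew sampling
        self.beta = beta              # importance-sampling correction exponent
        self.eps = eps                # keeps every priority strictly positive
        self.data = []                # stored (s, a, r, s_next, done) tuples
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                  # next write position (circular)

    def add(self, transition):
        # New samples get the current maximum priority so they are replayed at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample transitions with probability proportional to priority^alpha.
        p = self.priorities[: len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Larger temporal-difference errors -> higher replay priority.
        self.priorities[idx] = np.abs(td_errors) + self.eps


if __name__ == "__main__":
    # Toy usage with random 4-dimensional states and dummy TD errors.
    buf = PrioritizedReplayBuffer(capacity=1000)
    for _ in range(100):
        buf.add((np.random.rand(4), 0, 0.0, np.random.rand(4), False))
    batch, idx, weights = buf.sample(8)
    buf.update_priorities(idx, np.random.rand(8))
```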