Reinforcement Learning Based Microgrid Energy Trading With A Reduced Power Plant Schedule

2019 
With dynamic renewable energy generation and power demand, microgrids (MGs) exchange energy with each other to reduce their dependence on power plants. In this article, we present a reinforcement learning (RL)-based MG energy trading scheme that chooses the electric energy trading policy according to the predicted future renewable energy generation, the estimated future power demand, and the MG battery level. The scheme uses a deep RL-based energy trading algorithm to address the supply–demand mismatch problem in a smart grid with a large number of MGs, without relying on the renewable energy generation and power demand models of the other MGs. A performance bound on the MG utility and on the dependence on the power plant is provided. Simulation results for a smart grid with three MGs, using wind speed data from the Hong Kong Observatory and electricity prices from ISO New England, show that this scheme significantly reduces the average power plant schedule and thus increases the MG utility compared with a benchmark scheme.
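The core idea — an MG agent selecting a trading action from its observed renewable generation, power demand, and battery state so as to avoid scheduling the power plant — can be sketched with a minimal tabular Q-learning agent. This is an illustrative toy, not the authors' deep-RL formulation: the state discretization, action set, and reward terms below are all assumptions made for the sketch.

```python
import random

LEVELS = 3            # hypothetical discretized levels for generation, demand, battery
ACTIONS = [-1, 0, 1]  # sell one unit to / hold / buy one unit from a neighboring MG
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: battery evolves with generation, demand, and the trade."""
    renewable, demand, battery = state
    battery = min(LEVELS - 1, max(0, battery + renewable - demand + action))
    if battery == 0 and demand > renewable:
        reward = -2.0           # shortage: must schedule the power plant (penalized)
    elif action == -1:
        reward = 1.0            # selling surplus energy earns trading revenue
    else:
        reward = 0.0
    # Generation and demand fluctuate randomly; only the battery level persists.
    next_state = (random.randrange(LEVELS), random.randrange(LEVELS), battery)
    return next_state, reward

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

def train(episodes=2000, horizon=24):
    for _ in range(episodes):
        s = (random.randrange(LEVELS), random.randrange(LEVELS), LEVELS // 2)
        for _ in range(horizon):
            if random.random() < EPS:
                a = random.choice(ACTIONS)                  # explore
            else:
                a = max(ACTIONS, key=lambda x: q(s, x))     # exploit
            s2, r = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * max(q(s2, x) for x in ACTIONS) - q(s, a))
            s = s2

train()
# Greedy action in a surplus state: high generation, low demand, full battery.
policy = max(ACTIONS, key=lambda a: q((2, 0, 2), a))
```

The paper's scheme replaces the Q-table with a deep network and incorporates predictions of future generation and demand into the state; the tabular version above only conveys the state–action–reward structure of the trading decision.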
References: 30 · Citations: 34