Real-Time Energy Management of Microgrid System Based on Policy-Based Reinforcement Learning
Electric power grids are changing from traditional power systems to modern smart integrated power systems. The microgrid (MG) plays a vital role in this process through the integration of distributed renewable energy resources (RESs) and energy storage systems (ESSs). However, wind power and solar radiation are characterized by intermittency, randomness and fluctuation. These uncertainties cannot be ignored in microgrids with high penetration of RESs and random load demand, which challenges the effective and economic management of MGs. To achieve real-time, economically optimal scheduling of microgrids, we propose a real-time energy management strategy. Unlike traditional model-based approaches, this strategy is learning-based and does not require knowledge of the system uncertainties. The energy management problem of a microgrid is formulated as a Markov decision process (MDP) and solved using policy-based deep reinforcement learning (DRL). In the proposed RL-based approach, two different neural networks are used. One takes the real-time system state of the microgrid as input and makes decisions to obtain the optimal control policy
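As an illustration of the two-network, policy-based DRL setup described in the abstract, the sketch below shows a minimal actor-critic arrangement in PyTorch: one network maps the real-time microgrid state to a control action distribution, and a second network estimates the state value used in the policy update. This is not the authors' implementation; the state/action dimensions, reward definition (negative operating cost), and hyperparameters are placeholder assumptions.

```python
# Minimal illustrative sketch (not from the paper) of a two-network,
# policy-based (actor-critic) RL agent for microgrid energy management.
# State could contain, e.g., RES output, load demand, ESS state of charge,
# and electricity price; the action could be an ESS power setpoint.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: microgrid state -> Gaussian distribution over actions."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, action_dim)           # mean action (e.g. ESS power)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state: torch.Tensor) -> torch.distributions.Normal:
        h = self.body(state)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

class Critic(nn.Module):
    """Value network: microgrid state -> estimated expected return."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.v(state)

# One illustrative TD actor-critic update step with placeholder data.
state_dim, action_dim, gamma = 6, 1, 0.99          # assumed dimensions/discount
actor, critic = Actor(state_dim, action_dim), Critic(state_dim)
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

state = torch.randn(1, state_dim)        # placeholder real-time measurement
dist = actor(state)
action = dist.sample()                   # sampled control action
next_state = torch.randn(1, state_dim)   # placeholder next measurement
reward = torch.tensor([[1.0]])           # placeholder: negative operating cost

opt.zero_grad()
td_target = reward + gamma * critic(next_state).detach()
advantage = td_target - critic(state)
actor_loss = -(dist.log_prob(action).sum(-1, keepdim=True) * advantage.detach()).mean()
critic_loss = advantage.pow(2).mean()
(actor_loss + critic_loss).backward()
opt.step()
```

In this arrangement the critic's temporal-difference error serves as the advantage signal that scales the policy-gradient update of the actor, which matches the general structure of policy-based DRL the abstract outlines, though the specific algorithm used by the authors is not given in this record.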
Year of publication: [2022]
Authors: Liu, Ding; Zang, Chuanzhi; Zeng, Peng; Li, Wanting; Wang, Xin; Liu, Yuqi; Xu, Shuqing
Publisher: [S.l.] : SSRN
Similar items by person
- Liu, Yu-Qi, (2021)
- Macroeconomic conditions in the US and congressional voting on environmental policy : 1970 - 2008 / Tanger, Shaun M., (2011)
- Fourier methods for estimating the central subspace and the central mean subspace in regression / Zhu, Michael, (2006)
- More ...