TY - GEN
T1 - Genetic Programming Enabled Evolution of Control Policies for Dynamic Stochastic Optimal Power Flow
AU - Hutterer, Stephan
AU - Vonolfen, Stefan
AU - Affenzeller, Michael
PY - 2013
Y1 - 2013
N2 - The optimal power flow (OPF) is one of the central optimization problems in power grid engineering, forming an essential tool for numerous control as well as planning issues. Methods for solving the OPF have been studied extensively, but they mainly treat steady-state situations and ignore the uncertainties of system variables as well as their volatile behavior. Since both the economic and the technical importance of accurate control is high, especially for power flow control in dynamic and uncertain power systems, methods are needed that provide (near-) optimal actions quickly, eliminating issues concerning convergence speed or robustness of the optimization. This paper presents an approximate policy-based control approach in which optimal actions are derived from policies that are learned offline but later provide quick and accurate control actions in volatile situations. These policies are evolved using genetic programming, where multiple, interdependent policies are learned simultaneously using simulation-based optimization. The result is an approach for learning fast and robust power flow control policies suitable for highly dynamic power systems such as smart electric grids.
KW - Dynamic stochastic optimal power flow
KW - Policy learning
KW - Simulation optimization
UR - http://www.scopus.com/inward/record.url?scp=84882337878&partnerID=8YFLogxK
U2 - 10.1145/2464576.2482732
DO - 10.1145/2464576.2482732
M3 - Conference contribution
SN - 9781450319645
T3 - GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference Companion
SP - 1529
EP - 1536
BT - GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference Companion
PB - ACM SIGEVO
T2 - Genetic and Evolutionary Computation Conference
Y2 - 6 July 2013 through 10 July 2013
ER -