TY - GEN
T1 - Effects of constant optimization by nonlinear least squares minimization in symbolic regression
AU - Kommenda, Michael
AU - Kronberger, Gabriel
AU - Winkler, Stephan
AU - Affenzeller, Michael
AU - Wagner, Stefan
N1 - Copyright:
Copyright 2013 Elsevier B.V., All rights reserved.
PY - 2013
Y1 - 2013
AB - In this publication a constant optimization approach for symbolic regression is introduced that separates the task of finding the correct model structure from the task of evolving the correct numerical constants. A gradient-based nonlinear least squares optimization algorithm, the Levenberg-Marquardt (LM) algorithm, is used to adjust constant values in symbolic expression trees during their evolution. The LM algorithm depends on gradient information, consisting of the partial derivatives of the trees, which is obtained by automatic differentiation. The presented constant optimization approach is tested on several benchmark problems and compared to a standard genetic programming algorithm to demonstrate its effectiveness. Although constant optimization adds execution-time overhead, it significantly increases the achieved accuracy as well as the ability of genetic programming to learn from the provided data. As an example, the Pagie-1 problem could be solved in 37 out of 50 test runs, whereas without constant optimization it was solved in only 10 runs. Furthermore, different configurations of the constant optimization approach (number of iterations, probability of applying constant optimization) are evaluated and their impact is detailed in the results section.
KW - Automatic differentiation
KW - Constant optimization
KW - Nonlinear least squares optimization
KW - Symbolic regression
UR - http://www.scopus.com/inward/record.url?scp=84882318582&partnerID=8YFLogxK
U2 - 10.1145/2464576.2482691
DO - 10.1145/2464576.2482691
M3 - Conference contribution
SN - 9781450319645
T3 - GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference Companion
SP - 1121
EP - 1128
BT - GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference Companion
PB - ACM SIGEVO
T2 - 15th Annual Conference on Genetic and Evolutionary Computation, GECCO 2013
Y2 - 6 July 2013 through 10 July 2013
ER -