TY - GEN
T1 - Can Synthetic Data Improve Symbolic Regression Extrapolation Performance?
AU - Ramlan, Fitria Wulandari
AU - O’Riordan, Colm
AU - Kronberger, Gabriel
AU - McDermott, James
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/8/11
Y1 - 2025/8/11
N2 - Many machine learning models perform well when making predictions within the training data range, but often struggle when required to extrapolate beyond it. Symbolic regression (SR) using genetic programming (GP) can generate flexible models but is prone to unreliable behaviour in extrapolation. This paper investigates whether adding synthetic data can help improve performance in such cases. We apply Kernel Density Estimation (KDE) to identify regions of the input space where the training data is sparse. Synthetic data is then generated in those regions using a knowledge distillation approach: a teacher model generates predictions on new input points, which are then used to train a student model. We evaluate this method across six benchmark datasets, using neural networks (NN), random forests (RF), and GP both as teacher models (to generate synthetic data) and as student models (trained on the augmented data). Results show that GP models benefit most when trained with synthetic data from NN and RF teachers. The most significant improvements are observed in extrapolation regions, while performance in interpolation regions changes only slightly. We also observe heterogeneous errors, where model performance varies across different regions of the input space. Overall, this approach offers a practical way to improve extrapolation performance.
AB - Many machine learning models perform well when making predictions within the training data range, but often struggle when required to extrapolate beyond it. Symbolic regression (SR) using genetic programming (GP) can generate flexible models but is prone to unreliable behaviour in extrapolation. This paper investigates whether adding synthetic data can help improve performance in such cases. We apply Kernel Density Estimation (KDE) to identify regions of the input space where the training data is sparse. Synthetic data is then generated in those regions using a knowledge distillation approach: a teacher model generates predictions on new input points, which are then used to train a student model. We evaluate this method across six benchmark datasets, using neural networks (NN), random forests (RF), and GP both as teacher models (to generate synthetic data) and as student models (trained on the augmented data). Results show that GP models benefit most when trained with synthetic data from NN and RF teachers. The most significant improvements are observed in extrapolation regions, while performance in interpolation regions changes only slightly. We also observe heterogeneous errors, where model performance varies across different regions of the input space. Overall, this approach offers a practical way to improve extrapolation performance.
KW - Data Augmentation
KW - Extrapolation
KW - Genetic Programming
KW - Heterogeneous Errors
KW - Symbolic Regression
KW - Synthetic Data
UR - https://www.scopus.com/pages/publications/105014587288
U2 - 10.1145/3712255.3734356
DO - 10.1145/3712255.3734356
M3 - Conference contribution
AN - SCOPUS:105014587288
T3 - GECCO 2025 Companion - Proceedings of the 2025 Genetic and Evolutionary Computation Conference Companion
SP - 2548
EP - 2555
BT - GECCO 2025 Companion - Proceedings of the 2025 Genetic and Evolutionary Computation Conference Companion
A2 - Ochoa, Gabriela
PB - Association for Computing Machinery, Inc
T2 - 2025 Genetic and Evolutionary Computation Conference Companion, GECCO 2025 Companion
Y2 - 14 July 2025 through 18 July 2025
ER -