SRBench++: Principled Benchmarking of Symbolic Regression With Domain-Expert Interpretation

  • F. O. de Franca
  • M. Virgolin
  • M. Kommenda
  • M. S. Majumder
  • M. Cranmer
  • G. Espada
  • L. Ingelse
  • A. Fonseca
  • M. Landajuela
  • B. Petersen
  • R. Glatt
  • N. Mundhenk
  • C. S. Lee
  • J. D. Hochhalter
  • D. L. Randall
  • P. Kamienny
  • H. Zhang
  • G. Dick
  • A. Simon
  • B. Burlacu
  • Jaan Kasak
  • Meera Machado
  • Casper Wilstrup
  • W. G. La Cava

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Symbolic regression (SR) searches for analytic expressions that accurately describe studied phenomena. The main promise of this approach is that it may return an interpretable model that can be insightful to users, while maintaining high accuracy. The current standard for benchmarking these algorithms is SRBench, which evaluates methods on hundreds of datasets that are a mix of real-world and simulated processes spanning multiple domains. At present, the ability of SRBench to evaluate interpretability is limited to measuring the size of expressions on real-world data, and the exactness of model forms on synthetic data. In practice, model size is only one of many factors used by subject experts to determine how interpretable a model truly is. Furthermore, SRBench does not characterize algorithm performance on specific, challenging subtasks of regression, such as feature selection and evasion of local minima. In this work, we propose and evaluate an approach to benchmarking SR algorithms that addresses these limitations of SRBench by 1) incorporating expert evaluations of interpretability on a domain-specific task, and 2) evaluating algorithms over distinct properties of data science tasks. We evaluate 12 modern SR algorithms on these benchmarks, present an in-depth analysis of the results, discuss current challenges of SR algorithms, and highlight possible improvements to the benchmark itself.
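To illustrate the idea the abstract describes, the following is a minimal toy sketch (not the SRBench or paper code): symbolic regression searches a space of candidate analytic expressions for one that fits the data, and expression size serves as a crude interpretability proxy. All names and the candidate pool here are hypothetical, and a real SR system would search the expression space with genetic programming or similar methods rather than a fixed list.

```python
# Toy illustration of symbolic regression (hypothetical, not the SRBench code):
# pick, from a small pool of candidate expressions, the one that best fits the
# data, breaking ties in favor of smaller (more interpretable) expressions.

# Data generated from the hidden ground truth y = 2*x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

# Candidates as (expression string, function, size); "size" counts nodes in
# the expression tree, the interpretability proxy mentioned in the abstract.
candidates = [
    ("x",       lambda x: x,          1),
    ("x + 1",   lambda x: x + 1,      3),
    ("2*x",     lambda x: 2 * x,      3),
    ("2*x + 1", lambda x: 2 * x + 1,  5),
    ("x*x + 1", lambda x: x * x + 1,  5),
]

def mse(f):
    """Mean squared error of candidate f on the data."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Rank by accuracy first, then by size, mirroring the accuracy/simplicity
# trade-off that SR benchmarks measure.
best = min(candidates, key=lambda c: (mse(c[1]), c[2]))
print(best[0])  # the exact ground-truth form is recovered
```

The point of the abstract is that size alone (here, the node count) is a weak stand-in for interpretability, which motivates the expert evaluations the paper adds.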
Original language: English
Article number: 4
Pages (from-to): 1127-1137
Number of pages: 11
Journal: IEEE Transactions on Evolutionary Computation
Volume: 29
Issue number: 4
DOIs
Publication status: Published - 2025

Keywords

  • Accuracy
  • Benchmark testing
  • Competition
  • Current measurement
  • Evolutionary computation
  • Interpretable Machine Learning
  • Machine learning algorithms
  • Prediction algorithms
  • Symbolic Regression
  • Task analysis
