Gradients of Acquisition Functions for Bi-objective Bayesian optimization

Kaifeng Yang, Sixuan Liu, Michael Affenzeller, Guozhi Dong

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

Abstract

Multi-objective Bayesian optimization (MOBO) globally optimizes expensive black-box functions by iteratively maximizing an acquisition function to select promising candidate solutions. Commonly used acquisition functions in MOBO include the probability of improvement (POI), the expected hypervolume improvement (EHVI), and the truncated expected hypervolume improvement (TEHVI). Building on POI, this paper proposes the truncated probability of improvement (TPoI), which incorporates prior knowledge of the objective values via the truncated normal distribution. Additionally, the paper derives explicit formulas for the gradients of POI, TPoI, and TEHVI.
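The abstract does not spell out the derivations, but the bi-objective POI has a well-known closed form when the two GP posteriors are treated as independent Gaussians: sorting the Pareto front by the first objective tiles the non-dominated region into axis-aligned strips, and POI becomes a sum of products of normal CDFs whose gradient with respect to the predictive means follows by the chain rule. The sketch below illustrates that textbook decomposition (assuming minimization and independent posteriors — it is not the paper's exact derivation, and `poi_biobjective` is a name chosen here for illustration):

```python
import numpy as np
from scipy.stats import norm

def poi_biobjective(mu, sigma, front):
    """Bi-objective probability of improvement (minimization).

    mu, sigma : length-2 predictive mean / std of the candidate.
    front     : (n, 2) array of mutually non-dominated points.

    The non-dominated region is tiled into vertical strips: with the
    front sorted by f1 ascending (hence f2 descending), a point with
    f1 in (c_i, c_{i+1}) is an improvement iff f2 < d_i.
    Returns POI and its analytic gradient w.r.t. (mu_1, mu_2).
    """
    P = front[np.argsort(front[:, 0])]
    c = np.concatenate(([-np.inf], P[:, 0], [np.inf]))  # strip edges in f1
    d = np.concatenate(([np.inf], P[:, 1]))             # f2 bound per strip

    poi, grad = 0.0, np.zeros(2)
    for i in range(len(d)):
        zl = (c[i] - mu[0]) / sigma[0]
        zu = (c[i + 1] - mu[0]) / sigma[0]
        z2 = (d[i] - mu[1]) / sigma[1]
        w1 = norm.cdf(zu) - norm.cdf(zl)  # P(f1 falls in strip i)
        w2 = norm.cdf(z2)                 # P(f2 below the strip's bound)
        poi += w1 * w2
        # d/dmu Phi((b - mu)/s) = -phi((b - mu)/s) / s
        grad[0] += -(norm.pdf(zu) - norm.pdf(zl)) / sigma[0] * w2
        grad[1] += w1 * (-norm.pdf(z2) / sigma[1])
    return poi, grad

# Example: a three-point front and a candidate near its "knee".
front = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
mu, sigma = np.array([1.5, 1.5]), np.array([0.5, 0.5])
p, g = poi_biobjective(mu, sigma, front)
```

Having the gradient in closed form is what makes the acquisition function amenable to gradient-based inner optimization; a finite-difference check against the analytic gradient is a quick way to validate such derivations.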

Original language: English
Title of host publication: ICNC-FSKD 2023 - 2023 19th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery
Editors: Liang Zhao, Guanglu Sun, Kenli Li, Zheng Xiao, Lipo Wang
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350304398
DOIs
Publication status: Published - 2023
Event: 19th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, ICNC-FSKD 2023 - Harbin, China
Duration: 29 Jul 2023 - 31 Jul 2023

Publication series

Name: ICNC-FSKD 2023 - 2023 19th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery

Conference

Conference: 19th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, ICNC-FSKD 2023
Country/Territory: China
City: Harbin
Period: 29.07.2023 - 31.07.2023

Keywords

  • Acquisition Function
  • Gradient
  • Multi-objective Bayesian optimization
  • Probability of Improvement
  • Truncated Expected Hypervolume Improvement
