
YAHPO Gym - An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization


Abstract

When developing and analyzing new hyperparameter optimization (HPO) methods, it is vital to empirically evaluate and compare them on well-curated benchmark suites. In this work, we list desirable properties and requirements for such benchmarks and propose a new set of challenging and relevant multi-fidelity HPO benchmark problems motivated by these requirements. For this, we revisit the concept of surrogate-based benchmarks and empirically compare them to the more widely used tabular benchmarks, showing that the latter may induce bias in performance estimation and ranking of HPO methods. We present a new surrogate-based benchmark suite for multi-fidelity HPO methods consisting of 9 benchmark collections that constitute over 700 multi-fidelity HPO problems in total. All our benchmarks also allow querying multiple optimization targets, enabling the benchmarking of multi-objective HPO. We examine and compare our benchmark suite with respect to the defined requirements and show that our benchmarks provide viable additions to existing suites.
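The surrogate idea the abstract contrasts with tabular benchmarks can be sketched in a few lines: instead of looking up objective values in a finite table of evaluated configurations, a regression model is fitted to those evaluations and can then be queried at arbitrary configurations and fidelities. The following is a minimal, self-contained Python illustration of that general idea, not the paper's YAHPO Gym implementation or its API; the synthetic data, the objective() helper, and the random-forest surrogate are all hypothetical stand-ins.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical "tabular" benchmark data: a finite set of evaluated
# configurations (learning rate, with epochs as the fidelity) and two
# targets (validation error, runtime) -- stand-ins for real HPO logs.
lr = rng.uniform(1e-4, 1e-1, size=500)
epochs = rng.integers(1, 52, size=500)
X = np.column_stack([np.log10(lr), epochs])
val_error = (0.4 * np.exp(-epochs / 20)
             + 0.1 * (np.log10(lr) + 2.5) ** 2
             + rng.normal(0, 0.01, 500))
runtime = epochs * (1.0 + rng.normal(0, 0.05, 500))
Y = np.column_stack([val_error, runtime])

# Surrogate: one model mapping (configuration, fidelity) -> all targets,
# so the benchmark can be queried off the original grid.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)

def objective(lr, epochs):
    """Query the surrogate benchmark at an arbitrary configuration/fidelity."""
    pred = surrogate.predict([[np.log10(lr), epochs]])[0]
    return {"val_error": pred[0], "runtime": pred[1]}

# Multi-fidelity, multi-objective queries at points absent from the table.
print(objective(lr=3e-3, epochs=10))
print(objective(lr=3e-3, epochs=50))

Because the surrogate interpolates between table entries, an optimizer can propose any configuration in the search space rather than being restricted to a pre-evaluated grid; this restriction is related to the bias in performance estimation and ranking that the abstract attributes to tabular benchmarks.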



AutoML 2022

International Conference on Automated Machine Learning. Baltimore, MD, USA, Jul 25–27, 2022.

Authors

F. Pfisterer, L. Schneider, J. Moosbauer, M. Binder, B. Bischl

Links

URL | GitHub

Research Area

A1 | Statistical Foundations & Explainability

BibTeX Key: PSM+22
