ax.benchmark¶
Benchmark Problems¶
-
class ax.benchmark.benchmark_problem.BenchmarkProblem[source]¶ Bases: tuple
Contains features that describe a benchmarking problem: its name, its global optimum (maximum for maximization problems, minimum for minimization), its optimization configuration, and its search space.
- Parameters
name – name of this problem
fbest – global optimum
optimization_config – optimization configuration
search_space – search space on which this problem is defined
-
property fbest¶ Alias for field number 1
-
property name¶ Alias for field number 0
-
property optimization_config¶ Alias for field number 2
-
property search_space¶ Alias for field number 3
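Since BenchmarkProblem is a plain tuple subclass (a namedtuple), the field aliases above fix the positional layout: name is field 0, fbest field 1, and so on. A minimal sketch using collections.namedtuple to mirror that documented field order — an illustrative stand-in, not the actual Ax class, with placeholder values where real OptimizationConfig and SearchSpace objects would go:

```python
from collections import namedtuple

# Stand-in mirroring the documented field order:
# name=0, fbest=1, optimization_config=2, search_space=3.
BenchmarkProblem = namedtuple(
    "BenchmarkProblem",
    ["name", "fbest", "optimization_config", "search_space"],
)

problem = BenchmarkProblem(
    name="branin",
    fbest=0.397887,  # known global minimum of the Branin function
    optimization_config=None,  # placeholder for a real OptimizationConfig
    search_space=None,         # placeholder for a real SearchSpace
)

# Attribute access and tuple indexing are interchangeable,
# which is exactly what the "Alias for field number N" entries mean.
assert problem.name == problem[0]
assert problem.fbest == problem[1]
```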
Benchmark Runner¶
-
class ax.benchmark.benchmark_runner.BOBenchmarkRunner[source]¶
-
class ax.benchmark.benchmark_runner.BanditBenchmarkRunner[source]¶
-
class ax.benchmark.benchmark_runner.BenchmarkResult(objective_at_true_best, generator_changes, optimum, fit_times, gen_times)[source]¶ Bases: tuple
-
property fit_times¶ Alias for field number 3
-
property gen_times¶ Alias for field number 4
-
property generator_changes¶ Alias for field number 1
-
property objective_at_true_best¶ Alias for field number 0
-
property optimum¶ Alias for field number 2
-
class ax.benchmark.benchmark_runner.BenchmarkRunner[source]¶ Bases: object
Runner that keeps track of benchmark runs and failures encountered during benchmarking.
-
aggregate_results()[source]¶ Pull results from each of the runs (BenchmarkSetups, aka Experiments) and aggregate them into a BenchmarkResult for each problem.
-
property errors¶ Messages from errors encountered while running benchmark tests.
-
abstract run_benchmark_run(setup, generation_strategy)[source]¶ Run a single full benchmark run of the given problem and method combination.
-
run_benchmark_test(setup, generation_strategy, num_runs=20, raise_all_errors=False)[source]¶ Run a full benchmark test for the given method and problem combination. A benchmark test consists of repeated full benchmark runs.
- Parameters
setup (BenchmarkSetup) – setup, runs on which to execute; includes a benchmarking problem, total number of iterations, etc.
generation_strategy (GenerationStrategy) – generation strategy that defines which generation methods should be used in this benchmarking test
num_runs (int) – how many benchmark runs of the given problem and method combination to run with the given setup for one benchmark test
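The behavior of run_benchmark_test can be pictured as a loop over num_runs independent runs in which failures are either raised immediately (raise_all_errors=True) or recorded and skipped, matching the errors property above. A hypothetical pure-Python sketch of that control flow — not the actual Ax implementation, and run_one is an invented stand-in for a single benchmark run:

```python
def run_benchmark_test(run_one, num_runs=20, raise_all_errors=False):
    """Repeat a single benchmark run num_runs times.

    run_one: zero-argument callable standing in for one full benchmark
    run of a (problem, method) combination. Hypothetical helper, not
    part of the Ax API.
    """
    results, errors = [], []
    for i in range(num_runs):
        try:
            results.append(run_one())
        except Exception as err:
            if raise_all_errors:
                raise  # debugging mode: interrupt benchmarking right away
            errors.append(f"run {i}: {err}")  # otherwise log and continue
    return results, errors

# A flaky "run" that fails on every third call, to exercise both paths.
calls = [0]
def flaky_run():
    calls[0] += 1
    if calls[0] % 3 == 0:
        raise RuntimeError("simulated failure")
    return calls[0]

results, errors = run_benchmark_test(flaky_run, num_runs=6)
# 6 attempts: 4 succeed, 2 simulated failures are recorded, not raised.
```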
-
class ax.benchmark.benchmark_runner.BenchmarkSetup(problem, total_iterations=20, batch_size=1)[source]¶ Bases: ax.core.experiment.Experiment
An extension of Experiment, specific to benchmarking. Contains additional data, such as the benchmarking problem, iterations to run per benchmarking method and problem combination, etc.
- Parameters
problem (BenchmarkProblem) – description of the benchmarking problem for this setup
total_iterations (int) – how many optimization iterations to run
batch_size (int) – if this benchmark requires batch trials, batch size for those. Defaults to 1
-
clone_reset()[source]¶ Create a clean copy of this benchmarking setup, with no run data attached to it.
ax.benchmark.benchmark_runner.true_best_objective(optimization_config, true_values)[source]¶ Compute the true best objective value found by each iteration.
- Parameters
optimization_config (OptimizationConfig) – Optimization config
true_values (Dict[str, ndarray]) – Dictionary from metric name to array of value at each iteration.
Returns: Array of cumulative best feasible value.
- Return type
ndarray
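For a single minimization metric with no outcome constraints, the cumulative-best computation reduces to a running minimum over the observed values. A minimal sketch of that simplified setting (the real true_best_objective also handles feasibility and the optimization direction from the OptimizationConfig):

```python
import numpy as np

def cumulative_best(true_values, minimize=True):
    """Running best objective value after each iteration.

    true_values: 1-D array of objective values, one per iteration.
    Simplified stand-in for true_best_objective: single metric,
    no outcome constraints.
    """
    values = np.asarray(true_values, dtype=float)
    # np.minimum.accumulate gives, at index i, the best value seen in
    # values[: i + 1]; maximum.accumulate is the analog for maximization.
    acc = np.minimum.accumulate if minimize else np.maximum.accumulate
    return acc(values)

best = cumulative_best([3.0, 5.0, 2.0, 4.0, 1.0])
# → array([3., 3., 2., 2., 1.])
```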
Benchmark Suite¶
-
class ax.benchmark.benchmark_suite.BOBenchmarkingSuite[source]¶ Bases: object
Suite that runs all standard Bayesian optimization benchmarks.
-
add_run(setup, strategy_name)[source]¶ Add a run (BenchmarkSetup) to the benchmark results.
- Parameters
setup (BenchmarkSetup) – Run to add
strategy_name (str) – Name of strategy used for this run
- Return type
None
-
run(num_runs, total_iterations, bo_strategies, bo_problems, batch_size=1, raise_all_errors=False)[source]¶ Run all standard BayesOpt benchmarks.
- Parameters
num_runs (int) – How many times to run each test.
total_iterations (int) – How many iterations to run each optimization for.
bo_strategies (List[GenerationStrategy]) – GenerationStrategies representing each method to benchmark.
bo_problems (List[BenchmarkProblem]) – Problems to benchmark the methods on.
batch_size (int) – Number of arms to be generated and evaluated in optimization at once.
raise_all_errors (bool) – Debugging setting; set to true if all encountered errors should be raised right away (and interrupt the benchmarking) rather than logged and recorded.