ax.benchmark

Benchmark Problems

class ax.benchmark.benchmark_problem.BenchmarkProblem[source]

Bases: tuple

Contains features that describe a benchmarking problem: its name, its global optimum (maximum for maximization problems, minimum for minimization), its optimization configuration, and its search space.

Parameters:
  • name – name of this problem
  • fbest – global optimum
  • optimization_config – optimization configuration
  • search_space – search space, on which this problem is defined
fbest

Alias for field number 1

name

Alias for field number 0

optimization_config

Alias for field number 2

search_space

Alias for field number 3
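
A minimal construction sketch (illustrative only: the Branin domain, metric name, and fbest value are example assumptions, and the ax.core classes are imported from outside this module):

from ax.benchmark.benchmark_problem import BenchmarkProblem
from ax.core.metric import Metric
from ax.core.objective import Objective
from ax.core.optimization_config import OptimizationConfig
from ax.core.parameter import ParameterType, RangeParameter
from ax.core.search_space import SearchSpace

# Branin is used purely as an illustration; its known global minimum
# is ~0.397887 on the domain below.
search_space = SearchSpace(parameters=[
    RangeParameter(name="x1", parameter_type=ParameterType.FLOAT, lower=-5.0, upper=10.0),
    RangeParameter(name="x2", parameter_type=ParameterType.FLOAT, lower=0.0, upper=15.0),
])
optimization_config = OptimizationConfig(
    objective=Objective(metric=Metric(name="branin"), minimize=True),
)

problem = BenchmarkProblem(
    name="branin",                            # field 0
    fbest=0.397887,                           # field 1: global optimum
    optimization_config=optimization_config,  # field 2
    search_space=search_space,                # field 3
)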

Benchmark Runner

class ax.benchmark.benchmark_runner.BOBenchmarkRunner[source]

Bases: ax.benchmark.benchmark_runner.BenchmarkRunner

run_benchmark_run(setup, generation_strategy)[source]

Run a single full benchmark run of the given problem and method combination.

Return type: BenchmarkSetup
class ax.benchmark.benchmark_runner.BanditBenchmarkRunner[source]

Bases: ax.benchmark.benchmark_runner.BenchmarkRunner

run_benchmark_run(setup, generation_strategy)[source]

Run a single full benchmark run of the given problem and method combination.

Return type: BenchmarkSetup
class ax.benchmark.benchmark_runner.BenchmarkResult(objective_at_true_best, generator_changes, optimum, fit_times, gen_times)[source]

Bases: tuple

fit_times

Alias for field number 3

gen_times

Alias for field number 4

generator_changes

Alias for field number 1

objective_at_true_best

Alias for field number 0

optimum

Alias for field number 2

class ax.benchmark.benchmark_runner.BenchmarkRunner[source]

Bases: object

Runner that keeps track of benchmark runs and failures encountered during benchmarking.

aggregate_results()[source]

Pull results from each of the runs (BenchmarkSetups aka Experiments) and aggregate them into a BenchmarkResult for each problem.

Return type: Dict[str, BenchmarkResult]
errors

Messages from errors encountered while running the benchmark test.

Return type: List[str]
run_benchmark_run(setup, generation_strategy)[source]

Run a single full benchmark run of the given problem and method combination.

Return type: BenchmarkSetup
run_benchmark_test(setup, generation_strategy, num_runs=20, raise_all_errors=False)[source]

Run full benchmark test for the given method and problem combination. A benchmark test consists of repeated full benchmark runs.

Parameters:
  • setup (BenchmarkSetup) – the setup on which to execute benchmark runs; includes the benchmarking problem, total number of iterations, etc.
  • generation_strategy (GenerationStrategy) – generation strategy that defines which generation methods should be used in this benchmarking test
  • num_runs (int) – how many benchmark runs of the given problem and method combination to execute with the given setup for one benchmark test
  • raise_all_errors (bool) – debugging setting; set to True if all encountered errors should be raised right away (and interrupt the benchmarking) rather than logged and recorded
Return type: Dict[Tuple[str, str, int], BenchmarkSetup]
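
A usage sketch (the setup and generation_strategy variables are assumed to be constructed elsewhere; field access follows the BenchmarkResult aliases above):

from ax.benchmark.benchmark_runner import BOBenchmarkRunner

runner = BOBenchmarkRunner()

# `setup` is a BenchmarkSetup and `generation_strategy` a
# GenerationStrategy; both are assumed prebuilt.
runner.run_benchmark_test(
    setup=setup,
    generation_strategy=generation_strategy,
    num_runs=20,
)

# One BenchmarkResult per problem, keyed by problem name.
results = runner.aggregate_results()
for problem_name, result in results.items():
    print(problem_name, result.optimum, result.objective_at_true_best)

# Errors encountered during benchmarking are recorded, not raised,
# unless raise_all_errors=True was passed above.
for message in runner.errors:
    print(message)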

class ax.benchmark.benchmark_runner.BenchmarkSetup(problem, total_iterations=20, batch_size=1)[source]

Bases: ax.core.experiment.Experiment

An extension of Experiment, specific to benchmarking. Contains additional data, such as the benchmarking problem, iterations to run per benchmarking method and problem combination, etc.

Parameters:
  • problem (BenchmarkProblem) – description of the benchmarking problem for this setup
  • total_iterations (int) – how many optimization iterations to run
  • batch_size (int) – if this benchmark requires batch trials, the batch size for those. Defaults to 1
clone_reset()[source]

Create a clean copy of this benchmarking setup, with no run data attached to it.

Return type: BenchmarkSetup
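
A sketch of how clone_reset supports repeated runs (the `problem` variable is a BenchmarkProblem, e.g. the one constructed in the sketch above):

from ax.benchmark.benchmark_runner import BenchmarkSetup

setup = BenchmarkSetup(problem=problem, total_iterations=20, batch_size=1)

# Each benchmark run should start from a clean experiment, so a copy
# with no attached run data is taken before every run.
fresh_setup = setup.clone_reset()
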
ax.benchmark.benchmark_runner.get_model_times(setup)[source]
Return type: Tuple[float, float]
ax.benchmark.benchmark_runner.true_best_objective(optimization_config, true_values)[source]

Compute the true best objective value found by each iteration.

Parameters:
  • optimization_config (OptimizationConfig) – Optimization config
  • true_values (Dict[str, ndarray]) – Dictionary from metric name to array of value at each iteration.

Returns: Array of the cumulative best feasible value found by each iteration.

Return type: ndarray
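
The cumulative-best semantics can be illustrated with numpy (a sketch of the concept only, not the exact implementation; it ignores outcome constraints and assumes a single minimization objective named "branin"):

import numpy as np

# True values of the objective at each iteration.
true_values = {"branin": np.array([3.2, 1.1, 2.5, 0.7, 0.9])}

# For a minimization problem, the best value found by each iteration
# is the running minimum (running maximum for maximization).
cumulative_best = np.minimum.accumulate(true_values["branin"])
# -> array([3.2, 1.1, 1.1, 0.7, 0.7])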

Benchmark Suite

class ax.benchmark.benchmark_suite.BOBenchmarkingSuite[source]

Bases: object

Suite that runs all standard Bayesian optimization benchmarks.

generate_report(include_individual=False)[source]
Return type: str
run(num_runs, total_iterations, bo_strategies, bo_problems, batch_size=1, raise_all_errors=False)[source]

Run all standard BayesOpt benchmarks.

Parameters:
  • num_runs (int) – How many times to run each test.
  • total_iterations (int) – How many iterations to run each optimization for.
  • bo_strategies (List[GenerationStrategy]) – GenerationStrategies representing each method to benchmark.
  • bo_problems (List[BenchmarkProblem]) – Problems to benchmark the methods on.
  • batch_size (int) – Number of arms to be generated and evaluated in optimization at once.
  • raise_all_errors (bool) – Debugging setting; set to True if all encountered errors should be raised right away (and interrupt the benchmarking) rather than logged and recorded.
Return type: BOBenchmarkRunner
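
End-to-end usage might look like the following sketch (the `strategies` and `problems` lists are assumed to be constructed elsewhere; GenerationStrategy construction details depend on the Ax version):

from ax.benchmark.benchmark_suite import BOBenchmarkingSuite

suite = BOBenchmarkingSuite()

# `strategies` is a List[GenerationStrategy] and `problems` a
# List[BenchmarkProblem], e.g. the Branin problem sketched above.
runner = suite.run(
    num_runs=20,
    total_iterations=25,
    bo_strategies=strategies,
    bo_problems=problems,
    batch_size=1,
)

# Summarize results across all method/problem combinations.
report = suite.generate_report(include_individual=False)
print(report)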