ax.benchmark

Benchmark Problems

class ax.benchmark.benchmark_problem.BenchmarkProblem[source]

Bases: tuple

Contains features that describe a benchmarking problem: its name, its global optimum (maximum for maximization problems, minimum for minimization), its optimization configuration, and its search space.

Parameters
  • name – name of this problem

  • fbest – global optimum

  • optimization_config – optimization configuration

  • search_space – search space, on which this problem is defined

property fbest

Alias for field number 1

property name

Alias for field number 0

property optimization_config

Alias for field number 2

property search_space

Alias for field number 3

ax.benchmark.benchmark_problem.branin = BenchmarkProblem(name='Branin', fbest=0.397887, optimization_config=OptimizationConfig(objective=Objective(metric_name="branin_objective", minimize=True), outcome_constraints=[]), search_space=SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[-5.0, 10.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 15.0])], parameter_constraints=[]))
ax.benchmark.benchmark_problem.branin_max = BenchmarkProblem(name='Branin', fbest=294.0, optimization_config=OptimizationConfig(objective=Objective(metric_name="neg_branin", minimize=False), outcome_constraints=[]), search_space=SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[-5.0, 10.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 15.0])], parameter_constraints=[]))
ax.benchmark.benchmark_problem.hartmann6 = BenchmarkProblem(name='Hartmann6', fbest=-3.32237, optimization_config=OptimizationConfig(objective=Objective(metric_name="Hartmann6", minimize=True), outcome_constraints=[]), search_space=SearchSpace(parameters=[RangeParameter(name='x0', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]))
ax.benchmark.benchmark_problem.hartmann6_constrained = BenchmarkProblem(name='Hartmann6', fbest=-3.32237, optimization_config=OptimizationConfig(objective=Objective(metric_name="hartmann6", minimize=True), outcome_constraints=[OutcomeConstraint(l2norm <= 1.25)]), search_space=SearchSpace(parameters=[RangeParameter(name='x0', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]))
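
The predefined problems above can be used directly, and custom problems can be built the same way. A minimal sketch, assuming the standard ax.core constructors (Metric, Objective, OptimizationConfig, RangeParameter, SearchSpace); the "Sphere2D" problem and its metric name are hypothetical:

    from ax.benchmark.benchmark_problem import BenchmarkProblem, branin
    from ax.core.metric import Metric
    from ax.core.objective import Objective
    from ax.core.optimization_config import OptimizationConfig
    from ax.core.parameter import ParameterType, RangeParameter
    from ax.core.search_space import SearchSpace

    # Fields of the predefined problems are plain namedtuple properties.
    print(branin.name, branin.fbest)  # Branin 0.397887

    # A hypothetical custom problem: minimize x1^2 + x2^2 on [-5, 5]^2.
    sphere = BenchmarkProblem(
        name="Sphere2D",
        fbest=0.0,  # global minimum of the sphere function
        optimization_config=OptimizationConfig(
            objective=Objective(metric=Metric(name="sphere"), minimize=True)
        ),
        search_space=SearchSpace(
            parameters=[
                RangeParameter(
                    name="x1", parameter_type=ParameterType.FLOAT, lower=-5.0, upper=5.0
                ),
                RangeParameter(
                    name="x2", parameter_type=ParameterType.FLOAT, lower=-5.0, upper=5.0
                ),
            ]
        ),
    )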

Benchmark Runner

ax.benchmark.benchmark_runner.ALLOWED_RUN_RETRIES = 5
class ax.benchmark.benchmark_runner.BOBenchmarkRunner[source]

Bases: ax.benchmark.benchmark_runner.BenchmarkRunner

run_benchmark_run(setup, generation_strategy)[source]

Run a single full benchmark run of the given problem and method combination.

Return type

BenchmarkSetup

class ax.benchmark.benchmark_runner.BanditBenchmarkRunner[source]

Bases: ax.benchmark.benchmark_runner.BenchmarkRunner

run_benchmark_run(setup, generation_strategy)[source]

Run a single full benchmark run of the given problem and method combination.

Return type

BenchmarkSetup

class ax.benchmark.benchmark_runner.BenchmarkResult(objective_at_true_best, generator_changes, optimum, fit_times, gen_times)[source]

Bases: tuple

property fit_times

Alias for field number 3

property gen_times

Alias for field number 4

property generator_changes

Alias for field number 1

property objective_at_true_best

Alias for field number 0

property optimum

Alias for field number 2
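
Since BenchmarkResult is a namedtuple, each property above aliases a positional field. A small sketch with placeholder values, purely to illustrate field access; real instances come from BenchmarkRunner.aggregate_results():

    from ax.benchmark.benchmark_runner import BenchmarkResult

    result = BenchmarkResult(
        objective_at_true_best=None,  # placeholders; real values are filled in
        generator_changes=None,       # by aggregate_results()
        optimum=0.397887,
        fit_times=None,
        gen_times=None,
    )
    assert result.optimum == result[2]  # named access aliases field number 2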

class ax.benchmark.benchmark_runner.BenchmarkRunner[source]

Bases: object

Runner that keeps track of benchmark runs and failures encountered during benchmarking.

aggregate_results()[source]

Pull results from each of the runs (BenchmarkSetups aka Experiments) and aggregate them into a BenchmarkResult for each problem.

Return type

Dict[str, BenchmarkResult]

property errors

Messages from errors encountered while running benchmark tests.

Return type

List[str]

abstract run_benchmark_run(setup, generation_strategy)[source]

Run a single full benchmark run of the given problem and method combination.

Return type

BenchmarkSetup

run_benchmark_test(setup, generation_strategy, num_runs=20, raise_all_errors=False)[source]

Run a full benchmark test for the given method and problem combination. A benchmark test consists of repeated full benchmark runs.

Parameters
  • setup (BenchmarkSetup) – setup on which to execute the runs; includes a benchmarking problem, total number of iterations, etc.

  • generation_strategy (GenerationStrategy) – generation strategy that defines which generation methods should be used in this benchmarking test

  • num_runs (int) – how many benchmark runs of the given problem and method combination to execute with the given setup in one benchmark test

  • raise_all_errors (bool) – debugging setting; set to True if all encountered errors should be raised right away (and interrupt the benchmarking) rather than logged and recorded

Return type

Dict[Tuple[str, str, int], BenchmarkSetup]
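
A minimal end-to-end sketch: run the predefined Sobol strategy (from ax.benchmark.benchmark_suite, documented below) on the Branin problem, then aggregate the results:

    from ax.benchmark.benchmark_problem import branin
    from ax.benchmark.benchmark_runner import BenchmarkSetup, BOBenchmarkRunner
    from ax.benchmark.benchmark_suite import BOStrategies

    runner = BOBenchmarkRunner()
    setup = BenchmarkSetup(problem=branin, total_iterations=20, batch_size=1)
    # Keys of the returned dict are presumably (problem, method, run index).
    runs = runner.run_benchmark_test(
        setup=setup, generation_strategy=BOStrategies[0], num_runs=5
    )
    results = runner.aggregate_results()  # Dict[str, BenchmarkResult]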

class ax.benchmark.benchmark_runner.BenchmarkSetup(problem, total_iterations=20, batch_size=1)[source]

Bases: ax.core.experiment.Experiment

An extension of Experiment, specific to benchmarking. Contains additional data, such as the benchmarking problem, iterations to run per benchmarking method and problem combination, etc.

Parameters
  • problem (BenchmarkProblem) – description of the benchmarking problem for this setup

  • total_iterations (int) – how many optimization iterations to run

  • batch_size (int) – if this benchmark requires batch trials, the batch size for those trials. Defaults to 1

clone_reset()[source]

Create a clean copy of this benchmarking setup, with no run data attached to it.

Return type

BenchmarkSetup
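
A minimal sketch: create a setup for the Branin problem and a clean clone for a fresh run:

    from ax.benchmark.benchmark_problem import branin
    from ax.benchmark.benchmark_runner import BenchmarkSetup

    setup = BenchmarkSetup(problem=branin, total_iterations=25, batch_size=1)
    fresh = setup.clone_reset()  # same problem and settings, no run data attached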

ax.benchmark.benchmark_runner.PROBLEM_METHOD_DELIMETER = '_on_'
ax.benchmark.benchmark_runner.RUN_DELIMETER = '_run_'
ax.benchmark.benchmark_runner.get_model_times(setup)[source]
Return type

Tuple[float, float]
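
The two floats presumably aggregate model fit time and candidate generation time for the setup's runs, mirroring the fit_times and gen_times fields of BenchmarkResult; this reading is an assumption. A sketch, with `setup` a BenchmarkSetup that has already been run:

    from ax.benchmark.benchmark_runner import get_model_times

    fit_time, gen_time = get_model_times(setup)  # assumed (fit, generation) order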

ax.benchmark.benchmark_runner.logger = <Logger ax.benchmark.benchmark_runner (INFO)>
ax.benchmark.benchmark_runner.true_best_objective(optimization_config, true_values)[source]

Compute the true best objective value found by each iteration.

Parameters
  • optimization_config (OptimizationConfig) – Optimization config

  • true_values (Dict[str, ndarray]) – Dictionary from metric name to array of values at each iteration.

Returns: Array of the cumulative best feasible value at each iteration.

Return type

ndarray
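
A sketch using the predefined branin problem's minimization config (metric name "branin_objective", per its definition above); the expected output in the final comment assumes the running-minimum semantics described in the docstring:

    import numpy as np

    from ax.benchmark.benchmark_problem import branin
    from ax.benchmark.benchmark_runner import true_best_objective

    true_values = {"branin_objective": np.array([3.1, 0.9, 1.7, 0.5])}
    best = true_best_objective(branin.optimization_config, true_values)
    # For minimization, best should be the running minimum: [3.1, 0.9, 0.9, 0.5]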

Benchmark Suite

class ax.benchmark.benchmark_suite.BOBenchmarkingSuite[source]

Bases: object

Suite that runs all standard Bayesian optimization benchmarks.

generate_report(include_individual=False)[source]

Generate a report of the suite's benchmarking results as a string.

Return type

str

run(num_runs, total_iterations, bo_strategies, bo_problems, batch_size=1, raise_all_errors=False)[source]

Run all standard BayesOpt benchmarks.

Parameters
  • num_runs (int) – How many times to run each test.

  • total_iterations (int) – How many iterations to run each optimization for.

  • bo_strategies (List[GenerationStrategy]) – GenerationStrategies representing each method to benchmark.

  • bo_problems (List[BenchmarkProblem]) – Problems to benchmark the methods on.

  • batch_size (int) – Number of arms to be generated and evaluated in optimization at once.

  • raise_all_errors (bool) – Debugging setting; set to True if all encountered errors should be raised right away (and interrupt the benchmarking) rather than logged and recorded.

Return type

BOBenchmarkRunner
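
A minimal sketch of a full suite run over the default strategies and problems (BOStrategies and BOProblems, listed below), followed by report generation:

    from ax.benchmark.benchmark_suite import (
        BOBenchmarkingSuite,
        BOProblems,
        BOStrategies,
    )

    suite = BOBenchmarkingSuite()
    runner = suite.run(
        num_runs=3,
        total_iterations=20,
        bo_strategies=BOStrategies,
        bo_problems=BOProblems,
    )
    print(runner.errors)  # messages from any errors logged during benchmarking
    report = suite.generate_report()  # report as a string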

ax.benchmark.benchmark_suite.BOProblems = [BenchmarkProblem(name='Hartmann6', fbest=-3.32237, optimization_config=OptimizationConfig(objective=Objective(metric_name="hartmann6", minimize=True), outcome_constraints=[OutcomeConstraint(l2norm <= 1.25)]), search_space=SearchSpace(parameters=[RangeParameter(name='x0', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[])), BenchmarkProblem(name='Branin', fbest=294.0, optimization_config=OptimizationConfig(objective=Objective(metric_name="neg_branin", minimize=False), outcome_constraints=[]), search_space=SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[-5.0, 10.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 15.0])], parameter_constraints=[]))]
ax.benchmark.benchmark_suite.BOStrategies = [GenerationStrategy(name='Sobol', steps=[Sobol for 50 arms], generated 0 arm(s) so far), GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 5 arms, GPEI for subsequent arms], generated 0 arm(s) so far)]