ax.benchmark

Benchmark

Benchmark Problem

class ax.benchmark.benchmark_problem.BenchmarkProblem(search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig, name: Optional[str] = None, optimal_value: Optional[float] = None, evaluate_suggested: bool = True)[source]

Bases: ax.utils.common.base.Base

Benchmark problem, represented in terms of an Ax search space and optimization config. Useful for representing complex problems that involve constraints, non-range parameters, etc.

Note: if this problem is computationally intensive, consider setting the evaluate_suggested argument to False.

Parameters
  • search_space – Problem domain.

  • optimization_config – Problem objective and constraints. Note that by default, an Objective in the OptimizationConfig has minimize set to False, so by default an OptimizationConfig describes a maximization problem.

  • name – Optional name of the problem; defaults to the name of the objective metric (e.g., “Branin”, or “Branin_constrained” if constraints are present). The name of the problem is reflected in the names of the benchmarking experiments (e.g., “Sobol_on_Branin”).

  • optimal_value – Optional target objective value for the optimization.

  • evaluate_suggested – Whether the model-predicted best value should be evaluated when benchmarking on this problem. Note that in practice, this means that for every model-generated trial, an extra point will be evaluated. This extra point is often different from the model-generated trials, since those trials aim to both explore and exploit, so their aim is not usually to suggest the current model-predicted optimum.

evaluate_suggested: bool
name: str
optimal_value: Optional[float]
optimization_config: ax.core.optimization_config.OptimizationConfig
search_space: ax.core.search_space.SearchSpace
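
A minimal sketch of constructing a BenchmarkProblem for the classic Branin function, assuming Ax’s built-in BraninMetric and standard core classes (0.397887 is the known Branin minimum):

    from ax.benchmark.benchmark_problem import BenchmarkProblem
    from ax.core.objective import Objective
    from ax.core.optimization_config import OptimizationConfig
    from ax.core.parameter import ParameterType, RangeParameter
    from ax.core.search_space import SearchSpace
    from ax.metrics.branin import BraninMetric

    # Two-dimensional Branin domain as an Ax search space.
    branin_search_space = SearchSpace(
        parameters=[
            RangeParameter(
                name="x1", parameter_type=ParameterType.FLOAT, lower=-5.0, upper=10.0
            ),
            RangeParameter(
                name="x2", parameter_type=ParameterType.FLOAT, lower=0.0, upper=15.0
            ),
        ]
    )

    # Branin is a minimization problem, so the objective sets minimize=True.
    branin_problem = BenchmarkProblem(
        name="Branin",
        search_space=branin_search_space,
        optimization_config=OptimizationConfig(
            objective=Objective(
                metric=BraninMetric(name="branin", param_names=["x1", "x2"]),
                minimize=True,
            )
        ),
        optimal_value=0.397887,
    )
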
class ax.benchmark.benchmark_problem.SimpleBenchmarkProblem(f: Union[ax.utils.measurement.synthetic_functions.SyntheticFunction, function], name: Optional[str] = None, domain: Optional[List[Tuple[float, float]]] = None, optimal_value: Optional[float] = None, minimize: bool = False, noise_sd: float = 0.0, evaluate_suggested: bool = True)[source]

Bases: ax.benchmark.benchmark_problem.BenchmarkProblem

Benchmark problem, represented in terms of simplified constructions: a callable function, a domain that consists of ranges, etc. This problem does not support parameter or outcome constraints.

Note: if this problem is computationally intensive, consider setting the evaluate_suggested argument to False.

Parameters
  • f – Ax SyntheticFunction or an ad-hoc callable that evaluates points represented as nd-arrays. Input to the callable should be an (n x d) array, where n is the number of points to evaluate and d is the dimensionality of the points. Returns a float or a (1 x n) array. Used as the problem objective.

  • name – Optional name of the problem; defaults to the name of the objective metric (e.g., “Branin”, or “Branin_constrained” if constraints are present). The name of the problem is reflected in the names of the benchmarking experiments (e.g., “Sobol_on_Branin”).

  • domain – Problem domain as a list of tuples. Parameter names are derived from this list as {“x1”, …, “xN”}, where N is its length.

  • optimal_value – Optional target objective value for the optimization.

  • minimize – Whether this is a minimization problem; defaults to False.

  • noise_sd – Measure of the noise that will be added to the observations during the optimization. During the evaluation phase, true values will be extracted to measure a method’s performance. Only applicable when using a known SyntheticFunction as the f argument.

  • evaluate_suggested – Whether the model-predicted best value should be evaluated when benchmarking on this problem. Note that in practice, this means that for every model-generated trial, an extra point will be evaluated. This extra point is often different from the model-generated trials, since those trials aim to both explore and exploit, so their aim is not usually to suggest the current model-predicted optimum.

domain: List[Tuple[float, float]]
domain_as_ax_client_parameters() → List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]][source]
evaluate_suggested: bool
f: Union[ax.utils.measurement.synthetic_functions.SyntheticFunction, function]
minimize: bool
name: str
noise_sd: float
optimal_value: Optional[float]
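
A hedged sketch of both usages: the built-in branin synthetic function ships with ax.utils.measurement.synthetic_functions, while sum_of_squares below is a made-up illustration of an ad-hoc callable following the (n x d) input convention described above.

    import numpy as np

    from ax.benchmark.benchmark_problem import SimpleBenchmarkProblem
    from ax.utils.measurement.synthetic_functions import branin

    # Known synthetic function: domain and optimum are known to Ax, so noise
    # can be added during optimization while true values are used for evaluation.
    branin_problem = SimpleBenchmarkProblem(f=branin, noise_sd=0.1, minimize=True)

    # Ad-hoc callable: the domain must be given explicitly; parameters are
    # named "x1", ..., "xN" based on the length of `domain`.
    def sum_of_squares(x: np.ndarray) -> np.ndarray:
        # x is an (n x d) array of points; return a (1 x n) array of values.
        return (x ** 2).sum(axis=-1).reshape(1, -1)

    custom_problem = SimpleBenchmarkProblem(
        f=sum_of_squares,
        name="SumOfSquares",
        domain=[(-5.0, 5.0), (-5.0, 5.0)],
        optimal_value=0.0,
        minimize=True,
    )
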

Benchmark Result

class ax.benchmark.benchmark_result.BenchmarkResult(true_performance: Dict[str, numpy.ndarray], fit_times: Dict[str, List[float]], gen_times: Dict[str, List[float]], optimum: Optional[float] = None, model_transitions: Optional[Dict[str, Optional[List[int]]]] = None, is_multi_objective: bool = False, pareto_frontiers: Optional[Dict[str, ax.plot.pareto_utils.ParetoFrontierResults]] = None)[source]

Bases: object

fit_times: Dict[str, List[float]]
gen_times: Dict[str, List[float]]
is_multi_objective: bool = False
model_transitions: Optional[Dict[str, Optional[List[int]]]] = None
optimum: Optional[float] = None
pareto_frontiers: Optional[Dict[str, ax.plot.pareto_utils.ParetoFrontierResults]] = None
true_performance: Dict[str, numpy.ndarray]
ax.benchmark.benchmark_result.aggregate_problem_results(runs: Dict[str, List[ax.core.experiment.Experiment]], problem: ax.benchmark.benchmark_problem.BenchmarkProblem, model_transitions: Optional[Dict[str, List[int]]] = None, is_asynchronous: bool = False, **kwargs) → ax.benchmark.benchmark_result.BenchmarkResult[source]
ax.benchmark.benchmark_result.extract_optimization_trace(experiment: ax.core.experiment.Experiment, problem: ax.benchmark.benchmark_problem.BenchmarkProblem, is_asynchronous: bool, **kwargs) → numpy.ndarray[source]

Extract outcomes of an experiment: best cumulative objective as numpy ND-array, and total model-fitting time and candidate generation time as floats.

ax.benchmark.benchmark_result.generate_report(benchmark_results: Dict[str, ax.benchmark.benchmark_result.BenchmarkResult], errors_encountered: Optional[List[str]] = None, include_individual_method_plots: bool = False, notebook_env: bool = False) → str[source]
ax.benchmark.benchmark_result.make_plots(benchmark_result: ax.benchmark.benchmark_result.BenchmarkResult, problem_name: str, include_individual: bool) → List[ax.plot.base.AxPlotConfig][source]
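
A sketch of turning raw benchmarking runs into a report. It assumes that runs (a Dict[str, List[Experiment]] mapping method names to the experiments they produced) and branin_problem come from a prior benchmarking run and are not defined here:

    from ax.benchmark.benchmark_result import (
        aggregate_problem_results,
        generate_report,
    )

    # Aggregate the per-method experiments for one problem into a BenchmarkResult.
    result = aggregate_problem_results(runs=runs, problem=branin_problem)

    # Render all results into an HTML report string.
    report_html = generate_report(
        benchmark_results={"Branin": result},
        include_individual_method_plots=True,
    )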

Benchmark

Benchmark Utilities

ax.benchmark.utils.get_corresponding(value_or_matrix: Union[int, List[List[int]]], row: int, col: int) → int[source]

If value_or_matrix is a matrix, extract the value in the cell specified by row and col. If value_or_matrix is a scalar, just return it.
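
For example (the scalar/matrix convention here follows the description above; in a full benchmark run the rows and columns typically index problems and methods):

    from ax.benchmark.utils import get_corresponding

    # Scalar: the same value applies to every cell.
    get_corresponding(5, row=1, col=2)  # -> 5

    # Matrix: pick the entry at the given row and column.
    get_corresponding([[10, 20], [30, 40]], row=1, col=0)  # -> 30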

ax.benchmark.utils.get_problems_and_methods(problems: Optional[Union[List[ax.benchmark.benchmark_problem.BenchmarkProblem], List[str]]] = None, methods: Optional[Union[List[ax.modelbridge.generation_strategy.GenerationStrategy], List[str]]] = None) → Tuple[List[ax.benchmark.benchmark_problem.BenchmarkProblem], List[ax.modelbridge.generation_strategy.GenerationStrategy]][source]

Validate problems and methods; find them by string keys if passed as strings.

BoTorch Methods

ax.benchmark.botorch_methods.fixed_noise_gp_model_constructor(Xs: List[torch.Tensor], Ys: List[torch.Tensor], Yvars: List[torch.Tensor], task_features: List[int], fidelity_features: List[int], metric_names: List[str], state_dict: Optional[Dict[str, torch.Tensor]] = None, refit_model: bool = True, **kwargs: Any) → botorch.models.model.Model[source]
ax.benchmark.botorch_methods.make_basic_generation_strategy(name: str, acquisition: str, num_initial_trials: int = 14, surrogate_model_constructor: Callable = <function singletask_gp_model_constructor>) → ax.modelbridge.generation_strategy.GenerationStrategy[source]
ax.benchmark.botorch_methods.singletask_gp_model_constructor(Xs: List[torch.Tensor], Ys: List[torch.Tensor], Yvars: List[torch.Tensor], task_features: List[int], fidelity_features: List[int], metric_names: List[str], state_dict: Optional[Dict[str, torch.Tensor]] = None, refit_model: bool = True, **kwargs: Any) → botorch.models.model.Model[source]
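
A sketch of assembling a benchmarking method from these helpers. The “NEI” acquisition label is an assumption here; substitute whichever acquisition identifier your Ax version accepts:

    from ax.benchmark.botorch_methods import (
        fixed_noise_gp_model_constructor,
        make_basic_generation_strategy,
    )

    # 14 quasi-random initialization trials, then BoTorch-based optimization
    # with a fixed-noise GP surrogate.
    strategy = make_basic_generation_strategy(
        name="Sobol+fixed_noise_NEI",
        acquisition="NEI",  # assumed acquisition label, not confirmed by the docs above
        num_initial_trials=14,
        surrogate_model_constructor=fixed_noise_gp_model_constructor,
    )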

Modular BoTorch Benchmarking

Standard Methods

Standard Problems