ax.benchmark¶
Benchmark¶
Benchmark Method¶
- class ax.benchmark.benchmark_method.BenchmarkMethod(name: str, generation_strategy: GenerationStrategy, scheduler_options: SchedulerOptions)[source]¶
Bases: Base
Benchmark method, represented in terms of Ax generation strategy (which tells us which models to use when) and scheduler options (which tell us extra execution information like maximum parallelism, early stopping configuration, etc.). Note: if BenchmarkMethod.scheduler_options.total_trials is lower than BenchmarkProblem.num_trials, only the number of trials specified in the former will be run.
- generation_strategy: GenerationStrategy¶
- scheduler_options: SchedulerOptions¶
- ax.benchmark.benchmark_method.get_sequential_optimization_scheduler_options(timeout_hours: int = 4) SchedulerOptions [source]¶
The typical SchedulerOptions used in benchmarking.
- Parameters:
timeout_hours – The maximum amount of time (in hours) to run each benchmark replication. Defaults to 4 hours.
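For illustration, a BenchmarkMethod can be assembled from any Ax GenerationStrategy together with the scheduler options returned by the helper above; the Sobol-then-GPEI strategy and trial counts below are assumptions chosen for the sketch, not prescribed defaults.
```python
from ax.benchmark.benchmark_method import (
    BenchmarkMethod,
    get_sequential_optimization_scheduler_options,
)
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

# Illustrative Sobol -> GPEI strategy; any GenerationStrategy works here.
generation_strategy = GenerationStrategy(
    name="SOBOL+GPEI",
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=5),
        GenerationStep(model=Models.GPEI, num_trials=-1),  # -1: run until the budget is exhausted
    ],
)

method = BenchmarkMethod(
    name="sobol_gpei",
    generation_strategy=generation_strategy,
    # Typical benchmarking options; 4-hour replication timeout by default.
    scheduler_options=get_sequential_optimization_scheduler_options(),
)
```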
Benchmark Problem¶
- class ax.benchmark.benchmark_problem.BenchmarkProblem(name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, runner: Runner, num_trials: int, infer_noise: bool, tracking_metrics: Optional[List[Metric]] = None)[source]¶
Bases: Base, BenchmarkProblemBase
Benchmark problem, represented in terms of Ax search space, optimization config, and runner.
- classmethod from_botorch(test_problem_class: Type[BaseTestProblem], test_problem_kwargs: Dict[str, Any], num_trials: int, infer_noise: bool = True) BenchmarkProblem [source]¶
Create a BenchmarkProblem from a BoTorch BaseTestProblem using specialized Metrics and Runners. The test problem’s result will be computed on the Runner and retrieved by the Metric.
- Parameters:
test_problem_class – The BoTorch test problem class which will be used to define the search_space, optimization_config, and runner.
test_problem_kwargs – Keyword arguments used to instantiate the test_problem_class.
num_trials – Simply the num_trials of the BenchmarkProblem created.
infer_noise – Whether noise will be inferred. This is separate from whether synthetic noise is added to the problem, which is controlled by the noise_std of the test problem.
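As a sketch, a problem can be constructed directly from a BoTorch test function; Branin and the 30-trial budget below are illustrative choices.
```python
from ax.benchmark.benchmark_problem import BenchmarkProblem
from botorch.test_functions.synthetic import Branin

# Build a benchmark problem around the (noiseless) Branin test function.
problem = BenchmarkProblem.from_botorch(
    test_problem_class=Branin,
    test_problem_kwargs={},  # e.g. {"noise_std": 0.1} to add synthetic noise
    num_trials=30,           # illustrative budget
    infer_noise=True,
)
```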
- class ax.benchmark.benchmark_problem.BenchmarkProblemBase[source]¶
Bases: ABC
Specifies the interface any benchmark problem must adhere to.
Subclasses include BenchmarkProblem, SurrogateBenchmarkProblem, and MOOSurrogateBenchmarkProblem.
- optimization_config: OptimizationConfig¶
- search_space: SearchSpace¶
- class ax.benchmark.benchmark_problem.MultiObjectiveBenchmarkProblem(maximum_hypervolume: float, reference_point: List[float], *, name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, runner: Runner, num_trials: int, infer_noise: bool, tracking_metrics: Optional[List[Metric]] = None)[source]¶
Bases: BenchmarkProblem
A BenchmarkProblem supporting multiple objectives. Rather than knowing each objective’s optimal value, we track a known maximum hypervolume computed from a given reference point.
- classmethod from_botorch_multi_objective(test_problem_class: Type[MultiObjectiveTestProblem], test_problem_kwargs: Dict[str, Any], num_trials: int, infer_noise: bool = True) MultiObjectiveBenchmarkProblem [source]¶
Create a MultiObjectiveBenchmarkProblem from a BoTorch MultiObjectiveTestProblem using specialized Metrics and Runners. The test problem’s result will be computed on the Runner once per trial, and each Metric will retrieve its own result by index.
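A minimal sketch, using BoTorch’s BraninCurrin as an illustrative two-objective test problem; the trial budget is arbitrary.
```python
from ax.benchmark.benchmark_problem import MultiObjectiveBenchmarkProblem
from botorch.test_functions.multi_objective import BraninCurrin

# Two-objective problem; the known maximum hypervolume and reference point
# presumably come from the BoTorch test problem itself.
moo_problem = MultiObjectiveBenchmarkProblem.from_botorch_multi_objective(
    test_problem_class=BraninCurrin,
    test_problem_kwargs={},
    num_trials=30,  # illustrative budget
)
```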
- class ax.benchmark.benchmark_problem.SingleObjectiveBenchmarkProblem(optimal_value: float, *, name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, runner: Runner, num_trials: int, infer_noise: bool, tracking_metrics: Optional[List[Metric]] = None)[source]¶
Bases: BenchmarkProblem
The most basic BenchmarkProblem, with a single objective and a known optimal value.
- classmethod from_botorch_synthetic(test_problem_class: Type[SyntheticTestFunction], test_problem_kwargs: Dict[str, Any], num_trials: int, infer_noise: bool = True) SingleObjectiveBenchmarkProblem [source]¶
Create a SingleObjectiveBenchmarkProblem from a BoTorch SyntheticTestFunction using specialized Metrics and Runners. The test problem’s result will be computed on the Runner and retrieved by the Metric.
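A minimal sketch, using BoTorch’s six-dimensional Hartmann function as an illustrative example:
```python
from ax.benchmark.benchmark_problem import SingleObjectiveBenchmarkProblem
from botorch.test_functions.synthetic import Hartmann

# Hartmann-6 has a known optimal value, which the problem records.
hartmann6 = SingleObjectiveBenchmarkProblem.from_botorch_synthetic(
    test_problem_class=Hartmann,
    test_problem_kwargs={"dim": 6},
    num_trials=50,  # illustrative budget
)
```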
Benchmark Result¶
- class ax.benchmark.benchmark_result.AggregatedBenchmarkResult(name: str, results: List[BenchmarkResult], optimization_trace: pandas.DataFrame, score_trace: pandas.DataFrame, fit_time: List[float], gen_time: List[float])[source]¶
Bases: Base
The result of a benchmark test, or series of replications. Scalar data present in the BenchmarkResult is here represented as (mean, sem) pairs.
- classmethod from_benchmark_results(results: List[BenchmarkResult]) AggregatedBenchmarkResult [source]¶
Aggregates a list of BenchmarkResults. For various reasons (timeout, errors, etc.) each BenchmarkResult may have a different number of trials; aggregated traces and statistics are computed on, and truncated to, the minimum trial count so that every replication is included.
- optimization_trace: pandas.DataFrame¶
- results: List[BenchmarkResult]¶
- score_trace: pandas.DataFrame¶
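A self-contained sketch with two hand-made replications (all values are illustrative, not real benchmark output):
```python
import numpy as np

from ax.benchmark.benchmark_result import AggregatedBenchmarkResult, BenchmarkResult

# Two toy replications of the same (problem, method) pair with different seeds.
replication_results = [
    BenchmarkResult(
        name="toy_problem|toy_method",
        seed=seed,
        optimization_trace=np.array([1.0, 0.6, 0.4, 0.3]) + 0.05 * seed,
        score_trace=np.array([0.0, 40.0, 60.0, 70.0]) - 1.0 * seed,
        fit_time=1.0,
        gen_time=0.5,
    )
    for seed in (0, 1)
]

aggregated = AggregatedBenchmarkResult.from_benchmark_results(replication_results)
# DataFrames of (mean, sem) per trial, per the class description above.
print(aggregated.optimization_trace)
print(aggregated.score_trace)
```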
- class ax.benchmark.benchmark_result.BenchmarkResult(name: str, seed: int, optimization_trace: ndarray, score_trace: ndarray, fit_time: float, gen_time: float, experiment: Optional[Experiment] = None, experiment_storage_id: Optional[str] = None)[source]¶
Bases: Base
The result of a single optimization loop from one (BenchmarkProblem, BenchmarkMethod) pair.
- experiment: Optional[Experiment] = None¶
- optimization_trace: ndarray¶
- score_trace: ndarray¶
Benchmark¶
Scored Benchmark¶
Benchmark Methods GPEI and MOO¶
- ax.benchmark.methods.gpei_and_moo.get_gpei_default() BenchmarkMethod [source]¶
- ax.benchmark.methods.gpei_and_moo.get_moo_default() BenchmarkMethod [source]¶
Benchmark Methods Modular BoTorch¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_acquisition(acquisition_cls: Type[AcquisitionFunction], acquisition_options: Optional[Dict[str, Any]] = None) BenchmarkMethod [source]¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_default() BenchmarkMethod [source]¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_fixed_noise_gp_qnehvi() BenchmarkMethod [source]¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_fixed_noise_gp_qnei() BenchmarkMethod [source]¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_saas_fully_bayesian_single_task_gp(botorch_acqf_class: Type[AcquisitionFunction]) BenchmarkMethod [source]¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_saas_fully_bayesian_single_task_gp_qnehvi() BenchmarkMethod [source]¶
- ax.benchmark.methods.modular_botorch.get_sobol_botorch_modular_saas_fully_bayesian_single_task_gp_qnei() BenchmarkMethod [source]¶
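As an illustration, get_sobol_botorch_modular_acquisition can be parameterized by any BoTorch acquisition class; qNoisyExpectedImprovement below is purely an example choice.
```python
from ax.benchmark.methods.modular_botorch import get_sobol_botorch_modular_acquisition
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement

# Sobol initialization followed by Modular BoTorch with the given acquisition.
method = get_sobol_botorch_modular_acquisition(
    acquisition_cls=qNoisyExpectedImprovement,
)
```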
Benchmark Methods SAASBO¶
- ax.benchmark.methods.saasbo.get_saasbo_default() BenchmarkMethod [source]¶
- ax.benchmark.methods.saasbo.get_saasbo_moo_default() BenchmarkMethod [source]¶
Benchmark Methods Choose Generation Strategy¶
- ax.benchmark.methods.choose_generation_strategy.get_choose_generation_strategy_method(problem: BenchmarkProblemBase) BenchmarkMethod [source]¶
Benchmark Problems Registry¶
- class ax.benchmark.problems.registry.BenchmarkProblemRegistryEntry(factory_fn: Callable[..., ax.benchmark.benchmark_problem.BenchmarkProblem], factory_kwargs: Dict[str, Any])[source]¶
Bases: object
- factory_fn: Callable[[...], BenchmarkProblem]¶
- ax.benchmark.problems.registry.get_problem(problem_name: str, **additional_kwargs: Any) BenchmarkProblem [source]¶
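For example, a registered problem can be retrieved by name; the registry key "hartmann6" and the num_trials override below are assumptions for illustration.
```python
from ax.benchmark.problems.registry import get_problem

# "hartmann6" is assumed to be a registered problem name (illustrative only);
# factory kwargs such as num_trials can be overridden at lookup time.
problem = get_problem("hartmann6", num_trials=25)
```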
Benchmark Problems High Dimensional Embedding¶
- ax.benchmark.problems.hd_embedding.embed_higher_dimension(problem: BenchmarkProblem, total_dimensionality: int) BenchmarkProblem [source]¶
Return a new BenchmarkProblem with enough RangeParameters added to the search space to bring its total dimensionality up to total_dimensionality, and with total_dimensionality appended to its name.
The search space of the original problem is within the search space of the new problem, and the constraints are copied from the original problem.
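For example, a low-dimensional problem can be padded out to a larger search space; the registry key and the 30-dimensional target below are illustrative assumptions.
```python
from ax.benchmark.problems.hd_embedding import embed_higher_dimension
from ax.benchmark.problems.registry import get_problem

# Pad an assumed registered 6D problem out to 30 dimensions; the original
# search space and constraints are preserved within the new problem.
base_problem = get_problem("hartmann6")  # illustrative registry key
hd_problem = embed_higher_dimension(problem=base_problem, total_dimensionality=30)
```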
Benchmark Problems Surrogate¶
- class ax.benchmark.problems.surrogate.MOOSurrogateBenchmarkProblem(*, name: str, search_space: SearchSpace, optimization_config: MultiObjectiveOptimizationConfig, num_trials: int, infer_noise: bool, maximum_hypervolume: float, reference_point: List[float], metric_names: List[str], get_surrogate_and_datasets: Optional[Callable[[], Tuple[Surrogate, List[SupervisedDataset]]]] = None, tracking_metrics: Optional[List[Metric]] = None, _runner: Optional[Runner] = None)[source]¶
Bases: SurrogateBenchmarkProblemBase
Has the same attributes/properties as a MultiObjectiveBenchmarkProblem, but its runner is not constructed until needed, allowing construction of the surrogate to be deferred.
Simple aspects of the problem, such as its search space, are defined immediately, while the surrogate is only constructed when the runner is needed, in order to avoid expensive operations like downloading files and fitting a model.
- optimization_config: MultiObjectiveOptimizationConfig¶
- class ax.benchmark.problems.surrogate.SOOSurrogateBenchmarkProblem(*, name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, num_trials: int, infer_noise: bool, optimal_value: float, metric_names: List[str], get_surrogate_and_datasets: Optional[Callable[[], Tuple[Surrogate, List[SupervisedDataset]]]] = None, tracking_metrics: Optional[List[Metric]] = None, _runner: Optional[Runner] = None)[source]¶
Bases: SurrogateBenchmarkProblemBase
Has the same attributes/properties as a SingleObjectiveBenchmarkProblem, but allows for constructing from a surrogate.
- class ax.benchmark.problems.surrogate.SurrogateBenchmarkProblemBase(*, name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, num_trials: int, infer_noise: bool, metric_names: List[str], get_surrogate_and_datasets: Optional[Callable[[], Tuple[Surrogate, List[SupervisedDataset]]]] = None, tracking_metrics: Optional[List[Metric]] = None, _runner: Optional[Runner] = None)[source]¶
Bases: Base, BenchmarkProblemBase
Base class for SOOSurrogateBenchmarkProblem and MOOSurrogateBenchmarkProblem.
Allows for lazy creation of objects needed to construct a runner, including a surrogate and datasets.
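A structural sketch of the lazy-construction pattern, using an assumed, user-supplied loader rather than real data: the get_surrogate_and_datasets callable is only invoked once a runner is actually needed, so it is left unimplemented here.
```python
from ax.benchmark.problems.surrogate import SOOSurrogateBenchmarkProblem, SurrogateMetric
from ax.core.objective import Objective
from ax.core.optimization_config import OptimizationConfig
from ax.core.parameter import ParameterType, RangeParameter
from ax.core.search_space import SearchSpace


def load_surrogate_and_datasets():
    # Hypothetical user code: download data, fit a Surrogate, and return
    # (surrogate, datasets). Only invoked when the runner is first needed,
    # so this sketch deliberately leaves it unimplemented.
    raise NotImplementedError


# A one-parameter search space and a single minimization objective (illustrative).
search_space = SearchSpace(
    parameters=[
        RangeParameter(
            name="x", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0
        )
    ]
)
optimization_config = OptimizationConfig(
    objective=Objective(
        metric=SurrogateMetric(name="objective", lower_is_better=True), minimize=True
    )
)

problem = SOOSurrogateBenchmarkProblem(
    name="surrogate_problem_sketch",  # illustrative
    search_space=search_space,
    optimization_config=optimization_config,
    num_trials=50,
    infer_noise=True,
    optimal_value=0.0,  # illustrative
    metric_names=["objective"],
    get_surrogate_and_datasets=load_surrogate_and_datasets,
)
```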
- class ax.benchmark.problems.surrogate.SurrogateMetric(name: str, lower_is_better: bool, infer_noise: bool = True)[source]¶
Bases: Metric
- class ax.benchmark.problems.surrogate.SurrogateRunner(name: str, surrogate: Surrogate, datasets: List[SupervisedDataset], search_space: SearchSpace, metric_names: List[str])[source]¶
Bases: Runner
- classmethod deserialize_init_args(args: Dict[str, Any]) Dict[str, Any] [source]¶
Given a dictionary, deserialize the properties needed to initialize the object. Used for storage.
- poll_trial_status(trials: Iterable[BaseTrial]) Dict[TrialStatus, Set[int]] [source]¶
Checks the status of any non-terminal trials and returns their indices as a mapping from TrialStatus to a set of indices. Required for runners used with the Ax Scheduler.
NOTE: Does not need to handle waiting between polling calls while trials are running; this function should just perform a single poll.
- Parameters:
trials – Trials to poll.
- Returns:
A dictionary mapping TrialStatus to a set of trial indices that have the respective status at the time of polling. Trials that already have a terminal status (ABANDONED, FAILED, COMPLETED) at the time of polling do not need to be included, but may be.
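A minimal illustration of the expected return shape (trial indices are illustrative only):
```python
from ax.core.base_trial import TrialStatus

# Trials 0 and 1 have finished; trial 2 is still running.
status_map = {
    TrialStatus.COMPLETED: {0, 1},
    TrialStatus.RUNNING: {2},
}
```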
- run(trial: BaseTrial) Dict[str, Any] [source]¶
Deploys a trial based on custom runner subclass implementation.
- Parameters:
trial – The trial to deploy.
- Returns:
Dict of run metadata from the deployment process.
- classmethod serialize_init_args(obj: Any) Dict[str, Any] [source]¶
Serialize the properties needed to initialize the runner. Used for storage.
WARNING: Because of issues with consistently saving and loading BoTorch and GPyTorch modules, the SurrogateRunner cannot be serialized at this time. At load time, the runner will be replaced with a SyntheticRunner.
Benchmark Problems Mixed Integer Synthetic¶
Mixed integer extensions of some common synthetic test functions. These are adapted from [Daulton2022bopr].
References
S. Daulton, X. Wan, D. Eriksson, M. Balandat, M. A. Osborne, E. Bakshy. Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization. Advances in Neural Information Processing Systems 35, 2022.
- ax.benchmark.problems.synthetic.discretized.mixed_integer.get_discrete_ackley(num_trials: int = 50, infer_noise: bool = True, bounds: Optional[List[Tuple[float, float]]] = None) BenchmarkProblem [source]¶
13D Ackley problem where the first 10 dimensions are discretized.
This also restricts the Ackley evaluation bounds to [0, 1].
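For example (the default 50-trial budget is used):
```python
from ax.benchmark.problems.synthetic.discretized.mixed_integer import get_discrete_ackley

# 13D mixed-integer Ackley benchmark problem with default settings.
discrete_ackley = get_discrete_ackley()
```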
Benchmark Problems Jenatton¶
- ax.benchmark.problems.synthetic.hss.jenatton.get_jenatton_benchmark_problem(num_trials: int = 50, infer_noise: bool = True) SingleObjectiveBenchmarkProblem [source]¶
Benchmark Problems PyTorchCNN¶
- class ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNBenchmarkProblem(optimal_value: float, *, name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, runner: Runner, num_trials: int, infer_noise: bool, tracking_metrics: Optional[List[Metric]] = None)[source]¶
- class ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNMetric(infer_noise: bool = True)[source]¶
Bases: Metric
- class ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNRunner(name: str, train_set: Dataset, test_set: Dataset)[source]¶
Bases: Runner
- class CNN[source]¶
Bases: Module
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- poll_trial_status(trials: Iterable[BaseTrial]) Dict[TrialStatus, Set[int]] [source]¶
Checks the status of any non-terminal trials and returns their indices as a mapping from TrialStatus to a set of indices. Required for runners used with the Ax Scheduler.
NOTE: Does not need to handle waiting between polling calls while trials are running; this function should just perform a single poll.
- Parameters:
trials – Trials to poll.
- Returns:
A dictionary mapping TrialStatus to a set of trial indices that have the respective status at the time of polling. Trials that already have a terminal status (ABANDONED, FAILED, COMPLETED) at the time of polling do not need to be included, but may be.
Benchmark Problems PyTorchCNN TorchVision¶
- class ax.benchmark.problems.hpo.torchvision.PyTorchCNNTorchvisionBenchmarkProblem(optimal_value: float, *, name: str, search_space: SearchSpace, optimization_config: OptimizationConfig, runner: Runner, num_trials: int, infer_noise: bool, tracking_metrics: Optional[List[Metric]] = None)[source]¶
Bases: PyTorchCNNBenchmarkProblem
- class ax.benchmark.problems.hpo.torchvision.PyTorchCNNTorchvisionRunner(name: str, train_set: Dataset, test_set: Dataset)[source]¶
Bases: PyTorchCNNRunner
A subclass to aid in serialization. This allows us to save only the name of the dataset and reload it from TorchVision at deserialization time.