ax.benchmark¶
Benchmark¶
Benchmark Method¶
class ax.benchmark.benchmark_method.BenchmarkMethod(name: str, generation_strategy: ax.modelbridge.generation_strategy.GenerationStrategy, scheduler_options: ax.service.utils.scheduler_options.SchedulerOptions)[source]¶
Bases: ax.utils.common.base.Base
Benchmark method, represented in terms of an Ax generation strategy (which tells us which models to use and when) and scheduler options (which give us extra execution information such as maximum parallelism, early-stopping configuration, etc.).
generation_strategy: ax.modelbridge.generation_strategy.GenerationStrategy¶
scheduler_options: ax.service.utils.scheduler_options.SchedulerOptions¶
Benchmark Problem¶
class ax.benchmark.benchmark_problem.BenchmarkProblem(name: str, search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig, runner: ax.core.runner.Runner)[source]¶
Bases: ax.utils.common.base.Base
Benchmark problem, represented in terms of an Ax search space, optimization config, and runner.
classmethod from_botorch(test_problem: botorch.test_functions.base.BaseTestProblem) → ax.benchmark.benchmark_problem.BenchmarkProblem[source]¶
Create a BenchmarkProblem from a BoTorch BaseTestProblem using specialized Metrics and Runners. The test problem’s result will be computed on the Runner and retrieved by the Metric.
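For intuition, the BoTorch test problems wrapped here are plain deterministic functions of the parameters. A pure-NumPy sketch of one such synthetic objective (Branin) — this is only the underlying function, not the Ax/BoTorch wrapping API:

```python
import numpy as np

def branin(x1: float, x2: float) -> float:
    """Branin synthetic test function; its known global minimum is ~0.397887."""
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return float(a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s)

# (pi, 2.275) is one of Branin's three global minimizers.
print(round(branin(np.pi, 2.275), 6))  # 0.397887
```

In the benchmark suite, a Runner would evaluate such a function for each trial's parameters and a Metric would fetch the resulting value.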
optimization_config: ax.core.optimization_config.OptimizationConfig¶
runner: ax.core.runner.Runner¶
search_space: ax.core.search_space.SearchSpace¶
class ax.benchmark.benchmark_problem.MultiObjectiveBenchmarkProblem(name: str, search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig, runner: ax.core.runner.Runner, maximum_hypervolume: float, reference_point: List[float])[source]¶
Bases: ax.benchmark.benchmark_problem.BenchmarkProblem
A BenchmarkProblem that supports multiple objectives. Rather than knowing each objective’s optimal value, we track a known maximum hypervolume computed from a given reference point.
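To make "hypervolume computed from a given reference point" concrete: for two maximized objectives, the hypervolume is the area dominated by the Pareto front and bounded below by the reference point. A pure-NumPy sketch of the 2-D case (Ax/BoTorch use their own, more general implementation):

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """Area dominated by a 2-objective (maximization) Pareto front relative
    to a reference point, computed as a sum of disjoint rectangles."""
    pts = front[(front > ref).all(axis=1)]  # keep points that dominate ref
    pts = pts[np.argsort(-pts[:, 0])]       # sweep first objective downward
    hv, prev_y = 0.0, float(ref[1])
    for x, y in pts:
        if y > prev_y:                      # skip dominated points
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return float(hv)

front = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
print(hypervolume_2d(front, np.array([0.0, 0.0])))  # 6.0
```

A method's multi-objective performance is then tracked as the fraction of the problem's known maximum hypervolume it attains.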
classmethod from_botorch_multi_objective(test_problem: botorch.test_functions.base.MultiObjectiveTestProblem) → ax.benchmark.benchmark_problem.MultiObjectiveBenchmarkProblem[source]¶
Create a MultiObjectiveBenchmarkProblem from a BoTorch MultiObjectiveTestProblem using specialized Metrics and Runners. The test problem’s result will be computed on the Runner once per trial and each Metric will retrieve its own result by index.
class ax.benchmark.benchmark_problem.SingleObjectiveBenchmarkProblem(name: str, search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig, runner: ax.core.runner.Runner, optimal_value: float)[source]¶
Bases: ax.benchmark.benchmark_problem.BenchmarkProblem
The most basic BenchmarkProblem, with a single objective and a known optimal value.
classmethod from_botorch_synthetic(test_problem: botorch.test_functions.synthetic.SyntheticTestFunction) → ax.benchmark.benchmark_problem.SingleObjectiveBenchmarkProblem[source]¶
Create a SingleObjectiveBenchmarkProblem from a BoTorch SyntheticTestFunction using specialized Metrics and Runners. The test problem’s result will be computed on the Runner and retrieved by the Metric.
Benchmark Result¶
class ax.benchmark.benchmark_result.AggregatedBenchmarkResult(name: str, experiments: List[ax.core.experiment.Experiment], optimization_trace: pandas.DataFrame, fit_time: Tuple[float, float], gen_time: Tuple[float, float])[source]¶
Bases: ax.utils.common.base.Base
The result of a benchmark test, aggregated over a series of replications. Scalar data present in each BenchmarkResult is represented here as (mean, sem) pairs. More information will be added to the AggregatedBenchmarkResult as the suite develops.
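The (mean, sem) aggregation across replications can be sketched as follows, assuming the usual SEM definition (sample standard deviation over the square root of the replication count); Ax's exact aggregation code may differ:

```python
import numpy as np

# Suppose 4 replications each produced a best-seen-so-far optimization
# trace of length 5 (one entry per completed trial).
traces = np.array([
    [10.0, 8.0, 8.0, 7.0, 7.0],
    [ 9.0, 9.0, 7.5, 7.5, 6.5],
    [11.0, 8.5, 8.5, 8.0, 7.0],
    [10.0, 9.0, 8.0, 7.0, 6.5],
])

# Per-step mean and standard error of the mean across replications.
mean = traces.mean(axis=0)
sem = traces.std(axis=0, ddof=1) / np.sqrt(traces.shape[0])
print(float(mean[0]), round(float(sem[0]), 3))  # 10.0 0.408
```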
experiments: List[ax.core.experiment.Experiment]¶
classmethod from_benchmark_results(results: List[ax.benchmark.benchmark_result.BenchmarkResult]) → ax.benchmark.benchmark_result.AggregatedBenchmarkResult[source]¶
optimization_trace: pandas.DataFrame¶
class ax.benchmark.benchmark_result.BenchmarkResult(name: str, experiment: ax.core.experiment.Experiment, optimization_trace: numpy.ndarray, fit_time: float, gen_time: float)[source]¶
Bases: ax.utils.common.base.Base
The result of a single optimization loop from one (BenchmarkProblem, BenchmarkMethod) pair. More information will be added to the BenchmarkResult as the suite develops.
experiment: ax.core.experiment.Experiment¶
optimization_trace: numpy.ndarray¶
class ax.benchmark.benchmark_result.ScoredBenchmarkResult(name: str, experiments: List[ax.core.experiment.Experiment], optimization_trace: pandas.DataFrame, fit_time: Tuple[float, float], gen_time: Tuple[float, float], baseline_result: ax.benchmark.benchmark_result.AggregatedBenchmarkResult, score: numpy.ndarray)[source]¶
Bases: ax.benchmark.benchmark_result.AggregatedBenchmarkResult
An AggregatedBenchmarkResult normalized against some baseline method (for the same problem), typically Sobol. The score is calculated such that 0 corresponds to performance equivalent to the baseline and 100 indicates that the true optimum was found.
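The stated endpoints (0 = baseline, 100 = optimum) suggest a linear rescaling of the optimization trace. A sketch under that assumption, for a minimization problem — not necessarily Ax's exact formula:

```python
import numpy as np

def score_trace(result: np.ndarray, baseline: np.ndarray, optimum: float) -> np.ndarray:
    """Linear rescaling (minimization): matching the baseline scores 0,
    reaching the true optimum scores 100."""
    return 100.0 * (baseline - result) / (baseline - optimum)

baseline = np.array([10.0, 8.0, 6.0])  # baseline method's best-so-far trace
result = np.array([10.0, 5.0, 2.0])    # candidate method's best-so-far trace
scores = score_trace(result, baseline, optimum=2.0)
print(scores)  # 0 at step 1, 50 at step 2, 100 at step 3
```

Scores above 100 or below 0 are possible in this scheme when a method beats the known optimum's trace position or does worse than the baseline at a given step.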
baseline_result: ax.benchmark.benchmark_result.AggregatedBenchmarkResult¶
classmethod from_result_and_baseline(aggregated_result: ax.benchmark.benchmark_result.AggregatedBenchmarkResult, baseline_result: ax.benchmark.benchmark_result.AggregatedBenchmarkResult, optimum: float) → ax.benchmark.benchmark_result.ScoredBenchmarkResult[source]¶
score: numpy.ndarray¶
Benchmark¶
Benchmark Methods Modular BoTorch¶
Benchmark Methods SAASBO¶
Benchmark Methods Choose Generation Strategy¶
Benchmark Problems Registry¶
class ax.benchmark.problems.registry.BenchmarkProblemRegistryEntry(factory_fn: Callable[…, ax.benchmark.benchmark_problem.BenchmarkProblem], factory_kwargs: Dict[str, Any], baseline_results_path: str)[source]¶
Bases: object
factory_fn: Callable[[…], ax.benchmark.benchmark_problem.BenchmarkProblem]¶
ax.benchmark.problems.registry.get_problem_and_baseline(problem_name: str) → Tuple[ax.benchmark.benchmark_problem.BenchmarkProblem, ax.benchmark.benchmark_result.AggregatedBenchmarkResult][source]¶
Benchmark Problems High Dimensional Embedding¶
ax.benchmark.problems.hd_embedding.embed_higher_dimension(problem: ax.benchmark.benchmark_problem.BenchmarkProblem, total_dimensionality: int) → ax.benchmark.benchmark_problem.BenchmarkProblem[source]¶
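The idea behind this embedding is to pad a low-dimensional test problem with inactive parameters so it can be posed in a higher-dimensional search space. A minimal sketch of the concept, using plain functions rather than Ax BenchmarkProblem objects:

```python
import numpy as np

def embed_higher_dim(f, low_dim: int, total_dim: int):
    """Wrap a low_dim-input objective so it accepts total_dim inputs,
    ignoring the extra coordinates (they become inactive dimensions)."""
    assert total_dim >= low_dim

    def wrapped(x: np.ndarray) -> float:
        return f(x[:low_dim])  # the padded dimensions have no effect

    return wrapped

def sphere(x: np.ndarray) -> float:
    return float(np.sum(x ** 2))

sphere_50d = embed_higher_dim(sphere, low_dim=2, total_dim=50)
x = np.zeros(50)
x[:2] = [1.0, 2.0]
x[10] = 99.0  # an inactive dimension: changing it does not change the value
print(sphere_50d(x))  # 5.0
```

Such embedded problems are useful for benchmarking high-dimensional optimization methods that must discover which dimensions actually matter.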
Benchmark Problems PyTorchCNN¶
class ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNBenchmarkProblem(name: str, search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig, runner: ax.core.runner.Runner, optimal_value: float)[source]¶
Bases: ax.benchmark.benchmark_problem.SingleObjectiveBenchmarkProblem
classmethod from_datasets(name: str, train_set: torch.utils.data.dataset.Dataset, test_set: torch.utils.data.dataset.Dataset) → ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNBenchmarkProblem[source]¶
optimization_config: OptimizationConfig¶
runner: Runner¶
search_space: SearchSpace¶
class ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNMetric[source]¶
Bases: ax.core.metric.Metric
fetch_trial_data(trial: ax.core.base_trial.BaseTrial, **kwargs) → ax.core.data.Data[source]¶
Fetch data for one trial.
class ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNRunner(name: str, train_set: torch.utils.data.dataset.Dataset, test_set: torch.utils.data.dataset.Dataset)[source]¶
Bases: ax.core.runner.Runner
class CNN[source]¶
Bases: torch.nn.modules.module.Module
forward(x)[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
poll_trial_status(trials: Iterable[ax.core.base_trial.BaseTrial]) → Dict[ax.core.base_trial.TrialStatus, Set[int]][source]¶
Checks the status of any non-terminal trials and returns their indices as a mapping from TrialStatus to a set of indices. Required for runners used with the Ax Scheduler.
Note: This function does not need to handle waiting between polling calls while trials are running; it should just perform a single poll.
Parameters: trials – Trials to poll.
Returns: A dictionary mapping TrialStatus to a set of trial indices that have the respective status at the time of the polling. This does not need to include trials that already have a terminal (ABANDONED, FAILED, COMPLETED) status at the time of polling (but it may).
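The polling contract can be illustrated with a toy implementation that checks a hypothetical external job store. Here `TrialStatus` is a stand-in enum (the real one lives in ax.core.base_trial) and trials are simplified to bare indices:

```python
from enum import Enum
from typing import Dict, Iterable, Set

class TrialStatus(Enum):  # stand-in for ax.core.base_trial.TrialStatus
    RUNNING = 0
    COMPLETED = 1

# Hypothetical external job store: trial index -> finished flag.
JOB_DONE = {0: True, 1: False, 2: True}

def poll_trial_status(trial_indices: Iterable[int]) -> Dict[TrialStatus, Set[int]]:
    """Perform a single poll: bucket trial indices by their current status."""
    out: Dict[TrialStatus, Set[int]] = {
        TrialStatus.RUNNING: set(),
        TrialStatus.COMPLETED: set(),
    }
    for i in trial_indices:
        status = TrialStatus.COMPLETED if JOB_DONE[i] else TrialStatus.RUNNING
        out[status].add(i)
    return out

statuses = poll_trial_status([0, 1, 2])
```

The Scheduler handles the wait-and-retry loop itself, which is why a single poll is all a runner must provide.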
run(trial: ax.core.base_trial.BaseTrial) → Dict[str, Any][source]¶
Deploys a trial based on a custom runner subclass implementation.
Parameters: trial – The trial to deploy.
Returns: Dict of run metadata from the deployment process.
Benchmark Problems PyTorchCNN TorchVision¶
class ax.benchmark.problems.hpo.torchvision.PyTorchCNNTorchvisionBenchmarkProblem(name: str, search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig, runner: ax.core.runner.Runner, optimal_value: float)[source]¶
Bases: ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNBenchmarkProblem
classmethod from_dataset_name(name: str) → ax.benchmark.problems.hpo.torchvision.PyTorchCNNTorchvisionBenchmarkProblem[source]¶
optimization_config: OptimizationConfig¶
runner: Runner¶
search_space: SearchSpace¶
class ax.benchmark.problems.hpo.torchvision.PyTorchCNNTorchvisionRunner(name: str, train_set: torch.utils.data.dataset.Dataset, test_set: torch.utils.data.dataset.Dataset)[source]¶
Bases: ax.benchmark.problems.hpo.pytorch_cnn.PyTorchCNNRunner
A subclass to aid in serialization. This allows us to save only the name of the dataset and reload it from TorchVision at deserialization time.
classmethod deserialize_init_args(args: Dict[str, Any]) → Dict[str, Any][source]¶
Given a dictionary, deserialize the properties needed to initialize the runner. Used for storage.
classmethod serialize_init_args(runner: ax.core.runner.Runner) → Dict[str, Any][source]¶
Serialize the properties needed to initialize the runner. Used for storage.
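The name-only serialization scheme described above can be sketched as follows; the class, registry, and helper names here are hypothetical stand-ins, not Ax code:

```python
from typing import Any, Dict

# Hypothetical stand-in for "reload it from TorchVision at deserialization time".
DATASETS = {"MNIST": "<train/test tensors for MNIST>"}

class NameOnlyRunner:
    def __init__(self, name: str, data: Any) -> None:
        self.name, self.data = name, data

    @classmethod
    def serialize_init_args(cls, runner: "NameOnlyRunner") -> Dict[str, Any]:
        # Persist only the dataset name, not the (large) dataset itself.
        return {"name": runner.name}

    @classmethod
    def deserialize_init_args(cls, args: Dict[str, Any]) -> Dict[str, Any]:
        # Rebuild the heavy objects from the registry at load time.
        return {"name": args["name"], "data": DATASETS[args["name"]]}

runner = NameOnlyRunner("MNIST", DATASETS["MNIST"])
stored = NameOnlyRunner.serialize_init_args(runner)
restored = NameOnlyRunner(**NameOnlyRunner.deserialize_init_args(stored))
print(stored)  # {'name': 'MNIST'}
```

Storing only the name keeps serialized experiments small and avoids pickling entire datasets, at the cost of requiring the dataset to be re-downloadable at load time.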