ax.service

Ax Client

Managed Loop

class ax.service.managed_loop.OptimizationLoop(experiment: Experiment, evaluation_function: Union[Callable[[Dict[str, Union[None, str, bool, float, int]]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]], Callable[[Dict[str, Union[None, str, bool, float, int]], Optional[float]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]]], total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, wait_time: int = 0, run_async: bool = False, generation_strategy: Optional[GenerationStrategy] = None)[source]

Bases: object

Managed optimization loop, in which Ax oversees deployment of trials and gathering data.

full_run() OptimizationLoop[source]

Runs the full optimization loop as defined in the provided optimization plan.

get_best_point() Tuple[Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]][source]

Obtains the best point encountered in the course of this optimization.

get_current_model() Optional[ModelBridge][source]

Obtain the most recently used model in optimization.

run_trial() None[source]

Run a single step of the optimization plan.

static with_evaluation_function(parameters: List[Dict[str, Union[None, str, bool, float, int, Sequence[Union[None, str, bool, float, int]], Dict[str, List[str]]]]], evaluation_function: Union[Callable[[Dict[str, Union[None, str, bool, float, int]]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]], Callable[[Dict[str, Union[None, str, bool, float, int]], Optional[float]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None, generation_strategy: Optional[GenerationStrategy] = None) OptimizationLoop[source]

Constructs a synchronous OptimizationLoop using an evaluation function.

classmethod with_runners_and_metrics(parameters: List[Dict[str, Union[None, str, bool, float, int, Sequence[Union[None, str, bool, float, int]], Dict[str, List[str]]]]], path_to_runner: str, paths_to_metrics: List[str], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None) OptimizationLoop[source]

Constructs an asynchronous OptimizationLoop using Ax runners and metrics.

ax.service.managed_loop.optimize(parameters: List[Dict[str, Union[None, str, bool, float, int, Sequence[Union[None, str, bool, float, int]], Dict[str, List[str]]]]], evaluation_function: Union[Callable[[Dict[str, Union[None, str, bool, float, int]]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]], Callable[[Dict[str, Union[None, str, bool, float, int]], Optional[float]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, generation_strategy: Optional[GenerationStrategy] = None) Tuple[Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]], Experiment, Optional[ModelBridge]][source]

Construct and run a full optimization loop.
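As a rough illustration of the argument shapes optimize expects, the sketch below builds a parameters list (in Ax's dict representation) and an evaluation function that returns a {metric_name: (mean, SEM)} dict. The parameter names x1/x2 and the metric name "objective" are invented for this example; the optimize call itself is shown commented out, since running it requires a working Ax installation.

```python
# Hypothetical inputs for ax.service.managed_loop.optimize (names are illustrative).
parameters = [
    {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
    {"name": "x2", "type": "range", "bounds": [0.0, 15.0]},
]

def evaluation_function(parameterization, weight=None):
    """Maps a parameterization dict to {metric_name: (mean, SEM)}."""
    x1, x2 = parameterization["x1"], parameterization["x2"]
    # Toy quadratic objective; SEM of 0.0 marks the observation as noiseless.
    return {"objective": ((x1 - 1.0) ** 2 + (x2 - 2.0) ** 2, 0.0)}

# With Ax installed, the full loop would be run along these lines:
# from ax.service.managed_loop import optimize
# best_parameters, values, experiment, model = optimize(
#     parameters=parameters,
#     evaluation_function=evaluation_function,
#     objective_name="objective",
#     minimize=True,
#     total_trials=20,
# )

print(evaluation_function({"x1": 1.0, "x2": 2.0}))
```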

Interactive Loop

Scheduler

class ax.service.utils.scheduler_options.SchedulerOptions(max_pending_trials: int = 10, trial_type: ~ax.service.utils.scheduler_options.TrialType = TrialType.TRIAL, batch_size: ~typing.Optional[int] = None, total_trials: ~typing.Optional[int] = None, tolerated_trial_failure_rate: float = 0.5, min_failed_trials_for_failure_rate_check: int = 5, log_filepath: ~typing.Optional[str] = None, logging_level: int = 20, ttl_seconds_for_trials: ~typing.Optional[int] = None, init_seconds_between_polls: ~typing.Optional[int] = 1, min_seconds_before_poll: float = 1.0, seconds_between_polls_backoff_factor: float = 1.5, timeout_hours: ~typing.Optional[float] = None, run_trials_in_batches: bool = False, debug_log_run_metadata: bool = False, early_stopping_strategy: ~typing.Optional[~ax.early_stopping.strategies.base.BaseEarlyStoppingStrategy] = None, global_stopping_strategy: ~typing.Optional[~ax.global_stopping.strategies.base.BaseGlobalStoppingStrategy] = None, suppress_storage_errors_after_retries: bool = False, wait_for_running_trials: bool = True, fetch_kwargs: ~typing.Dict[str, ~typing.Any] = <factory>, validate_metrics: bool = True, status_quo_weight: float = 0.0, enforce_immutable_search_space_and_opt_config: bool = True)[source]

Bases: object

Settings for a scheduler instance.

max_pending_trials

Maximum number of pending trials the scheduler can have STAGED or RUNNING at once; required. If looking to use Runner.poll_available_capacity as the primary guide for how many trials should be pending at a given time, set this limit to a high number, so that it serves only as an upper bound that must not be exceeded.

Type:

int

trial_type

Type of trials (1-arm Trial or multi-arm Batch Trial) that will be deployed using the scheduler. Defaults to 1-arm Trial. NOTE: use BatchTrial only if multiple arms need to be evaluated together, e.g. in an A/B test influenced by data nonstationarity. For cases where deploying multiple arms at once is beneficial but the trials are evaluated independently, implement the run_trials method in a Scheduler subclass to deploy multiple 1-arm trials at the same time.

Type:

ax.service.utils.scheduler_options.TrialType

batch_size

If using BatchTrial, the number of arms to be generated and deployed per trial.

Type:

Optional[int]

total_trials

Limit on the number of trials a given Scheduler should run. If no stopping criteria are implemented on a given scheduler, exhaustion of this number of trials will be used as the default stopping criterion in Scheduler.run_all_trials. Required to be non-null when using Scheduler.run_all_trials (not required for Scheduler.run_n_trials).

Type:

Optional[int]

tolerated_trial_failure_rate

Fraction of trials in this optimization that are allowed to fail without the whole optimization ending. Expects a value between 0 and 1. NOTE: Failure rate checks begin once min_failed_trials_for_failure_rate_check trials have failed; after that point, if the ratio of failed trials to total trials run so far exceeds the failure rate, the optimization will halt.

Type:

float

min_failed_trials_for_failure_rate_check

The minimum number of trials that must fail in the Scheduler before the failure rate starts being checked.

Type:

int
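Taken together, tolerated_trial_failure_rate and min_failed_trials_for_failure_rate_check describe a simple gate-then-ratio check. A minimal sketch of the documented behavior in plain Python (this mirrors the description above, not Ax's actual implementation):

```python
def should_halt(num_failed: int, num_ran: int,
                tolerated_trial_failure_rate: float = 0.5,
                min_failed_trials_for_failure_rate_check: int = 5) -> bool:
    """Returns True when the optimization should stop due to excessive failures.

    The check only begins once enough trials have failed; after that, the
    optimization halts if failed / ran exceeds the tolerated rate.
    """
    if num_failed < min_failed_trials_for_failure_rate_check:
        return False
    return num_failed / num_ran > tolerated_trial_failure_rate

print(should_halt(num_failed=4, num_ran=5))   # below the minimum-failures gate
print(should_halt(num_failed=6, num_ran=10))  # gate passed; 0.6 > 0.5
```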

log_filepath

File, to which to write optimization logs.

Type:

Optional[str]

logging_level

Minimum level of logging statements to log, defaults to logging.INFO.

Type:

int

ttl_seconds_for_trials

Optional TTL for all trials created within this Scheduler, in seconds. Trials that remain RUNNING for more than their TTL seconds will be marked FAILED once the TTL elapses and may be re-suggested by the Ax optimization models.

Type:

Optional[int]

init_seconds_between_polls

Initial wait between rounds of polling, in seconds. Relevant if using the default wait-for-completed-runs functionality of the base Scheduler (if wait_for_completed_trials_and_report_results is not overridden). With the default waiting, every time a poll returns that no trial evaluations completed, the wait time will increase; once some completed trial evaluations are found, it will reset back to this value. Specify 0 to not introduce any wait between polls.

Type:

Optional[int]

min_seconds_before_poll

Minimum number of seconds between beginning to run a trial and the first poll to check trial status.

Type:

float

timeout_hours

Number of hours after which the optimization will abort.

Type:

Optional[float]

seconds_between_polls_backoff_factor

The rate at which the poll interval increases.

Type:

float
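Together, init_seconds_between_polls and seconds_between_polls_backoff_factor define a geometric backoff between polls. A sketch of the resulting wait schedule while no completed trials are found (illustrative only; per the description above, the base Scheduler resets the wait once completed trial evaluations appear):

```python
def poll_wait_schedule(init_seconds: float, backoff_factor: float, n_polls: int):
    """Wait times for n_polls consecutive polls that find no completed trials."""
    waits = []
    wait = init_seconds
    for _ in range(n_polls):
        waits.append(wait)
        wait *= backoff_factor  # geometric increase after each empty poll
    return waits

# Defaults: init_seconds_between_polls=1, seconds_between_polls_backoff_factor=1.5
print(poll_wait_schedule(1, 1.5, 5))
```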

run_trials_in_batches

If True and poll_available_capacity is implemented to return non-null results, trials will be dispatched in groups via run_trials instead of one-by-one via run_trial. This can save time, IO calls, or computation in cases where dispatching trials in groups is more efficient than sequential deployment. The size of each group will be determined as the minimum of self.poll_available_capacity() and the number of generator runs that the generation strategy is able to produce without more data or exceeding its allowed maximum parallelism limit.

Type:

bool

debug_log_run_metadata

Whether to log run_metadata for debugging purposes.

Type:

bool

early_stopping_strategy

A BaseEarlyStoppingStrategy that determines whether a trial should be stopped given the current state of the experiment. Used in should_stop_trials_early.

Type:

Optional[ax.early_stopping.strategies.base.BaseEarlyStoppingStrategy]

global_stopping_strategy

A BaseGlobalStoppingStrategy that determines whether the full optimization should be stopped or not.

Type:

Optional[ax.global_stopping.strategies.base.BaseGlobalStoppingStrategy]

suppress_storage_errors_after_retries

Whether to fully suppress SQL storage-related errors if encountered, after retrying the call multiple times. Only use if SQL storage is not important for the given use case, since this will only log, but not raise, an exception if it’s encountered while saving to DB or loading from it.

Type:

bool

wait_for_running_trials

Whether the scheduler should wait for running trials or exit.

Type:

bool

fetch_kwargs

Kwargs to be used when fetching data.

Type:

Dict[str, Any]

validate_metrics

Whether to raise an error if there is a problem with the metrics attached to the experiment.

Type:

bool

status_quo_weight

The weight of the status quo arm. This is only used if the scheduler is using a BatchTrial. This requires that the status_quo be set on the experiment.

Type:

float

enforce_immutable_search_space_and_opt_config

Whether to enforce that the search space and optimization config are immutable. If true, adds "immutable_search_space_and_opt_config": True to the experiment's properties.

Type:

bool

batch_size: Optional[int] = None
debug_log_run_metadata: bool = False
early_stopping_strategy: Optional[BaseEarlyStoppingStrategy] = None
enforce_immutable_search_space_and_opt_config: bool = True
fetch_kwargs: Dict[str, Any]
global_stopping_strategy: Optional[BaseGlobalStoppingStrategy] = None
init_seconds_between_polls: Optional[int] = 1
log_filepath: Optional[str] = None
logging_level: int = 20
max_pending_trials: int = 10
min_failed_trials_for_failure_rate_check: int = 5
min_seconds_before_poll: float = 1.0
run_trials_in_batches: bool = False
seconds_between_polls_backoff_factor: float = 1.5
status_quo_weight: float = 0.0
suppress_storage_errors_after_retries: bool = False
timeout_hours: Optional[float] = None
tolerated_trial_failure_rate: float = 0.5
total_trials: Optional[int] = None
trial_type: TrialType = TrialType.TRIAL
ttl_seconds_for_trials: Optional[int] = None
validate_metrics: bool = True
wait_for_running_trials: bool = True
class ax.service.utils.scheduler_options.TrialType(value)[source]

Bases: Enum

An enumeration.

BATCH_TRIAL = 1
TRIAL = 0

Utils

Best Point Identification

class ax.service.utils.best_point_mixin.BestPointMixin[source]

Bases: object

get_best_parameters(optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Optional[Tuple[Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Identifies the best parameterization tried in the experiment so far.

First attempts to do so with the model used in optimization and its corresponding predictions if available. Falls back to the best raw objective based on the data fetched from the experiment.

NOTE: TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

  • use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.

Returns:

Tuple of parameterization and model predictions for it.
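The TModelPredictArm structure referenced above is a plain tuple of dicts: metric means, then nested metric-to-metric covariances. A concrete instance with invented metric names ("loss", "latency") and invented numbers:

```python
# Hypothetical model predictions for two metrics; all values are illustrative.
means = {"loss": 0.12, "latency": 35.0}
covariances = {
    "loss": {"loss": 0.0004, "latency": -0.001},
    "latency": {"loss": -0.001, "latency": 2.5},
}
model_predictions = (means, covariances)

# Accessing the predicted mean and variance for a single metric:
loss_mean = model_predictions[0]["loss"]
loss_var = model_predictions[1]["loss"]["loss"]
print(loss_mean, loss_var)
```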

abstract get_best_trial(optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Optional[Tuple[int, Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Identifies the best parameterization tried in the experiment so far.

First attempts to do so with the model used in optimization and its corresponding predictions if available. Falls back to the best raw objective based on the data fetched from the experiment.

NOTE: TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

  • use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.

Returns:

Tuple of trial index, parameterization and model predictions for it.

abstract get_hypervolume(optimization_config: Optional[MultiObjectiveOptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) float[source]

Calculate the hypervolume of a Pareto frontier based on either the posterior means of given observation features or observed data.

Parameters:
  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

  • use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.

abstract get_pareto_optimal_parameters(optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Optional[Dict[int, Tuple[Dict[str, Union[None, str, bool, float, int]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Identifies the best parameterizations tried in the experiment so far, using model predictions if use_model_predictions is true and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.

NOTE: The format of this method's output is as follows: { trial_index -> (parameterization, (means, covariances)) }, where means are a dictionary of form { metric_name -> metric_mean } and covariances are a nested dictionary of form { one_metric_name -> { another_metric_name: covariance } }.

Parameters:
  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

  • use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.

Returns:

None if it was not possible to extract the Pareto frontier; otherwise, a mapping from trial index to a tuple of: the parameterization of the arm in that trial, and a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).

abstract get_trace() List[float][source]

Get the optimization trace of the given experiment.

The output is equivalent to calling _get_hypervolume or _get_best_trial repeatedly, with an increasing sequence of trial_indices and with use_model_predictions = False, though this does it more efficiently.

Parameters:
  • experiment – The experiment to get the trace for.

  • optimization_config – An optional optimization config to use for computing the trace. This allows computing the traces under different objectives or constraints without having to modify the experiment.

Returns:

A list of observed hypervolumes or best values.

abstract get_trace_by_progression(bins: Optional[List[float]] = None, final_progression_only: bool = False) Tuple[List[float], List[float]][source]

Get the optimization trace with respect to trial progressions instead of trial_indices (the behavior used in get_trace). Note that this method does not take the parallelism of trials into account; it essentially assumes trials are run one after another, in the sense that the total number of progressions "used" at the end of trial k is taken to be the cumulative progressions "used" in trials 0,…,k. This method assumes that the final value of a particular trial is used, and does not take the best value of a trial over its progressions.

The best observed value is computed at each value in bins (see below for details). If bins is not supplied, the method defaults to a heuristic of approximately NUM_BINS_PER_TRIAL per trial, where each trial is assumed to run until maximum progression (inferred from the data).

Parameters:
  • experiment – The experiment to get the trace for.

  • optimization_config – An optional optimization config to use for computing the trace. This allows computing the traces under different objectives or constraints without having to modify the experiment.

  • bins – A list of progression values at which to calculate the best observed value. The best observed value at bins[i] is defined as the value observed in trials 0,…,j, where j is the largest trial index such that the total progression in trials 0,…,j is less than bins[i].

  • final_progression_only – If True, considers the value of the last step to be the value of the trial. If False, considers the best along the curve to be the value of the trial.

Returns:

A tuple containing (1) the list of observed hypervolumes or best values and (2) a list of associated x-values (i.e., progressions) useful for plotting.
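The binning rule described for bins can be sketched in plain Python (a minimal illustration, not Ax's implementation): the best observed value at bins[i] comes from trials 0,…,j, where j is the largest trial whose cumulative progression stays below bins[i]. The sketch assumes minimization and per-trial (progression_used, final_value) pairs.

```python
import itertools

def trace_by_progression(trials, bins):
    """trials: ordered list of (progression_used, final_value) per trial.
    Returns the best-so-far (minimization) value at each bin."""
    # Cumulative progression "used" through trials 0..k.
    cum = list(itertools.accumulate(p for p, _ in trials))
    trace = []
    for b in bins:
        # Keep values from trials whose cumulative progression is below the bin.
        eligible = [v for (_, v), c in zip(trials, cum) if c < b]
        trace.append(min(eligible) if eligible else float("nan"))
    return trace

trials = [(100, 0.9), (100, 0.5), (100, 0.7)]
print(trace_by_progression(trials, bins=[150, 250, 400]))
```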

ax.service.utils.best_point.extract_Y_from_data(experiment: Experiment, metric_names: List[str], data: Optional[Data] = None) Tuple[Tensor, Tensor][source]

Converts the experiment observation data into a tensor.

NOTE: This requires block design for observations. It will error out if any trial is missing data for any of the given metrics or if the data is missing the trial_index.

Parameters:
  • experiment – The experiment to extract the data from.

  • metric_names – List of metric names to extract data for.

  • data – An optional Data object to use instead of the experiment data. Note that the experiment must have a corresponding COMPLETED or EARLY_STOPPED trial for each trial_index in the data.

Returns:

A two-element Tuple containing a tensor of observed metrics and a tensor of trial_indices.

ax.service.utils.best_point.fill_missing_thresholds_from_nadir(experiment: Experiment, optimization_config: OptimizationConfig) List[ObjectiveThreshold][source]

Get the objective thresholds from the optimization config and fill the missing thresholds based on the nadir point.

Parameters:
  • experiment – The experiment, whose data is used to calculate the nadir point.

  • optimization_config – Optimization config to get the objective thresholds and the objective directions from.

Returns:

A list of objective thresholds, one for each objective in optimization config.

ax.service.utils.best_point.get_best_by_raw_objective(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.

TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of parameterization, and model predictions for it.

ax.service.utils.best_point.get_best_by_raw_objective_with_trial_index(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[int, Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.

TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of trial index, parameterization, and model predictions for it.

ax.service.utils.best_point.get_best_parameters(experiment: Experiment, models_enum: Type[ModelRegistryBase], optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Given an experiment, identifies the best arm.

First attempts to do so with the model used in optimization and its corresponding predictions, if available. Falls back to the best raw objective based on the data fetched from the experiment.

TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • models_enum – Registry of all models that may be in the experiment’s generation strategy.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of parameterization and model predictions for it.

ax.service.utils.best_point.get_best_parameters_from_model_predictions(experiment: Experiment, models_enum: Type[ModelRegistryBase], trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Given an experiment, returns the best predicted parameterization and the corresponding prediction, based on the most recent Trial with predictions. If no trials have predictions, returns None.

Only some models return predictions. For instance, GPEI does, while Sobol does not.

TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • models_enum – Registry of all models that may be in the experiment’s generation strategy.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of parameterization and model predictions for it.

ax.service.utils.best_point.get_best_parameters_from_model_predictions_with_trial_index(experiment: Experiment, models_enum: Type[ModelRegistryBase], optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[int, Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Given an experiment, returns the best predicted parameterization and the corresponding prediction, based on the most recent Trial with predictions. If no trials have predictions, returns None.

Only some models return predictions. For instance, GPEI does, while Sobol does not.

TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • models_enum – Registry of all models that may be in the experiment’s generation strategy.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of trial index, parameterization, and model predictions for it.

ax.service.utils.best_point.get_best_parameters_with_trial_index(experiment: Experiment, models_enum: Type[ModelRegistryBase], optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[int, Dict[str, Union[None, str, bool, float, int]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]

Given an experiment, identifies the best arm.

First attempts to do so with the model used in optimization and its corresponding predictions, if available. Falls back to the best raw objective based on the data fetched from the experiment.

TModelPredictArm is of the form:

({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • models_enum – Registry of all models that may be in the experiment’s generation strategy.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of trial index, parameterization, and model predictions for it.

ax.service.utils.best_point.get_best_raw_objective_point(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Tuple[Dict[str, Union[None, str, bool, float, int]], Dict[str, Tuple[float, float]]][source]
ax.service.utils.best_point.get_best_raw_objective_point_with_trial_index(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Tuple[int, Dict[str, Union[None, str, bool, float, int]], Dict[str, Tuple[float, float]]][source]

Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.

Parameters:
  • experiment – Experiment, on which to identify best raw objective arm.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.

Returns:

Tuple of parameterization and a mapping from metric name to a tuple of the corresponding objective mean and SEM.

ax.service.utils.best_point.get_pareto_optimal_parameters(experiment: Experiment, generation_strategy: GenerationStrategy, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Dict[int, Tuple[Dict[str, Union[None, str, bool, float, int]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]][source]

Identifies the best parameterizations tried in the experiment so far, using model predictions if use_model_predictions is true and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.

NOTE: The format of this method’s output is as follows: { trial_index -> (parameterization, (means, covariances)) }, where means is a dictionary of form { metric_name -> metric_mean } and covariances is a nested dictionary of form { one_metric_name -> { another_metric_name: covariance } }.

Parameters:
  • experiment – Experiment, from which to find Pareto-optimal arms.

  • generation_strategy – Generation strategy containing the modelbridge.

  • optimization_config – Optimization config to use in place of the one stored on the experiment.

  • trial_indices – Indices of trials for which to retrieve data. If None, data from all available trials will be retrieved.

  • use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.

Returns:

A mapping from trial index to a tuple of:

  • the parameterization of the arm in that trial,

  • a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).
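Hypothetical values illustrating the mapping described in the NOTE on this method (parameterizations, metric names, and numbers are invented for illustration):

```python
# Hypothetical return value of get_pareto_optimal_parameters; the shape is
# { trial_index -> (parameterization, (means, covariances)) }.
pareto = {
    0: (
        {"x1": 0.1, "x2": 0.9},                    # parameterization of the arm
        (
            {"a": 1.2, "b": 3.4},                  # metric means
            {                                      # nested covariance dictionary
                "a": {"a": 0.01, "b": 0.00},
                "b": {"a": 0.00, "b": 0.02},
            },
        ),
    ),
}

params, (means, covariances) = pareto[0]
```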

Instantiation

class ax.service.utils.instantiation.FixedFeatures(parameters: Dict[str, Union[None, str, bool, float, int]], trial_index: Optional[int] = None)[source]

Bases: object

Class for representing fixed features via the Service API.

parameters: Dict[str, Union[None, str, bool, float, int]]
trial_index: Optional[int] = None
class ax.service.utils.instantiation.InstantiationBase[source]

Bases: object

This is a lightweight stateless class that bundles together instantiation utils. It is used both on its own and as a mixin to AxClient, with the intent that these methods can be overridden by its subclasses for specific use cases.

static build_objective_threshold(objective: str, objective_properties: ObjectiveProperties) str[source]

Constructs a constraint string for an objective threshold, interpretable by make_experiment().

Parameters:
  • objective – Name of the objective

  • objective_properties – Object containing: minimize: whether this experiment represents a minimization problem; threshold: the bound in the objective’s threshold constraint.
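As a rough illustration of this threshold-to-string conversion, here is a plain-Python sketch; the class and helper below are stand-ins rather than Ax’s implementation, and the exact string format Ax emits is an assumption:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ObjectivePropertiesSketch:
    """Stand-in for ax.service.utils.instantiation.ObjectiveProperties."""
    minimize: bool
    threshold: Optional[float] = None


def build_objective_threshold_sketch(
    objective: str, props: ObjectivePropertiesSketch
) -> str:
    # A minimized objective is bounded from above; a maximized one from below.
    op = "<=" if props.minimize else ">="
    return f"{objective} {op} {props.threshold}"
```

For example, `build_objective_threshold_sketch("b", ObjectivePropertiesSketch(minimize=False, threshold=1.5))` yields a constraint string in the same form as the outcome_constraints accepted by make_experiment().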

classmethod build_objective_thresholds(objectives: Dict[str, ObjectiveProperties]) List[str][source]

Construct a list of constraint strings for objective thresholds, interpretable by make_experiment().

Parameters:

objectives – Mapping from the name of each objective to an object containing: minimize: whether this experiment represents a minimization problem; threshold: the bound in the objective’s threshold constraint.

static constraint_from_str(representation: str, parameters: Dict[str, Parameter]) ParameterConstraint[source]

Parse string representation of a parameter constraint.

classmethod make_experiment(parameters: List[Dict[str, Union[None, str, bool, float, int, Sequence[Union[None, str, bool, float, int]], Dict[str, List[str]]]]], name: Optional[str] = None, description: Optional[str] = None, owners: Optional[List[str]] = None, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, status_quo: Optional[Dict[str, Union[None, str, bool, float, int]]] = None, experiment_type: Optional[str] = None, tracking_metric_names: Optional[List[str]] = None, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None, objectives: Optional[Dict[str, str]] = None, objective_thresholds: Optional[List[str]] = None, support_intermediate_data: bool = False, immutable_search_space_and_opt_config: bool = True, is_test: bool = False) Experiment[source]

Instantiation wrapper that allows for Ax Experiment creation without importing or instantiating any Ax classes.

Parameters:
  • parameters – List of dictionaries representing parameters in the experiment search space. Required elements in the dictionaries are: 1. “name” (name of parameter, string), 2. “type” (type of parameter: “range”, “fixed”, or “choice”, string), and one of the following: 3a. “bounds” for range parameters (list of two values, lower bound first), 3b. “values” for choice parameters (list of values), or 3c. “value” for fixed parameters (single value). Optional elements are: 1. “log_scale” (for float-valued range parameters, bool), 2. “value_type” (to specify type that values of this parameter should take; expects “float”, “int”, “bool” or “str”), 3. “is_fidelity” (bool) and “target_value” (float) for fidelity parameters, 4. “is_ordered” (bool) for choice parameters, 5. “is_task” (bool) for task parameters, and 6. “digits” (int) for float-valued range parameters.

  • name – Name of the experiment to be created.

  • parameter_constraints – List of string representation of parameter constraints, such as “x3 >= x4” or “-x3 + 2*x4 - 3.5*x5 >= 2”. For the latter constraints, any number of arguments is accepted, and acceptable operators are “<=” and “>=”.

  • outcome_constraints – List of string representation of outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”

  • status_quo – Parameterization of the current state of the system. If set, this will be added to each trial to be evaluated alongside test configurations.

  • experiment_type – String indicating type of the experiment (e.g. name of a product in which it is used), if any.

  • tracking_metric_names – Names of additional tracking metrics not used for optimization.

  • objectives – Mapping from an objective name to “minimize” or “maximize” representing the direction for that objective.

  • objective_thresholds – A list of objective threshold constraints for multi-objective optimization, in the same string format as the outcome_constraints argument.

  • support_intermediate_data – Whether trials may report metrics results for incomplete runs.

  • immutable_search_space_and_opt_config – Whether it’s possible to update the search space and optimization config on this experiment after creation. Defaults to True. If set to True, we won’t store or load copies of the search space and optimization config on each generator run, which will improve storage performance.

  • is_test – Whether this experiment will be a test experiment (useful for marking test experiments in storage etc). Defaults to False.

  • metric_definitions – A mapping of metric names to extra kwargs to pass to that metric
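To ground the required and optional dictionary elements listed above, here is an illustrative parameters list; the parameter names, bounds, and values are made up:

```python
# Illustrative `parameters` argument for make_experiment, exercising
# "range", "choice", and "fixed" parameter types.
parameters = [
    {"name": "lr", "type": "range", "bounds": [1e-4, 0.1], "log_scale": True},
    {
        "name": "batch_size",
        "type": "choice",
        "values": [16, 32, 64],
        "is_ordered": True,
        "value_type": "int",
    },
    {"name": "optimizer", "type": "fixed", "value": "adam"},
]

# Minimal sanity check mirroring the required elements listed above.
for p in parameters:
    assert "name" in p and p["type"] in {"range", "choice", "fixed"}
```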

static make_fixed_observation_features(fixed_features: FixedFeatures) ObservationFeatures[source]

Construct ObservationFeatures from FixedFeatures.

Parameters:

fixed_features – The fixed features for generation.

Returns:

The new ObservationFeatures object.

classmethod make_objective_thresholds(objective_thresholds: List[str], status_quo_defined: bool, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) List[ObjectiveThreshold][source]
classmethod make_objectives(objectives: Dict[str, str], metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) List[Objective][source]
classmethod make_optimization_config(objectives: Dict[str, str], objective_thresholds: List[str], outcome_constraints: List[str], status_quo_defined: bool, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) OptimizationConfig[source]
classmethod make_optimization_config_from_properties(objectives: Optional[Dict[str, ObjectiveProperties]] = None, outcome_constraints: Optional[List[str]] = None, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None, status_quo_defined: bool = False) Optional[OptimizationConfig][source]

Makes optimization config based on ObjectiveProperties objects

Parameters:
  • objectives – Mapping from an objective name to object containing: minimize: Whether this experiment represents a minimization problem. threshold: The bound in the objective’s threshold constraint.

  • outcome_constraints – List of string representation of outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”

  • status_quo_defined – Whether the experiment has a status quo.

  • metric_definitions – A mapping of metric names to extra kwargs to pass to that metric

classmethod make_outcome_constraints(outcome_constraints: List[str], status_quo_defined: bool, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) List[OutcomeConstraint][source]
classmethod make_search_space(parameters: List[Dict[str, Union[None, str, bool, float, int, Sequence[Union[None, str, bool, float, int]], Dict[str, List[str]]]]], parameter_constraints: Optional[List[str]]) SearchSpace[source]
classmethod objective_threshold_constraint_from_str(representation: str, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) ObjectiveThreshold[source]
static optimization_config_from_objectives(objectives: List[Objective], objective_thresholds: List[ObjectiveThreshold], outcome_constraints: List[OutcomeConstraint]) OptimizationConfig[source]

Parse objectives and constraints to define optimization config.

The resulting optimization config will be a regular single-objective config if objectives is a list of one element, and a multi-objective config otherwise.

NOTE: If passing in multiple objectives, objective_thresholds must be a non-empty list defining constraints for each objective.

classmethod outcome_constraint_from_str(representation: str, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) OutcomeConstraint[source]

Parse string representation of an outcome constraint.
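To illustrate what parsing such a string involves, here is a toy parser for the simple absolute-bound form like “m1 <= 3”; it is not Ax’s implementation, which handles more cases (e.g. relative bounds such as “m1 <= 3%”):

```python
import re


def parse_outcome_constraint_sketch(representation: str):
    """Toy parser for outcome constraint strings like "m1 <= 3"."""
    match = re.fullmatch(
        r"\s*(\w+)\s*(<=|>=)\s*(-?\d+(?:\.\d+)?)\s*", representation
    )
    if match is None:
        raise ValueError(f"Cannot parse outcome constraint: {representation!r}")
    metric, op, bound = match.groups()
    return metric, op, float(bound)
```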

classmethod parameter_from_json(representation: Dict[str, Union[None, str, bool, float, int, Sequence[Union[None, str, bool, float, int]], Dict[str, List[str]]]]) Parameter[source]

Instantiate a parameter from JSON representation.

class ax.service.utils.instantiation.MetricObjective(value)[source]

Bases: Enum

An enumeration.

MAXIMIZE = 2
MINIMIZE = 1
class ax.service.utils.instantiation.ObjectiveProperties(minimize: bool, threshold: Optional[float] = None)[source]

Bases: object

Class that holds properties of objective functions. Can be used to define the objectives argument of ax_client.create_experiment, e.g.:

ax_client.create_experiment(
    name="moo_experiment",
    parameters=[...],
    objectives={
        # threshold arguments are optional
        "a": ObjectiveProperties(minimize=False, threshold=ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=ref_point[1]),
    },
)

Parameters:
  • minimize – Boolean indicating whether the objective is to be minimized or maximized.

  • threshold – Optional float representing the smallest objective value (resp. largest if minimize=True) that is considered valuable in the context of multi-objective optimization. In BoTorch and in the literature, this is also known as an element of the reference point vector that defines the hyper-volume of the Pareto front.

minimize: bool
threshold: Optional[float] = None
ax.service.utils.instantiation.logger: Logger = <Logger ax.service.utils.instantiation (INFO)>

Utilities for RESTful-like instantiation of Ax classes needed in AxClient.

Reporting

WithDBSettingsBase

EarlyStopping

ax.service.utils.early_stopping.get_early_stopping_metrics(experiment: Experiment, early_stopping_strategy: Optional[BaseEarlyStoppingStrategy]) List[str][source]

A helper function that returns a list of metric names on which a given early_stopping_strategy is operating.

ax.service.utils.early_stopping.should_stop_trials_early(early_stopping_strategy: Optional[BaseEarlyStoppingStrategy], trial_indices: Set[int], experiment: Experiment) Dict[int, Optional[str]][source]

Evaluate whether to early-stop running trials.

Parameters:
  • early_stopping_strategy – A BaseEarlyStoppingStrategy that determines whether a trial should be stopped given the state of an experiment.

  • trial_indices – Indices of trials to consider for early stopping.

  • experiment – The experiment containing the trials.

Returns:

A dictionary mapping trial indices that should be early stopped to (optional) messages with the associated reason.
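Hypothetical values illustrating the returned mapping (trial indices and messages are made up): only trials that should stop appear as keys, and the value is an optional human-readable reason.

```python
# Hypothetical return value of should_stop_trials_early.
should_stop = {
    2: "Objective plateaued after 10 epochs",   # stop, with a reason
    5: None,                                    # stop, no reason attached
}

for trial_index, reason in should_stop.items():
    print(f"stopping trial {trial_index}: {reason or 'no reason given'}")
```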