ax.service¶
Ax Client¶
Managed Loop¶
- class ax.service.managed_loop.OptimizationLoop(experiment: Experiment, evaluation_function: Union[Callable[[Dict[str, Optional[Union[str, bool, float, int]]]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]], Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]]], total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, wait_time: int = 0, run_async: bool = False, generation_strategy: Optional[GenerationStrategy] = None)[source]¶
Bases:
object
Managed optimization loop, in which Ax oversees deployment of trials and gathering data.
- full_run() OptimizationLoop [source]¶
Runs full optimization loop as defined in the provided optimization plan.
- get_best_point() Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]] [source]¶
Obtains the best point encountered in the course of this optimization.
- get_current_model() Optional[ModelBridge] [source]¶
Obtain the most recently used model in optimization.
- static with_evaluation_function(parameters: List[Dict[str, Union[str, bool, float, int, None, Sequence[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], evaluation_function: Union[Callable[[Dict[str, Optional[Union[str, bool, float, int]]]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]], Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None, generation_strategy: Optional[GenerationStrategy] = None) OptimizationLoop [source]¶
Constructs a synchronous OptimizationLoop using an evaluation function.
- classmethod with_runners_and_metrics(parameters: List[Dict[str, Union[str, bool, float, int, None, Sequence[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], path_to_runner: str, paths_to_metrics: List[str], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None) OptimizationLoop [source]¶
Constructs an asynchronous OptimizationLoop using Ax runners and metrics.
- ax.service.managed_loop.optimize(parameters: List[Dict[str, Union[str, bool, float, int, None, Sequence[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], evaluation_function: Union[Callable[[Dict[str, Optional[Union[str, bool, float, int]]]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]], Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]], int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[int, float, floating, integer, Tuple[Union[int, float, floating, integer], Optional[Union[int, float, floating, integer]]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, generation_strategy: Optional[GenerationStrategy] = None) Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]], Experiment, Optional[ModelBridge]] [source]¶
Construct and run a full optimization loop.
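A minimal usage sketch (hedged: the quadratic objective, parameter names, and metric name below are illustrative, not part of the Ax API):

from ax.service.managed_loop import optimize

def evaluation_function(parameterization):
    # Toy objective: distance from (0.5, 0.5); return {metric_name: (mean, SEM)}.
    x, y = parameterization["x"], parameterization["y"]
    return {"objective": ((x - 0.5) ** 2 + (y - 0.5) ** 2, None)}

best_parameters, best_values, experiment, model = optimize(
    parameters=[
        {"name": "x", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "y", "type": "range", "bounds": [0.0, 1.0]},
    ],
    evaluation_function=evaluation_function,
    objective_name="objective",
    minimize=True,
    total_trials=20,
)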
Interactive Loop¶
Scheduler¶
- class ax.service.utils.scheduler_options.SchedulerOptions(max_pending_trials: int = 10, trial_type: TrialType = TrialType.TRIAL, batch_size: Optional[int] = None, total_trials: Optional[int] = None, tolerated_trial_failure_rate: float = 0.5, min_failed_trials_for_failure_rate_check: int = 5, log_filepath: Optional[str] = None, logging_level: int = 20, ttl_seconds_for_trials: Optional[int] = None, init_seconds_between_polls: Optional[int] = 1, min_seconds_before_poll: float = 1.0, seconds_between_polls_backoff_factor: float = 1.5, timeout_hours: Optional[float] = None, run_trials_in_batches: bool = False, debug_log_run_metadata: bool = False, early_stopping_strategy: Optional[BaseEarlyStoppingStrategy] = None, global_stopping_strategy: Optional[BaseGlobalStoppingStrategy] = None, suppress_storage_errors_after_retries: bool = False)[source]¶
Bases:
object
Settings for a scheduler instance.
- max_pending_trials¶
Maximum number of pending trials the scheduler can have STAGED or RUNNING at once, required. If looking to use Runner.poll_available_capacity as a primary guide for how many trials should be pending at a given time, set this limit to a high number, as an upper bound on the number of trials that should not be exceeded.
- Type:
int
- trial_type¶
Type of trials (1-arm Trial or multi-arm BatchTrial) that will be deployed using the scheduler. Defaults to 1-arm Trial. NOTE: use BatchTrial only if you need to evaluate multiple arms together, e.g. in an A/B test influenced by data nonstationarity. For cases where just deploying multiple arms at once is beneficial but the trials are evaluated independently, implement the run_trials method in a scheduler subclass to deploy multiple 1-arm trials at the same time.
- batch_size¶
If using BatchTrial, the number of arms to be generated and deployed per trial.
- Type:
Optional[int]
- total_trials¶
Limit on the number of trials a given Scheduler should run. If no stopping criteria are implemented on a given scheduler, exhaustion of this number of trials will be used as the default stopping criterion in Scheduler.run_all_trials. Required to be non-null if using Scheduler.run_all_trials (not required for Scheduler.run_n_trials).
- Type:
Optional[int]
- tolerated_trial_failure_rate¶
Fraction of trials in this optimization that are allowed to fail without the whole optimization ending. Expects a value between 0 and 1. NOTE: Failure rate checks begin once min_failed_trials_for_failure_rate_check trials have failed; after that point, if the ratio of failed trials to total trials run so far exceeds the failure rate, the optimization will halt.
- Type:
float
- min_failed_trials_for_failure_rate_check¶
The minimum number of trials that must fail in the Scheduler in order to start checking the failure rate.
- Type:
int
- ttl_seconds_for_trials¶
Optional TTL for all trials created within this Scheduler, in seconds. Trials that remain RUNNING for more than their TTL seconds will be marked FAILED once the TTL elapses and may be re-suggested by the Ax optimization models.
- Type:
Optional[int]
- init_seconds_between_polls¶
Initial wait between rounds of polling, in seconds. Relevant if using the default wait-for-completed-runs functionality of the base Scheduler (if wait_for_completed_trials_and_report_results is not overridden). With the default waiting, every time a poll returns that no trial evaluations completed, the wait time will increase; once some completed trial evaluations are found, it will reset back to this value. Specify 0 to not introduce any wait between polls.
- Type:
Optional[int]
- min_seconds_before_poll¶
Minimum number of seconds between beginning to run a trial and the first poll to check trial status.
- Type:
float
- run_trials_in_batches¶
If True and poll_available_capacity is implemented to return non-null results, trials will be dispatched in groups via run_trials instead of one-by-one via run_trial. This can save time, I/O calls, or computation in cases where dispatching trials in groups is more efficient than sequential deployment. The size of the groups will be determined as the minimum of self.poll_available_capacity() and the number of generator runs that the generation strategy is able to produce without more data or reaching its allowed maximum parallelism limit.
- Type:
bool
- early_stopping_strategy¶
A BaseEarlyStoppingStrategy that determines whether a trial should be stopped given the current state of the experiment. Used in should_stop_trials_early.
- Type:
Optional[ax.early_stopping.strategies.base.BaseEarlyStoppingStrategy]
- global_stopping_strategy¶
A BaseGlobalStoppingStrategy that determines whether the full optimization should be stopped or not.
- Type:
Optional[ax.global_stopping.strategies.base.BaseGlobalStoppingStrategy]
- suppress_storage_errors_after_retries¶
Whether to fully suppress SQL storage-related errors if encountered, after retrying the call multiple times. Only use if SQL storage is not important for the given use case, since this will only log, but not raise, an exception if it’s encountered while saving to DB or loading from it.
- Type:
bool
- early_stopping_strategy: Optional[BaseEarlyStoppingStrategy] = None¶
- global_stopping_strategy: Optional[BaseGlobalStoppingStrategy] = None¶
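A minimal sketch of constructing SchedulerOptions (the option values are arbitrary illustrations; the resulting object is typically passed to a Scheduler):

from ax.service.utils.scheduler_options import SchedulerOptions, TrialType

options = SchedulerOptions(
    max_pending_trials=3,              # cap on STAGED/RUNNING trials at once
    trial_type=TrialType.TRIAL,        # deploy 1-arm trials
    total_trials=50,                   # stopping criterion for Scheduler.run_all_trials
    tolerated_trial_failure_rate=0.2,  # halt if >20% of trials fail (after the minimum count)
    init_seconds_between_polls=10,     # initial wait between status polls
    timeout_hours=8.0,                 # overall wall-clock budget
)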
- class ax.service.utils.scheduler_options.TrialType(value)[source]¶
Bases:
Enum
An enumeration.
- BATCH_TRIAL = 1¶
- TRIAL = 0¶
Utils¶
Best Point Identification¶
- class ax.service.utils.best_point_mixin.BestPointMixin[source]¶
Bases:
object
- get_best_parameters(optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Identifies the best parameterization tried in the experiment so far.
First attempts to do so with the model used in optimization and its corresponding predictions if available. Falls back to the best raw objective based on the data fetched from the experiment.
- NOTE:
TModelPredictArm is of the form: ({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
- Returns:
Tuple of parameterization and model predictions for it.
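A hedged usage sketch, assuming ax_client is an AxClient (expected to mix in BestPointMixin) whose single-objective experiment has completed trials:

best = ax_client.get_best_parameters(use_model_predictions=True)
if best is not None:
    parameterization, predictions = best
    if predictions is not None:
        means, covariances = predictions  # TModelPredictArm, as described above
    print(parameterization)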
- abstract get_best_trial(optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Optional[Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Identifies the best parameterization tried in the experiment so far.
First attempts to do so with the model used in optimization and its corresponding predictions if available. Falls back to the best raw objective based on the data fetched from the experiment.
- NOTE:
TModelPredictArm is of the form: ({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
- Returns:
Tuple of trial index, parameterization and model predictions for it.
- abstract get_hypervolume(optimization_config: Optional[MultiObjectiveOptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) float [source]¶
Calculate hypervolume of a pareto frontier based on either the posterior means of given observation features or observed data.
- Parameters:
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
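A hedged sketch, assuming ax_client is an AxClient (expected to mix in BestPointMixin) running a multi-objective experiment with objective thresholds defined:

observed_hv = ax_client.get_hypervolume(use_model_predictions=False)
print(f"Observed hypervolume of the Pareto frontier: {observed_hv}")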
- abstract get_pareto_optimal_parameters(optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Optional[Dict[int, Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Identifies the best parameterizations tried in the experiment so far, using model predictions if use_model_predictions is true and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.
NOTE: The format of this method's output is as follows: { trial_index -> (parameterization, (means, covariances)) }, where means are a dictionary of form { metric_name -> metric_mean } and covariances are a nested dictionary of form { one_metric_name -> { another_metric_name: covariance } }.
- Parameters:
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
- Returns:
None if it was not possible to extract the Pareto frontier, otherwise a mapping from trial index to a tuple of:
- the parameterization of the arm in that trial,
- a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).
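A hedged sketch of consuming this output, assuming ax_client is an AxClient (expected to mix in BestPointMixin) with a completed multi-objective experiment:

pareto = ax_client.get_pareto_optimal_parameters(use_model_predictions=True)
if pareto is not None:
    for trial_index, (parameterization, (means, covariances)) in pareto.items():
        print(trial_index, parameterization, means)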
- abstract get_trace() List[float] [source]¶
Get the optimization trace of the given experiment.
The output is equivalent to calling _get_hypervolume or _get_best_trial repeatedly, with an increasing sequence of trial_indices and with use_model_predictions = False, though this does it more efficiently.
- Parameters:
experiment – The experiment to get the trace for.
optimization_config – An optional optimization config to use for computing the trace. This allows computing the traces under different objectives or constraints without having to modify the experiment.
- Returns:
A list of observed hypervolumes or best values.
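A hedged sketch, assuming scheduler is a Scheduler instance (expected to mix in BestPointMixin) whose trials have completed:

trace = scheduler.get_trace()
for i, best_so_far in enumerate(trace):
    print(f"After trial {i}: best observed value / hypervolume = {best_so_far}")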
- abstract get_trace_by_progression(bins: Optional[List[float]] = None, final_progression_only: bool = False) Tuple[List[float], List[float]] [source]¶
Get the optimization trace with respect to trial progressions instead of trial_indices (which is the behavior used in get_trace). Note that this method does not take into account the parallelism of trials and essentially assumes that trials are run one after another, in the sense that it considers the total number of progressions “used” at the end of trial k to be the cumulative progressions “used” in trials 0,…,k. This method assumes that the final value of a particular trial is used and does not take the best value of a trial over its progressions.
The best observed value is computed at each value in bins (see below for details). If bins is not supplied, the method defaults to a heuristic of approximately NUM_BINS_PER_TRIAL per trial, where each trial is assumed to run until maximum progression (inferred from the data).
- Parameters:
experiment – The experiment to get the trace for.
optimization_config – An optional optimization config to use for computing the trace. This allows computing the traces under different objectives or constraints without having to modify the experiment.
bins – A list of progression values at which to calculate the best observed value. The best observed value at bins[i] is defined as the value observed in trials 0,…,j where j = largest trial such that the total progression in trials 0,…,j is less than bins[i].
final_progression_only – If True, considers the value of the last step to be the value of the trial. If False, considers the best along the curve to be the value of the trial.
- Returns:
A tuple containing (1) the list of observed hypervolumes or best values and (2) a list of associated x-values (i.e., progressions) useful for plotting.
- ax.service.utils.best_point.extract_Y_from_data(experiment: Experiment, metric_names: List[str], data: Optional[Data] = None) Tuple[Tensor, Tensor] [source]¶
Converts the experiment observation data into a tensor.
NOTE: This requires block design for observations. It will error out if any trial is missing data for any of the given metrics or if the data is missing the trial_index.
- Parameters:
experiment – The experiment to extract the data from.
metric_names – List of metric names to extract data for.
data – An optional Data object to use instead of the experiment data. Note that the experiment must have a corresponding COMPLETED or EARLY_STOPPED trial for each trial_index in the data.
- Returns:
A two-element Tuple containing a tensor of observed metrics and a tensor of trial_indices.
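A hedged sketch, assuming experiment holds completed trials with block-design data for both metric names (which are illustrative):

from ax.service.utils.best_point import extract_Y_from_data

Y, trial_indices = extract_Y_from_data(
    experiment=experiment,
    metric_names=["metric_a", "metric_b"],
)
# Y holds the observed metric values; trial_indices maps its rows back to trials.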
- ax.service.utils.best_point.fill_missing_thresholds_from_nadir(experiment: Experiment, optimization_config: OptimizationConfig) List[ObjectiveThreshold] [source]¶
Get the objective thresholds from the optimization config and fill the missing thresholds based on the nadir point.
- Parameters:
experiment – The experiment, whose data is used to calculate the nadir point.
optimization_config – Optimization config to get the objective thresholds and the objective directions from.
- Returns:
A list of objective thresholds, one for each objective in optimization config.
- ax.service.utils.best_point.get_best_by_raw_objective(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.
- TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
Tuple of parameterization, and model predictions for it.
- ax.service.utils.best_point.get_best_by_raw_objective_with_trial_index(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.
- TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
Tuple of trial index, parameterization, and model predictions for it.
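A hedged sketch, assuming experiment is an Ax Experiment with completed trials and attached data:

from ax.service.utils.best_point import get_best_by_raw_objective_with_trial_index

result = get_best_by_raw_objective_with_trial_index(experiment=experiment)
if result is not None:
    trial_index, parameterization, predictions = result
    print(trial_index, parameterization)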
- ax.service.utils.best_point.get_best_parameters(experiment: Experiment, models_enum: Type[ModelRegistryBase], optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Given an experiment, identifies the best arm.
First attempts to do so with the model used in optimization and its corresponding predictions, if available. Falls back to the best raw objective based on the data fetched from the experiment.
- TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
models_enum – Registry of all models that may be in the experiment’s generation strategy.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
Tuple of parameterization and model predictions for it.
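A hedged sketch, assuming experiment comes from a completed optimization and that the standard registry ax.modelbridge.registry.Models covers the models in its generation strategy:

from ax.modelbridge.registry import Models
from ax.service.utils.best_point import get_best_parameters

result = get_best_parameters(experiment=experiment, models_enum=Models)
if result is not None:
    parameterization, predictions = result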
- ax.service.utils.best_point.get_best_parameters_from_model_predictions(experiment: Experiment, models_enum: Type[ModelRegistryBase], trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Given an experiment, returns the best predicted parameterization and corresponding prediction, based on the most recent Trial with predictions. If no trials have predictions, returns None.
Only some models return predictions. For instance, GPEI does, while Sobol does not.
- TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
models_enum – Registry of all models that may be in the experiment’s generation strategy.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
Tuple of parameterization and model predictions for it.
- ax.service.utils.best_point.get_best_parameters_from_model_predictions_with_trial_index(experiment: Experiment, models_enum: Type[ModelRegistryBase], optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Given an experiment, returns the best predicted parameterization and corresponding prediction, based on the most recent Trial with predictions. If no trials have predictions, returns None.
Only some models return predictions. For instance, GPEI does, while Sobol does not.
- TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
models_enum – Registry of all models that may be in the experiment’s generation strategy.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
Tuple of trial index, parameterization, and model predictions for it.
- ax.service.utils.best_point.get_best_parameters_with_trial_index(experiment: Experiment, models_enum: Type[ModelRegistryBase], optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Optional[Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]] [source]¶
Given an experiment, identifies the best arm.
First attempts to do so with the model used in optimization and its corresponding predictions, if available. Falls back to the best raw objective based on the data fetched from the experiment.
- TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
models_enum – Registry of all models that may be in the experiment’s generation strategy.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
Tuple of trial index, parameterization, and model predictions for it.
- ax.service.utils.best_point.get_best_raw_objective_point(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Tuple[float, float]]] [source]¶
- ax.service.utils.best_point.get_best_raw_objective_point_with_trial_index(experiment: Experiment, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None) Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Tuple[float, float]]] [source]¶
Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.
- Parameters:
experiment – Experiment, on which to identify best raw objective arm.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
- Returns:
- Tuple of trial index, parameterization, and a mapping from metric name to a tuple of the corresponding objective mean and SEM.
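A hedged sketch of unpacking this output, assuming experiment holds completed trials with data:

from ax.service.utils.best_point import get_best_raw_objective_point_with_trial_index

trial_index, parameterization, metric_stats = get_best_raw_objective_point_with_trial_index(
    experiment=experiment,
)
for metric_name, (mean, sem) in metric_stats.items():
    print(f"{metric_name}: mean={mean}, SEM={sem}")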
- ax.service.utils.best_point.get_pareto_optimal_parameters(experiment: Experiment, generation_strategy: GenerationStrategy, optimization_config: Optional[OptimizationConfig] = None, trial_indices: Optional[Iterable[int]] = None, use_model_predictions: bool = True) Dict[int, Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]] [source]¶
Identifies the best parameterizations tried in the experiment so far, using model predictions if use_model_predictions is true and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.
NOTE: The format of this method's output is as follows: { trial_index -> (parameterization, (means, covariances)) }, where means are a dictionary of form { metric_name -> metric_mean } and covariances are a nested dictionary of form { one_metric_name -> { another_metric_name: covariance } }.
- Parameters:
experiment – Experiment, from which to find Pareto-optimal arms.
generation_strategy – Generation strategy containing the modelbridge.
optimization_config – Optimization config to use in place of the one stored on the experiment.
trial_indices – Indices of trials for which to retrieve data. If None will retrieve data from all available trials.
use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
- Returns:
A mapping from trial index to a tuple of:
- the parameterization of the arm in that trial,
- a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).
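A hedged sketch, assuming experiment and generation_strategy come from a completed multi-objective optimization:

from ax.service.utils.best_point import get_pareto_optimal_parameters

frontier = get_pareto_optimal_parameters(
    experiment=experiment,
    generation_strategy=generation_strategy,
    use_model_predictions=True,
)
for trial_index, (parameterization, (means, covariances)) in frontier.items():
    print(trial_index, parameterization, means)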
Instantiation¶
- class ax.service.utils.instantiation.FixedFeatures(parameters: Dict[str, Optional[Union[str, bool, float, int]]], trial_index: Optional[int] = None)[source]¶
Bases:
object
Class for representing fixed features via the Service API.
- class ax.service.utils.instantiation.InstantiationBase[source]¶
Bases:
object
This is a lightweight stateless class that bundles together instantiation utils. It is used both on its own and as a mixin to AxClient, with the intent that these methods can be overridden by its subclasses for specific use cases.
- static build_objective_threshold(objective: str, objective_properties: ObjectiveProperties) str [source]¶
Constructs a constraint string for an objective threshold, interpretable by make_experiment().
- Parameters:
objective – Name of the objective
objective_properties – Object containing: minimize: Whether this experiment represents a minimization problem. threshold: The bound in the objective’s threshold constraint.
- classmethod build_objective_thresholds(objectives: Dict[str, ObjectiveProperties]) List[str] [source]¶
Constructs a list of constraint strings for objective thresholds, interpretable by make_experiment().
- Parameters:
objectives – Mapping of name of the objective to Object containing: minimize: Whether this experiment represents a minimization problem. threshold: The bound in the objective’s threshold constraint.
- static constraint_from_str(representation: str, parameters: Dict[str, Parameter]) ParameterConstraint [source]¶
Parse string representation of a parameter constraint.
- classmethod make_experiment(parameters: List[Dict[str, Union[str, bool, float, int, None, Sequence[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], name: Optional[str] = None, description: Optional[str] = None, owners: Optional[List[str]] = None, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, status_quo: Optional[Dict[str, Optional[Union[str, bool, float, int]]]] = None, experiment_type: Optional[str] = None, tracking_metric_names: Optional[List[str]] = None, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None, objective_name: Optional[str] = None, minimize: bool = False, objectives: Optional[Dict[str, str]] = None, objective_thresholds: Optional[List[str]] = None, support_intermediate_data: bool = False, immutable_search_space_and_opt_config: bool = True, is_test: bool = False) Experiment [source]¶
Instantiation wrapper that allows for Ax Experiment creation without importing or instantiating any Ax classes.
- Parameters:
parameters – List of dictionaries representing parameters in the experiment search space. Required elements in the dictionaries are: 1. “name” (name of parameter, string), 2. “type” (type of parameter: “range”, “fixed”, or “choice”, string), and one of the following: 3a. “bounds” for range parameters (list of two values, lower bound first), 3b. “values” for choice parameters (list of values), or 3c. “value” for fixed parameters (single value). Optional elements are: 1. “log_scale” (for float-valued range parameters, bool), 2. “value_type” (to specify type that values of this parameter should take; expects “float”, “int”, “bool” or “str”), 3. “is_fidelity” (bool) and “target_value” (float) for fidelity parameters, 4. “is_ordered” (bool) for choice parameters, 5. “is_task” (bool) for task parameters, and 6. “digits” (int) for float-valued range parameters.
name – Name of the experiment to be created.
parameter_constraints – List of string representation of parameter constraints, such as “x3 >= x4” or “-x3 + 2*x4 - 3.5*x5 >= 2”. For the latter constraints, any number of arguments is accepted, and acceptable operators are “<=” and “>=”.
outcome_constraints – List of string representation of outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”
status_quo – Parameterization of the current state of the system. If set, this will be added to each trial to be evaluated alongside test configurations.
experiment_type – String indicating type of the experiment (e.g. name of a product in which it is used), if any.
tracking_metric_names – Names of additional tracking metrics not used for optimization.
objective_name – Name of the metric used as objective in this experiment, if experiment is single-objective optimization.
minimize – Whether this experiment represents a minimization problem, if experiment is a single-objective optimization.
objectives – Mapping from an objective name to “minimize” or “maximize” representing the direction for that objective. Used only for multi-objective optimization experiments.
objective_thresholds – A list of objective threshold constraints for multi- objective optimization, in the same string format as outcome_constraints argument.
support_intermediate_data – Whether trials may report metrics results for incomplete runs.
immutable_search_space_and_opt_config – Whether it’s possible to update the search space and optimization config on this experiment after creation. Defaults to True. If set to True, we won’t store or load copies of the search space and optimization config on each generator run, which will improve storage performance.
is_test – Whether this experiment will be a test experiment (useful for marking test experiments in storage etc). Defaults to False.
metric_definitions – A mapping of metric names to extra kwargs to pass to that metric
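A hedged sketch of building an experiment this way (parameter names, bounds, and the objective name are illustrative):

from ax.service.utils.instantiation import InstantiationBase

experiment = InstantiationBase.make_experiment(
    name="my_experiment",
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-4, 1e-1], "log_scale": True},
        {"name": "batch_size", "type": "choice", "values": [16, 32, 64], "value_type": "int"},
    ],
    objective_name="accuracy",
    minimize=False,
)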
- static make_fixed_observation_features(fixed_features: FixedFeatures) ObservationFeatures [source]¶
Construct ObservationFeatures from FixedFeatures.
- Parameters:
fixed_features – The fixed features for generation.
- Returns:
The new ObservationFeatures object.
- classmethod make_objective_thresholds(objective_thresholds: List[str], status_quo_defined: bool, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) List[ObjectiveThreshold] [source]¶
- classmethod make_objectives(objectives: Dict[str, str], metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) List[Objective] [source]¶
- classmethod make_optimization_config(objectives: Dict[str, str], objective_thresholds: List[str], outcome_constraints: List[str], status_quo_defined: bool, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) OptimizationConfig [source]¶
- classmethod make_optimization_config_from_properties(objectives: Optional[Dict[str, ObjectiveProperties]] = None, outcome_constraints: Optional[List[str]] = None, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None, status_quo_defined: bool = False) Optional[OptimizationConfig] [source]¶
Makes optimization config based on ObjectiveProperties objects
- Parameters:
objectives – Mapping from an objective name to object containing: minimize: Whether this experiment represents a minimization problem. threshold: The bound in the objective’s threshold constraint.
outcome_constraints – List of string representation of outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”
status_quo_defined – bool for whether the experiment has a status quo
metric_definitions – A mapping of metric names to extra kwargs to pass to that metric
- classmethod make_outcome_constraints(outcome_constraints: List[str], status_quo_defined: bool, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) List[OutcomeConstraint] [source]¶
- classmethod make_search_space(parameters: List[Dict[str, Union[str, bool, float, int, None, Sequence[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], parameter_constraints: Optional[List[str]]) SearchSpace [source]¶
- classmethod objective_threshold_constraint_from_str(representation: str, metric_definitions: Optional[Dict[str, Dict[str, Any]]] = None) ObjectiveThreshold [source]¶
- static optimization_config_from_objectives(objectives: List[Objective], objective_thresholds: List[ObjectiveThreshold], outcome_constraints: List[OutcomeConstraint]) OptimizationConfig [source]¶
Parse objectives and constraints to define optimization config.
The resulting optimization config will be a regular single-objective config if objectives is a list of one element, and a multi-objective config otherwise.
NOTE: If passing in multiple objectives, objective_thresholds must be a non-empty list defining constraints for each objective.
- class ax.service.utils.instantiation.MetricObjective(value)[source]¶
Bases:
Enum
An enumeration.
- MAXIMIZE = 2¶
- MINIMIZE = 1¶
- class ax.service.utils.instantiation.ObjectiveProperties(minimize: bool, threshold: Optional[float] = None)[source]¶
Bases:
object
Class that holds properties of objective functions. Can be used to define the objectives argument of ax_client.create_experiment, e.g.:
ax_client.create_experiment(
    name="moo_experiment",
    parameters=[...],
    objectives={
        # threshold arguments are optional
        "a": ObjectiveProperties(minimize=False, threshold=ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=ref_point[1]),
    },
)
- Parameters:
minimize – Boolean indicating whether the objective is to be minimized or maximized.
threshold – Optional float representing the smallest objective value (resp. largest if minimize=True) that is considered valuable in the context of multi-objective optimization. In BoTorch and in the literature, this is also known as an element of the reference point vector that defines the hyper-volume of the Pareto front.
Reporting¶
- ax.service.utils.report_utils.compare_to_baseline(experiment: Experiment, optimization_config: Optional[OptimizationConfig], comparison_arm_names: Optional[List[str]], baseline_arm_name: Optional[str] = None) Optional[str] [source]¶
Calculate metric improvement of the experiment against baseline. Returns the message(s) added to markdown_messages.
- ax.service.utils.report_utils.compare_to_baseline_impl(comparison_list: List[Tuple[str, bool, str, float, str, float]]) Optional[str] [source]¶
Implementation of compare_to_baseline, taking in a list of arm comparisons. Can be used directly with the output of ‘maybe_extract_baseline_comparison_values’
- ax.service.utils.report_utils.compute_maximum_map_values(experiment: Experiment, map_key: Optional[str] = None) Dict[int, float] [source]¶
A function that returns a map from trial_index to the maximum map value reached. If map_key is not specified, it uses the first map_key.
- ax.service.utils.report_utils.exp_to_df(exp: Experiment, metrics: Optional[List[Metric]] = None, run_metadata_fields: Optional[List[str]] = None, trial_properties_fields: Optional[List[str]] = None, trial_attribute_fields: Optional[List[str]] = None, additional_fields_callables: Optional[Dict[str, Callable[[Experiment], Dict[int, Union[str, float]]]]] = None, always_include_field_columns: bool = False, **kwargs: Any) pandas.DataFrame [source]¶
Transforms an experiment to a DataFrame with rows keyed by trial_index and arm_name, metrics pivoted into one row. If the pivot results in more than one row per arm (or one row per arm * map_keys combination if map_keys are present), results are omitted and a warning is produced. Only supports Experiment.
Transforms an Experiment into a pd.DataFrame.
- Parameters:
exp – An Experiment that may have pending trials.
metrics – Override list of metrics to return. Return all metrics if None.
run_metadata_fields – Fields to extract from trial.run_metadata for trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial.
trial_properties_fields – Fields to extract from trial._properties for trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial. Output column names will be prepended with "trial_properties_".
trial_attribute_fields – Fields to extract from trial attributes for each trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial.
additional_fields_callables – A dictionary of field names to callables, with each being a function from experiment to a trials_dict of the form {trial_index: value}. An example of a custom callable like this is the function compute_maximum_map_values.
always_include_field_columns – If True, even if all trials have missing values, include field columns anyway. Such columns are by default omitted (False).
- Returns:
A dataframe of inputs, metadata and metrics by trial and arm (and map_keys, if present). If no trials are available, returns an empty dataframe. If no metric outputs are available, returns a dataframe of inputs and metadata. Columns include:
trial_index
arm_name
trial_status
generation_method
any elements of exp.runner.run_metadata_report_keys that are present in the trial.run_metadata of each trial
one column per metric (named after the metric.name)
one column per parameter (named after the parameter.name)
- Return type:
DataFrame
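A hedged usage sketch, assuming experiment is an Ax Experiment with some completed trials:

from ax.service.utils.report_utils import exp_to_df

df = exp_to_df(exp=experiment)
print(df[["trial_index", "arm_name", "trial_status", "generation_method"]].head())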
- ax.service.utils.report_utils.get_figure_and_callback(plot_fn: Callable[[Scheduler], Figure]) Tuple[Figure, Callable[[Scheduler], None]] [source]¶
Produce a figure and a callback for updating the figure in place.
A likely use case is that plot_fn takes a Scheduler instance and returns a plotly Figure. Then get_figure_and_callback will produce a figure and callback that updates that figure according to plot_fn when the callback is passed to Scheduler.run_n_trials or Scheduler.run_all_trials.
- Parameters:
plot_fn – A function for producing a Plotly figure from a scheduler. If plot_fn raises a RuntimeError, the update will be skipped and optimization will proceed.
Example
>>> def _plot(scheduler: Scheduler):
>>>     standard_plots = get_standard_plots(scheduler.experiment)
>>>     return standard_plots[0]
>>>
>>> fig, callback = get_figure_and_callback(_plot)
- ax.service.utils.report_utils.get_standard_plots(experiment: Experiment, model: Optional[ModelBridge], data: Optional[Data] = None, model_transitions: Optional[List[int]] = None, true_objective_metric_name: Optional[str] = None, early_stopping_strategy: Optional[BaseEarlyStoppingStrategy] = None, limit_points_per_plot: Optional[int] = None, global_sensitivity_analysis: bool = True) List[Figure] [source]¶
Extract standard plots for single-objective optimization.
Extracts a list of plots from an Experiment and ModelBridge of general interest to an Ax user. Currently not supported are:
- TODO: multi-objective optimization
- TODO: ChoiceParameter plots
- Parameters:
experiment – The Experiment from which to obtain standard plots.
model – The ModelBridge used to suggest trial parameters.
data – If specified, data to which to fit the model before generating plots.
model_transitions – The arm numbers at which shifts in generation_strategy occur.
true_objective_metric_name – Name of the metric to use as the true objective.
early_stopping_strategy – Early stopping strategy used throughout the experiment; used for visualizing when curves are stopped.
limit_points_per_plot – Limit the number of points used per metric in each curve plot. Passed to _get_curve_plot_dropdown.
global_sensitivity_analysis – If True, plot total variance-based sensitivity analysis for the model parameters. If False, plot sensitivities based on GP kernel lengthscales. Defaults to True.
- Returns:
a plot of objective value vs. trial index, to show experiment progression
a plot of objective value vs. range parameter values, only included if the model associated with generation_strategy can create predictions. This consists of:
a plot_slice plot if the search space contains one range parameter
an interact_contour plot if the search space contains multiple range parameters
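A hedged sketch, assuming experiment and generation_strategy come from a completed single-objective optimization:

from ax.service.utils.report_utils import get_standard_plots

figures = get_standard_plots(
    experiment=experiment,
    model=generation_strategy.model,  # the current ModelBridge, or None
)
for fig in figures:
    fig.show()  # Plotly figures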
- ax.service.utils.report_utils.maybe_extract_baseline_comparison_values(experiment: Experiment, optimization_config: Optional[OptimizationConfig], comparison_arm_names: Optional[List[str]], baseline_arm_name: Optional[str]) Optional[List[Tuple[str, bool, str, float, str, float]]] [source]¶
Extracts the baseline values from the experiment, for use in comparing the baseline arm to the optimal results. Requires that the user specify the names of the arms to compare to.
- Returns:
(metric_name, minimize, baseline_arm_name, baseline_value, comparison_arm_name, comparison_arm_value, )
- Return type:
List of tuples containing
WithDBSettingsBase¶
EarlyStopping¶
- ax.service.utils.early_stopping.get_early_stopping_metrics(experiment: Experiment, early_stopping_strategy: Optional[BaseEarlyStoppingStrategy]) List[str] [source]¶
A helper function that returns a list of metric names on which a given early_stopping_strategy is operating.
- ax.service.utils.early_stopping.should_stop_trials_early(early_stopping_strategy: Optional[BaseEarlyStoppingStrategy], trial_indices: Set[int], experiment: Experiment) Dict[int, Optional[str]] [source]¶
Evaluate whether to early-stop running trials.
- Parameters:
early_stopping_strategy – A BaseEarlyStoppingStrategy that determines whether a trial should be stopped given the state of an experiment.
trial_indices – Indices of trials to consider for early stopping.
experiment – The experiment containing the trials.
- Returns:
A dictionary mapping trial indices that should be early stopped to (optional) messages with the associated reason.
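A hedged sketch, assuming experiment and strategy are an Ax Experiment and a configured BaseEarlyStoppingStrategy, and that the listed trial indices refer to currently running trials:

from ax.service.utils.early_stopping import should_stop_trials_early

to_stop = should_stop_trials_early(
    early_stopping_strategy=strategy,
    trial_indices={0, 1, 2},
    experiment=experiment,
)
for trial_index, reason in to_stop.items():
    print(f"Stop trial {trial_index}: {reason}")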