ax.service¶
Ax Client¶

class
ax.service.ax_client.
AxClient
(generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None, db_settings: Optional[ax.storage.sqa_store.structs.DBSettings] = None, enforce_sequential_optimization: bool = True, random_seed: Optional[int] = None, verbose_logging: bool = True, suppress_storage_errors: bool = False)[source]¶ Bases:
ax.service.utils.with_db_settings_base.WithDBSettingsBase
Convenience handler for management of the experimentation cycle through a service-like API. An external system manages scheduling of the cycle and makes calls to this client to get the next suggestion in the experiment and log back data from the evaluation of that suggestion.
Note: AxClient expects to only propose 1 arm (suggestion) per trial; support for use cases that require use of batches is coming soon.
Two custom types used in this class for convenience are TParamValue and TParameterization. Those are shortcuts for Union[str, bool, float, int] and Dict[str, Union[str, bool, float, int]], respectively.
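These shortcut types can be illustrated with plain Python type aliases; the parameter names and values below are hypothetical, chosen only to show each allowed value type:

```python
from typing import Dict, Union

# TParamValue: a single parameter value.
TParamValue = Union[str, bool, float, int]
# TParameterization: a mapping from parameter name to its value.
TParameterization = Dict[str, TParamValue]

# A hypothetical parameterization mixing the allowed value types.
parameterization: TParameterization = {
    "learning_rate": 0.01,   # float
    "num_layers": 3,         # int
    "use_dropout": True,     # bool
    "optimizer": "adam",     # str
}
```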
 Parameters
generation_strategy – Optional generation strategy. If not set, one is intelligently chosen based on properties of search space.
db_settings – Settings for saving and reloading the underlying experiment to a database. Expected to be of type ax.storage.sqa_store.structs.DBSettings; requires SQLAlchemy.
enforce_sequential_optimization – Whether to enforce that, when it is reasonable to switch models during the optimization (as prescribed by num_trials in the generation strategy), Ax will wait for enough trials to be completed with data to proceed. Defaults to True. If set to False, Ax will keep generating new trials from the previous model until enough data is gathered. Use this only if necessary; otherwise, it is more resource-efficient to optimize sequentially, by waiting until enough data is available to use the next model.
random_seed –
Optional integer random seed, set to fix the optimization random seed for reproducibility. Works only for the Sobol quasi-random generator and for BoTorch-powered models. For the latter models, the trials generated from the same optimization setup with the same seed will be mostly similar, but the exact parameter values may still vary, and trials later in the optimization will diverge more and more. This is because a degree of randomness is essential for high performance of the Bayesian optimization models and is not controlled by the seed.
Note: In multi-threaded environments, the random seed is thread-safe, but does not actually guarantee reproducibility. Whether the outcomes will be exactly the same for two identical operations that use the random seed depends on whether the threads modify the random state in the same order across the two operations.
verbose_logging – Whether Ax should log significant optimization events, defaults to True.
suppress_storage_errors – Whether to suppress SQL storage-related errors if encountered. Only use if SQL storage is not important for the given use case, since this will only log, but not raise, an exception if it's encountered while saving to DB or loading from it.

BACH_TRIAL_RAW_DATA_FORMAT_ERROR_MESSAGE
= 'Raw data must be a dict for batched trials.'¶

TRIAL_RAW_DATA_FORMAT_ERROR_MESSAGE
= 'Raw data must be data for a single arm for non batched trials.'¶

abandon_trial
(trial_index: int, reason: Optional[str] = None) → None[source]¶ Abandons a trial and adds optional metadata to it.
 Parameters
trial_index – Index of trial within the experiment.

attach_trial
(parameters: Dict[str, Optional[Union[str, bool, float, int]]], ttl_seconds: Optional[int] = None) → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], int][source]¶ Attach a new trial with the given parameterization to the experiment.
 Parameters
parameters – Parameterization of the new trial.
ttl_seconds – If specified, will consider the trial failed after this many seconds. Used to detect dead trials that were not marked failed properly.
 Returns
Tuple of parameterization and trial index from newly created trial.

complete_trial
(trial_index: int, raw_data: Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]], metadata: Optional[Dict[str, Union[str, int]]] = None, sample_size: Optional[int] = None) → None[source]¶ Completes the trial with given metric values and adds optional metadata to it.
NOTE: When raw_data does not specify SEM for a given metric, Ax will default to the assumption that the data is noisy (specifically, corrupted by additive zero-mean Gaussian noise) and that the level of noise should be inferred by the optimization model. To indicate that the data is noiseless, set SEM to 0.0, for example:
ax_client.complete_trial(trial_index=0, raw_data={"my_objective": (objective_mean_value, 0.0)})
 Parameters
trial_index – Index of trial within the experiment.
raw_data – Evaluation data for the trial. Can be a mapping from metric name to a tuple of mean and SEM, just a tuple of mean and SEM if only one metric in optimization, or just the mean if SEM is unknown (then Ax will infer observation noise level). Can also be a list of (fidelities, mapping from metric name to a tuple of mean and SEM).
metadata – Additional metadata to track about this run.
sample_size – Number of samples collected for the underlying arm, optional.
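The accepted raw_data shapes can be sketched with plain Python values; the metric names below are hypothetical, and no Ax import is needed to illustrate the structure:

```python
# Mapping from metric name to (mean, SEM): the most explicit form,
# usable with any number of metrics.
raw_data_full = {"accuracy": (0.92, 0.01), "latency": (120.0, 5.0)}

# Single-objective shorthand: just a (mean, SEM) tuple.
raw_data_tuple = (0.92, 0.01)

# Mean only: Ax will infer the observation noise level.
raw_data_mean = 0.92

# SEM of 0.0 marks the observation as noiseless.
raw_data_noiseless = {"accuracy": (0.92, 0.0)}
```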

create_experiment
(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]], name: Optional[str] = None, objective_name: Optional[str] = None, minimize: Optional[bool] = None, objectives: Optional[Dict[str, ax.service.utils.instantiation.ObjectiveProperties]] = None, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, status_quo: Optional[Dict[str, Optional[Union[str, bool, float, int]]]] = None, overwrite_existing_experiment: bool = False, experiment_type: Optional[str] = None, tracking_metric_names: Optional[List[str]] = None, choose_generation_strategy_kwargs: Optional[Dict[str, Any]] = None, support_intermediate_data: bool = False, immutable_search_space_and_opt_config: bool = True, is_test: bool = False) → None[source]¶ Create a new experiment and save it if DBSettings available.
 Parameters
parameters – List of dictionaries representing parameters in the experiment search space. Required elements in the dictionaries are: 1. “name” (name of parameter, string), 2. “type” (type of parameter: “range”, “fixed”, or “choice”, string), and one of the following: 3a. “bounds” for range parameters (list of two values, lower bound first), 3b. “values” for choice parameters (list of values), or 3c. “value” for fixed parameters (single value). Optional elements are: 1. “log_scale” (for float-valued range parameters, bool), 2. “value_type” (to specify the type that values of this parameter should take; expects “float”, “int”, “bool”, or “str”), 3. “is_fidelity” (bool) and “target_value” (float) for fidelity parameters, 4. “is_ordered” (bool) for choice parameters, 5. “is_task” (bool) for task parameters, and 6. “digits” (int) for float-valued range parameters.
name – Name of the experiment to be created.
objective_name – [DEPRECATED] Name of the metric used as objective in this experiment. This metric must be present in the raw_data argument to complete_trial.
minimize – [DEPRECATED] Whether this experiment represents a minimization problem.
objectives – Mapping from an objective name to object containing: minimize: Whether this experiment represents a minimization problem. threshold: The bound in the objective’s threshold constraint.
parameter_constraints – List of string representations of parameter constraints, such as “x3 >= x4” or “x3 + 2*x4 - 3.5*x5 >= 2”. For the latter constraints, any number of arguments is accepted, and acceptable operators are “<=” and “>=”.
outcome_constraints – List of string representation of outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”
status_quo – Parameterization of the current state of the system. If set, this will be added to each trial to be evaluated alongside test configurations.
overwrite_existing_experiment – If an experiment has already been set on this AxClient instance, whether to reset it to the new one. If overwriting the experiment, generation strategy will be reselected for the new experiment and restarted. To protect experiments in production, one cannot overwrite existing experiments if the experiment is already stored in the database, regardless of the value of overwrite_existing_experiment.
tracking_metric_names – Names of additional tracking metrics not used for optimization.
choose_generation_strategy_kwargs – Keyword arguments to pass to choose_generation_strategy function which determines what generation strategy should be used when none was specified on init.
support_intermediate_data – Whether trials may report intermediate results while still running (i.e. before being completed via ax_client.complete_trial).
immutable_search_space_and_opt_config – Whether the search space and optimization config on this experiment should be immutable (not updatable) after creation. Defaults to True. If set to True, we won’t store or load copies of the search space and optimization config on each generator run, which will improve storage performance.
is_test – Whether this experiment will be a test experiment (useful for marking test experiments in storage etc). Defaults to False.
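A sketch of a parameters list combining the three parameter types described above; the names and values are hypothetical and only the documented dictionary keys are used:

```python
# One range, one choice, and one fixed parameter, using the keys
# documented above ("name", "type", and "bounds"/"values"/"value").
parameters = [
    {
        "name": "learning_rate",
        "type": "range",
        "bounds": [1e-4, 1e-1],  # lower bound first
        "log_scale": True,       # optional, for float-valued range parameters
        "value_type": "float",
    },
    {
        "name": "optimizer",
        "type": "choice",
        "values": ["adam", "sgd", "rmsprop"],
        "is_ordered": False,     # optional, for choice parameters
    },
    {
        "name": "batch_size",
        "type": "fixed",
        "value": 32,
    },
]
```

A list like this would then be passed as the parameters argument to create_experiment.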

property
experiment
¶ Returns the experiment set on this Ax client.

classmethod
from_json_snapshot
(serialized: Dict[str, Any], **kwargs) → AxClientSubclass[source]¶ Recreate an AxClient from a JSON snapshot.

property
generation_strategy
¶ Returns the generation strategy set on this experiment.

get_best_parameters
(use_model_predictions: bool = True) → Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶ Identifies the best parameterization tried in the experiment so far.
First attempts to do so with the model used in optimization and its corresponding predictions if available. Falls back to the best raw objective based on the data fetched from the experiment.
 NOTE: TModelPredictArm is of the form: ({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
 Parameters
use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.
 Returns
Tuple of parameterization and model predictions for it.
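The TModelPredictArm shape can be illustrated with plain dictionaries; the metric names and numbers below are hypothetical:

```python
# ({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
means = {"accuracy": 0.91, "latency": 118.0}
covariances = {
    "accuracy": {"accuracy": 0.0004, "latency": -0.02},
    "latency": {"accuracy": -0.02, "latency": 25.0},
}
model_predictions = (means, covariances)
```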

get_contour_plot
(param_x: Optional[str] = None, param_y: Optional[str] = None, metric_name: Optional[str] = None) → ax.plot.base.AxPlotConfig[source]¶ Retrieves a plot configuration for a contour plot of the response surface. For response surfaces with more than two parameters, the two selected parameters will appear on the axes, and the remaining parameters will be fixed to the middle of their range. If the contour parameter arguments are not provided, the first two parameters in the search space will be used. If the contour metric is not provided, the objective will be used.
 Parameters
param_x – Name of the parameter to use on the x-axis of the contour response surface plot.
param_y – Name of the parameter to use on the y-axis of the contour response surface plot.
metric_name – Name of the metric, for which to plot the response surface.

get_current_trial_generation_limit
() → Tuple[int, bool][source]¶ How many trials this AxClient instance can currently produce via calls to get_next_trial, before more trials are completed, and whether the optimization is complete.
NOTE: If the return value of this function is (0, False), no more trials can currently be produced by this AxClient instance, but the optimization is not complete; once more trials are completed with data, more new trials can be generated.
 Returns: a two-item tuple of:
the number of trials that can currently be produced, with -1 meaning unlimited trials,
whether no more trials can be produced by this AxClient instance at any point (e.g. if the search space is exhausted or the generation strategy is completed).

get_feature_importances
(relative: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Get a bar chart showing feature_importances for a metric.
A dropdown controls the metric for which the importances are displayed.
 Parameters
relative – Whether the values are displayed as percentiles or as raw importance metrics.

get_max_parallelism
() → List[Tuple[int, int]][source]¶ Retrieves maximum number of trials that can be scheduled in parallel at different stages of optimization.
Some optimization algorithms profit significantly from sequential optimization (i.e. suggest a few points, get updated with data for them, repeat; see https://ax.dev/docs/bayesopt.html). The parallelism setting indicates how many trials should be running simultaneously (generated, but not yet completed with data).
The output of this method is a mapping of form {num_trials -> max_parallelism_setting}, where the max_parallelism_setting is used for num_trials trials. If max_parallelism_setting is -1, as many trials as necessary can be run in parallel. If num_trials in a tuple is -1, then the corresponding max_parallelism_setting should be used for all subsequent trials.
For example, if the returned list is [(5, -1), (12, 6), (-1, 3)], the schedule could be: run 5 trials with any parallelism, run 6 trials in parallel twice, then run 3 trials in parallel for as long as needed. Here, ‘running’ a trial means obtaining a next trial from AxClient through get_next_trial and completing it with data when available.
 Returns
Mapping of form {num_trials -> max_parallelism_setting}.
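The returned schedule can be interpreted with a small helper; this is a sketch of the scheduling convention described above, not part of the Ax API:

```python
from typing import List, Tuple

def describe_parallelism(schedule: List[Tuple[int, int]]) -> List[str]:
    """Render a get_max_parallelism-style schedule as human-readable steps.

    In each (num_trials, max_parallelism) pair, -1 means "all remaining
    trials" for num_trials and "any parallelism" for max_parallelism.
    """
    steps = []
    for num_trials, max_parallelism in schedule:
        trials = "all remaining trials" if num_trials == -1 else f"{num_trials} trials"
        par = "any parallelism" if max_parallelism == -1 else f"at most {max_parallelism} in parallel"
        steps.append(f"run {trials} with {par}")
    return steps

# The example schedule from the docstring above.
print(describe_parallelism([(5, -1), (12, 6), (-1, 3)]))
```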

get_model_predictions
(metric_names: Optional[List[str]] = None) → Dict[int, Dict[str, Tuple[float, float]]][source]¶ Retrieve model-estimated means and covariances for all metrics. Note: this function retrieves the predictions for the ‘in-sample’ arms, which means that the mapping returned by this function will only contain predictions for trials that have been completed with data.
 Parameters
metric_names – Names of the metrics for which to retrieve predictions. All metrics on the experiment will be retrieved if this argument is not specified.
 Returns
A mapping from trial index to a mapping of metric names to tuples of predicted metric mean and SEM, of form: { trial_index -> { metric_name: ( mean, SEM ) } }.

get_next_trial
(ttl_seconds: Optional[int] = None) → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], int][source]¶ Generate trial with the next set of parameters to try in the iteration process.
Note: The Service API currently supports only 1-arm trials.
 Parameters
ttl_seconds – If specified, will consider the trial failed after this many seconds. Used to detect dead trials that were not marked failed properly.
 Returns
Tuple of trial parameterization, trial index

get_next_trials
(max_trials: int, ttl_seconds: Optional[int] = None) → Tuple[Dict[int, Dict[str, Optional[Union[str, bool, float, int]]]], bool][source]¶ Generate as many trials as currently possible.
NOTE: Useful for running multiple trials in parallel: produces multiple trials, with their number limited by:
parallelism limit on the current generation step,
number of trials in the current generation step,
number of trials required to complete before moving to the next generation step, if applicable,
and the max_trials argument to this method.
 Parameters
max_trials – Limit on how many trials the call to this method should produce.
ttl_seconds – If specified, will consider the trial failed after this many seconds. Used to detect dead trials that were not marked failed properly.
 Returns: two-item tuple of:
mapping from trial indices to parameterizations in those trials,
boolean indicator of whether optimization is completed and no more trials can be generated going forward.

get_optimization_trace
(objective_optimum: Optional[float] = None) → ax.plot.base.AxPlotConfig[source]¶ Retrieves the plot configuration for optimization trace, which shows the evolution of the objective mean over iterations.
 Parameters
objective_optimum – Optimal objective, if known, for display in the visualization.

get_pareto_optimal_parameters
(use_model_predictions: bool = True) → Optional[Dict[int, Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶ Identifies the best parameterizations tried in the experiment so far, using model predictions if
use_model_predictions is true and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.
NOTE: The format of this method’s output is as follows: { trial_index –> (parameterization, (means, covariances)) }, where means are a dictionary of form { metric_name –> metric_mean } and covariances are a nested dictionary of form { one_metric_name –> { another_metric_name: covariance } }.
 Parameters
use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.
 Returns
None if it was not possible to extract the Pareto frontier, otherwise a mapping from trial index to the tuple of: the parameterization of the arm in that trial, and a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).

get_trial_parameters
(trial_index: int) → Dict[str, Optional[Union[str, bool, float, int]]][source]¶ Retrieve the parameterization of the trial by the given index.

load_experiment_from_database
(experiment_name: str, choose_generation_strategy_kwargs: Optional[Dict[str, Any]] = None) → None[source]¶ Load an existing experiment from database using the DBSettings passed to this AxClient on instantiation.
 Parameters
experiment_name – Name of the experiment.
 Returns
Experiment object.

classmethod
load_from_json_file
(filepath: str = 'ax_client_snapshot.json', **kwargs) → AxClientSubclass[source]¶ Restore an AxClient and its state from a JSON-serialized snapshot, residing in a .json file at the given path.

log_trial_failure
(trial_index: int, metadata: Optional[Dict[str, str]] = None) → None[source]¶ Mark that the given trial has failed while running.
 Parameters
trial_index – Index of trial within the experiment.
metadata – Additional metadata to track about this run.

property
objective
¶

property
objective_name
¶ Returns the name of the objective in this optimization.

property
objective_names
¶ Returns the names of the objectives in this optimization.

save_to_json_file
(filepath: str = 'ax_client_snapshot.json') → None[source]¶ Save a JSON-serialized snapshot of this AxClient’s settings and state to a .json file at the given path.

to_json_snapshot
() → Dict[str, Any][source]¶ Serialize this AxClient to JSON to be able to interrupt and restart optimization and save it to file by the provided path.
 Returns
A JSON-safe dict representation of this AxClient.
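The interrupt-and-restart pattern amounts to a JSON round trip. The snapshot below is a hypothetical stand-in dict (a real AxClient's to_json_snapshot would return a richer JSON-safe structure), used only to show the round trip:

```python
import json

# Stand-in for the dict returned by ax_client.to_json_snapshot().
snapshot = {"experiment": {"name": "demo_experiment"}, "generation_strategy": {}}

# Persist the snapshot as JSON text...
serialized = json.dumps(snapshot)

# ...and later restore it; from_json_snapshot accepts such a dict.
restored = json.loads(serialized)
assert restored == snapshot
```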

update_running_trial_with_intermediate_data
(trial_index: int, raw_data: Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]], metadata: Optional[Dict[str, Union[str, int]]] = None, sample_size: Optional[int] = None) → None[source]¶ Updates the trial with given metric values without completing it. Also adds optional metadata to it. Useful for intermediate results like the metrics of a partially optimized machine learning model. In these cases it should be called instead of complete_trial until it is time to complete the trial.
NOTE: When raw_data does not specify SEM for a given metric, Ax will default to the assumption that the data is noisy (specifically, corrupted by additive zero-mean Gaussian noise) and that the level of noise should be inferred by the optimization model. To indicate that the data is noiseless, set SEM to 0.0, for example:
ax_client.update_trial(trial_index=0, raw_data={"my_objective": (objective_mean_value, 0.0)})
 Parameters
trial_index – Index of trial within the experiment.
raw_data – Evaluation data for the trial. Can be a mapping from metric name to a tuple of mean and SEM, just a tuple of mean and SEM if only one metric in optimization, or just the mean if SEM is unknown (then Ax will infer observation noise level). Can also be a list of (fidelities, mapping from metric name to a tuple of mean and SEM).
metadata – Additional metadata to track about this run.
sample_size – Number of samples collected for the underlying arm, optional.

update_trial_data
(trial_index: int, raw_data: Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]], metadata: Optional[Dict[str, Union[str, int]]] = None, sample_size: Optional[int] = None) → None[source]¶ Attaches additional data for a completed trial (for example, if the trial was completed with data for only one of the required metrics and more data needs to be attached).
 Parameters
trial_index – Index of trial within the experiment.
raw_data – Evaluation data for the trial. Can be a mapping from metric name to a tuple of mean and SEM, just a tuple of mean and SEM if only one metric in optimization, or just the mean if there is no SEM. Can also be a list of (fidelities, mapping from metric name to a tuple of mean and SEM).
metadata – Additional metadata to track about this run.
sample_size – Number of samples collected for the underlying arm, optional.
Managed Loop¶

class
ax.service.managed_loop.
OptimizationLoop
(experiment: ax.core.experiment.Experiment, evaluation_function: Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, wait_time: int = 0, run_async: bool = False, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None)[source]¶ Bases:
object
Managed optimization loop, in which Ax oversees deployment of trials and gathering data.

full_run
() → ax.service.managed_loop.OptimizationLoop[source]¶ Runs full optimization loop as defined in the provided optimization plan.

get_best_point
() → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]][source]¶ Obtains the best point encountered in the course of this optimization.

get_current_model
() → Optional[ax.modelbridge.base.ModelBridge][source]¶ Obtain the most recently used model in optimization.

static
with_evaluation_function
(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]], evaluation_function: Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None) → OptimizationLoop[source]¶ Constructs a synchronous OptimizationLoop using an evaluation function.

classmethod
with_runners_and_metrics
(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]], path_to_runner: str, paths_to_metrics: List[str], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None) → OptimizationLoop[source]¶ Constructs an asynchronous OptimizationLoop using Ax runners and metrics.


ax.service.managed_loop.
optimize
(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]], evaluation_function: Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None) → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]], ax.core.experiment.Experiment, Optional[ax.modelbridge.base.ModelBridge]][source]¶ Construct and run a full optimization loop.
Utils¶
Best Point Identification¶

ax.service.utils.best_point.
get_best_from_model_predictions
(experiment: ax.core.experiment.Experiment) → Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶ Given an experiment, returns the best predicted parameterization and corresponding prediction, based on the most recent Trial with predictions. If no trials have predictions, returns None.
Only some models return predictions. For instance, GPEI does, while Sobol does not.
 TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
 Parameters
experiment – Experiment, on which to identify best raw objective arm.
 Returns
Tuple of parameterization and model predictions for it.

ax.service.utils.best_point.
get_best_parameters
(experiment: ax.core.experiment.Experiment, use_model_predictions: bool = True) → Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶ Given an experiment, identifies the best arm.
First attempts to do so with the model used in optimization and its corresponding predictions if available. Falls back to the best raw objective based on the data fetched from the experiment.
 TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
 Parameters
experiment – Experiment, on which to identify best raw objective arm.
use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.
 Returns
Tuple of parameterization and model predictions for it.

ax.service.utils.best_point.
get_best_raw_objective_point
(experiment: ax.core.experiment.Experiment, optimization_config: Optional[ax.core.optimization_config.OptimizationConfig] = None) → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Tuple[float, float]]][source]¶ Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.
 Parameters
experiment – Experiment, on which to identify best raw objective arm.
optimization_config – Optimization config to use in absence or in place of the one stored on the experiment.
 Returns
 Tuple of parameterization and a mapping from metric name to a tuple of the corresponding objective mean and SEM.

ax.service.utils.best_point.
get_pareto_optimal_parameters
(experiment: ax.core.experiment.Experiment, generation_strategy: ax.modelbridge.generation_strategy.GenerationStrategy, use_model_predictions: bool = True) → Optional[Dict[int, Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶ Identifies the best parameterizations tried in the experiment so far, using model predictions if use_model_predictions is true and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.
NOTE: The format of this method’s output is as follows: { trial_index -> (parameterization, (means, covariances)) }, where means are a dictionary of form { metric_name -> metric_mean } and covariances are a nested dictionary of form { one_metric_name -> { another_metric_name: covariance } }.
 Parameters
experiment – Experiment, from which to find Pareto-optimal arms.
generation_strategy – Generation strategy containing the modelbridge.
use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method’s output will also be based on model predictions and may differ from the observed values.
 Returns
None if it was not possible to extract the Pareto frontier; otherwise, a mapping from trial index to the tuple of: the parameterization of the arm in that trial, and a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).
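The output format, and the Pareto-dominance property it implies, can be sketched as follows (metric names and values are hypothetical, and both metrics are treated as maximized):

```python
# Sketch of the output: {trial_index: (parameterization, (means, covariances))},
# plus a simple dominance check over the means dictionaries.
pareto = {
    0: ({"x": 0.1}, ({"m1": 1.0, "m2": 3.0}, None)),
    2: ({"x": 0.7}, ({"m1": 2.0, "m2": 2.0}, None)),
}

def dominates(a, b):
    """True if means dict `a` is at least as good as `b` on every metric
    and strictly better on at least one (maximization convention)."""
    return all(a[m] >= b[m] for m in a) and any(a[m] > b[m] for m in a)

means = [m for _, (m, _) in pareto.values()]
# On a Pareto frontier, no returned point dominates another.
no_dominated = not any(dominates(a, b) for a in means for b in means if a is not b)
```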
Instantiation¶

class
ax.service.utils.instantiation.
MetricObjective
(value)[source]¶ Bases:
enum.Enum
An enumeration.

MAXIMIZE
= 2¶

MINIMIZE
= 1¶


class
ax.service.utils.instantiation.
ObjectiveProperties
(minimize: bool, threshold: Union[float, NoneType] = None)[source]¶ Bases:
object

ax.service.utils.instantiation.
build_objective_threshold
(objective: str, objective_properties: ax.service.utils.instantiation.ObjectiveProperties) → str[source]¶ Constructs a constraint string for an objective threshold, interpretable by make_experiment().
 Parameters
objective – Name of the objective
objective_properties – Object containing: minimize: Whether this experiment represents a minimization problem. threshold: The bound in the objective’s threshold constraint.
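A hedged sketch of how such a threshold string could be assembled; the exact format is assumed here to match the outcome_constraints style ("metric_name >= bound"), and the metric name is hypothetical:

```python
# Illustrative (not Ax's implementation): a minimization objective gets an
# upper bound, a maximization objective gets a lower bound.
def build_threshold_str(objective, minimize, threshold):
    op = "<=" if minimize else ">="
    return f"{objective} {op} {threshold}"

s = build_threshold_str("latency", minimize=True, threshold=250.0)
```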

ax.service.utils.instantiation.
constraint_from_str
(representation: str, parameters: Dict[str, ax.core.parameter.Parameter]) → ax.core.parameter_constraint.ParameterConstraint[source]¶ Parse string representation of a parameter constraint.
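An illustrative parser (not Ax's implementation) for linear constraint strings of the kind accepted by make_experiment, such as "x3 + 2*x4 - 3.5*x5 >= 2":

```python
import re

# Parse "<linear combination> <op> <bound>" into coefficients, operator, bound.
def parse_constraint(representation):
    lhs, op, bound = re.split(r"\s*(<=|>=)\s*", representation)
    coefs = {}
    # Normalize "a - b" into "a + -b" so terms split uniformly on "+".
    for term in lhs.replace("- ", "+ -").split("+"):
        term = term.strip()
        if "*" in term:
            coef, name = term.split("*")
            coefs[name.strip()] = float(coef)
        elif term:  # bare parameter name, implicit coefficient of +/-1
            coefs[term.lstrip("-").strip()] = -1.0 if term.startswith("-") else 1.0
    return coefs, op, float(bound)

coefs, op, bound = parse_constraint("x3 + 2*x4 - 3.5*x5 >= 2")
```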

ax.service.utils.instantiation.
data_and_evaluations_from_raw_data
(raw_data: Dict[str, Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], metric_names: List[str], trial_index: int, sample_sizes: Dict[str, int], start_time: Optional[int] = None, end_time: Optional[int] = None) → Tuple[Dict[str, Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], ax.core.abstract_data.AbstractDataFrameData][source]¶ Transforms evaluations into Ax Data.
Each evaluation is either a trial evaluation: {metric_name -> (mean, SEM)} or a fidelity trial evaluation for multi-fidelity optimizations: [(fidelities, {metric_name -> (mean, SEM)})].
 Parameters
raw_data – Mapping from arm name to raw_data.
metric_names – Names of metrics used to transform raw data to evaluations.
trial_index – Index of the trial, for which the evaluations are.
sample_sizes – Number of samples collected for each arm, may be empty if unavailable.
start_time – Optional start time of run of the trial that produced this data, in milliseconds.
end_time – Optional end time of run of the trial that produced this data, in milliseconds.
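The raw-data forms described above can be normalized into the standard {metric_name: (mean, SEM)} evaluation with a sketch like the following (not Ax's code; the single-metric shortcut assumes exactly one objective name):

```python
# Normalize the accepted raw_data forms into {metric: (mean, sem)}:
# a dict of metric values, a bare (mean, sem) tuple, or a bare mean.
def to_evaluation(raw, metric_names):
    if isinstance(raw, dict):  # already {metric: mean or (mean, sem)}
        return {m: v if isinstance(v, tuple) else (float(v), None)
                for m, v in raw.items()}
    if isinstance(raw, tuple):  # bare (mean, sem) for a single metric
        return {metric_names[0]: raw}
    return {metric_names[0]: (float(raw), None)}  # bare mean, unknown SEM

ev1 = to_evaluation(3.7, ["objective"])
ev2 = to_evaluation({"objective": (3.7, 0.1)}, ["objective"])
```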

ax.service.utils.instantiation.
logger
= <Logger ax.service.utils.instantiation (DEBUG)>¶ Utilities for RESTful-like instantiation of Ax classes needed in AxClient.

ax.service.utils.instantiation.
make_experiment
(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]], name: Optional[str] = None, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, status_quo: Optional[Dict[str, Optional[Union[str, bool, float, int]]]] = None, experiment_type: Optional[str] = None, tracking_metric_names: Optional[List[str]] = None, objective_name: Optional[str] = None, minimize: bool = False, objectives: Optional[Dict[str, str]] = None, objective_thresholds: Optional[List[str]] = None, support_intermediate_data: bool = False, immutable_search_space_and_opt_config: bool = True, is_test: bool = False) → ax.core.experiment.Experiment[source]¶ Instantiation wrapper that allows for Ax Experiment creation without importing or instantiating any Ax classes.
 Parameters
parameters – List of dictionaries representing parameters in the experiment search space. Required elements in the dictionaries are: 1. “name” (name of parameter, string), 2. “type” (type of parameter: “range”, “fixed”, or “choice”, string), and one of the following: 3a. “bounds” for range parameters (list of two values, lower bound first), 3b. “values” for choice parameters (list of values), or 3c. “value” for fixed parameters (single value). Optional elements are: 1. “log_scale” (for float-valued range parameters, bool), 2. “value_type” (to specify type that values of this parameter should take; expects “float”, “int”, “bool” or “str”), 3. “is_fidelity” (bool) and “target_value” (float) for fidelity parameters, 4. “is_ordered” (bool) for choice parameters, 5. “is_task” (bool) for task parameters, and 6. “digits” (int) for float-valued range parameters.
name – Name of the experiment to be created.
parameter_constraints – List of string representations of parameter constraints, such as “x3 >= x4” or “x3 + 2*x4 - 3.5*x5 >= 2”. For the latter constraints, any number of arguments is accepted, and acceptable operators are “<=” and “>=”.
outcome_constraints – List of string representations of outcome constraints of the form “metric_name >= bound”, like “m1 <= 3.”
status_quo – Parameterization of the current state of the system. If set, this will be added to each trial to be evaluated alongside test configurations.
experiment_type – String indicating type of the experiment (e.g. name of a product in which it is used), if any.
tracking_metric_names – Names of additional tracking metrics not used for optimization.
objective_name – Name of the metric used as objective in this experiment, if the experiment is a single-objective optimization.
minimize – Whether this experiment represents a minimization problem, if the experiment is a single-objective optimization.
objectives – Mapping from an objective name to “minimize” or “maximize”, representing the direction for that objective. Used only for multi-objective optimization experiments.
objective_thresholds – A list of objective threshold constraints for multi-objective optimization, in the same string format as the outcome_constraints argument.
support_intermediate_data – Whether trials may report metrics results for incomplete runs.
immutable_search_space_and_opt_config – Whether it’s possible to update the search space and optimization config on this experiment after creation. Defaults to True. If set to True, we won’t store or load copies of the search space and optimization config on each generator run, which will improve storage performance.
is_test – Whether this experiment will be a test experiment (useful for marking test experiments in storage etc). Defaults to False.
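The parameter-dictionary format described above can be illustrated with a small self-contained check (parameter names and values here are hypothetical examples):

```python
# Example parameter dictionaries in the format make_experiment accepts:
# each has "name", "type", and the type-specific required key.
parameters = [
    {"name": "lr", "type": "range", "bounds": [1e-5, 1e-1], "log_scale": True},
    {"name": "batch_size", "type": "choice", "values": [32, 64, 128], "is_ordered": True},
    {"name": "optimizer", "type": "fixed", "value": "adam"},
]

REQUIRED_BY_TYPE = {"range": "bounds", "choice": "values", "fixed": "value"}

def validate(param):
    """Check the required keys described above for each parameter type."""
    return "name" in param and REQUIRED_BY_TYPE[param["type"]] in param

all_valid = all(validate(p) for p in parameters)
```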

ax.service.utils.instantiation.
make_objective_thresholds
(objective_thresholds: List[str], status_quo_defined: bool) → List[ax.core.outcome_constraint.ObjectiveThreshold][source]¶

ax.service.utils.instantiation.
make_objectives
(objectives: Dict[str, str]) → List[ax.core.objective.Objective][source]¶

ax.service.utils.instantiation.
make_optimization_config
(objectives: Dict[str, str], objective_thresholds: List[str], outcome_constraints: List[str], status_quo_defined: bool) → ax.core.optimization_config.OptimizationConfig[source]¶

ax.service.utils.instantiation.
make_outcome_constraints
(outcome_constraints: List[str], status_quo_defined: bool) → List[ax.core.outcome_constraint.OutcomeConstraint][source]¶

ax.service.utils.instantiation.
make_search_space
(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]], parameter_constraints: List[str]) → ax.core.search_space.SearchSpace[source]¶

ax.service.utils.instantiation.
objective_threshold_constraint_from_str
(representation: str) → ax.core.outcome_constraint.ObjectiveThreshold[source]¶

ax.service.utils.instantiation.
optimization_config_from_objectives
(objectives: List[ax.core.objective.Objective], objective_thresholds: List[ax.core.outcome_constraint.ObjectiveThreshold], outcome_constraints: List[ax.core.outcome_constraint.OutcomeConstraint]) → ax.core.optimization_config.OptimizationConfig[source]¶ Parse objectives and constraints to define optimization config.
The resulting optimization config will be a regular single-objective config if objectives is a list of one element and a multi-objective config otherwise.
NOTE: If passing in multiple objectives, objective_thresholds must be a non-empty list defining constraints for each objective.

ax.service.utils.instantiation.
outcome_constraint_from_str
(representation: str) → ax.core.outcome_constraint.OutcomeConstraint[source]¶ Parse string representation of an outcome constraint.

ax.service.utils.instantiation.
parameter_from_json
(representation: Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]]]]) → ax.core.parameter.Parameter[source]¶ Instantiate a parameter from JSON representation.

ax.service.utils.instantiation.
raw_data_to_evaluation
(raw_data: Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]], metric_names: List[str], start_time: Optional[int] = None, end_time: Optional[int] = None) → Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]][source]¶ Format the trial evaluation data to a standard TTrialEvaluation (mapping from metric names to a tuple of mean and SEM) representation, or to a TMapTrialEvaluation.
Note: this function expects raw_data to be data for a Trial, not a BatchedTrial.
Reporting¶

ax.service.utils.report_utils.
exp_to_df
(exp: ax.core.experiment.Experiment, metrics: Optional[List[ax.core.metric.Metric]] = None, run_metadata_fields: Optional[List[str]] = None, trial_properties_fields: Optional[List[str]] = None, deduplicate_on_map_keys: bool = True, **kwargs: Any) → pandas.DataFrame[source]¶ Transforms an experiment to a DataFrame with rows keyed by trial_index and arm_name, metrics pivoted into one row. If the pivot results in more than one row per arm (or one row per arm * map_keys combination, if map_keys are present), results are omitted and a warning is produced. Only supports Experiment. Transforms an Experiment into a pd.DataFrame.
 Parameters
exp – An Experiment that may have pending trials.
metrics – Override list of metrics to return. Return all metrics if None.
run_metadata_fields – Fields to extract from trial.run_metadata for each trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial.
trial_properties_fields – Fields to extract from trial._properties for each trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial. Output column names will be prepended with "trial_properties_".
deduplicate_on_map_keys – Whether each trial_index * arm_name * metric combination should correspond to one row in the df output by this function. If True, for each such combination, keep the row of maximum map_keys column(s) values. Note that if map_keys is a list, its order may affect which row is kept. If False, keep rows for all unique combinations of arm * map_keys.
**kwargs – Custom named arguments, useful for passing complex objects from call-site to the fetch_data callback.
 Returns
A dataframe of inputs, metadata and metrics by trial and arm (and map_keys, if present). If no trials are available, returns an empty dataframe. If no metric outputs are available, returns a dataframe of inputs and metadata.
 Return type
DataFrame
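The deduplicate_on_map_keys behavior can be sketched in pure Python, using a hypothetical "step" map key: for each trial_index * arm_name * metric combination, keep only the row with the maximum map-key value.

```python
# Rows as they might appear before deduplication (values are illustrative).
rows = [
    {"trial_index": 0, "arm_name": "0_0", "metric": "loss", "step": 1, "mean": 0.9},
    {"trial_index": 0, "arm_name": "0_0", "metric": "loss", "step": 2, "mean": 0.5},
    {"trial_index": 1, "arm_name": "1_0", "metric": "loss", "step": 1, "mean": 0.7},
]

deduped = {}
for row in rows:
    key = (row["trial_index"], row["arm_name"], row["metric"])
    if key not in deduped or row["step"] > deduped[key]["step"]:
        deduped[key] = row  # keep the row with the largest map-key value

result = list(deduped.values())  # one row per trial * arm * metric
```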

ax.service.utils.report_utils.
get_best_trial
(exp: ax.core.experiment.Experiment, additional_metrics: Optional[List[ax.core.metric.Metric]] = None, run_metadata_fields: Optional[List[str]] = None, **kwargs: Any) → Optional[pandas.DataFrame][source]¶ Finds the optimal trial given an experiment, based on raw objective value.
Returns a 1-row dataframe. Should match the row of exp_to_df with the best raw objective value, given the same arguments.
 Parameters
exp – An Experiment that may have pending trials.
additional_metrics – List of metrics to return in addition to the objective metric. Return all metrics if None.
run_metadata_fields – fields to extract from trial.run_metadata for trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial.
**kwargs – Custom named arguments, useful for passing complex objects from callsite to the fetch_data callback.
 Returns
A dataframe of inputs and metrics of the optimal trial.
 Return type
DataFrame

ax.service.utils.report_utils.
get_standard_plots
(experiment: ax.core.experiment.Experiment, model: Optional[ax.modelbridge.base.ModelBridge], model_transitions: Optional[List[int]] = None) → List[plotly.graph_objs._figure.Figure][source]¶ Extract standard plots for single-objective optimization.
Extracts a list of plots from an Experiment and ModelBridge of general interest to an Ax user. Currently not supported are: TODO: multi-objective optimization; TODO: ChoiceParameter plots.
 Parameters
experiment – The Experiment from which to obtain standard plots.
model – The ModelBridge used to suggest trial parameters.
data – If specified, data to which to fit the model before generating plots.
model_transitions – The arm numbers at which shifts in generation_strategy occur.
 Returns
a plot of objective value vs. trial index, to show experiment progression
a plot of objective value vs. range parameter values, only included if the model associated with generation_strategy can create predictions. This consists of:
a plot_slice plot if the search space contains one range parameter
an interact_contour plot if the search space contains multiple range parameters
WithDBSettingsBase¶

class
ax.service.utils.with_db_settings_base.
WithDBSettingsBase
(db_settings: Optional[ax.storage.sqa_store.structs.DBSettings] = None, logging_level: int = 20, suppress_all_errors: bool = False)[source]¶ Bases:
object
Helper class providing methods for saving changes made to an experiment if db_settings property is set to a nonNone value on the instance.

property
db_settings
¶ DB settings set on this instance; guaranteed to be nonNone.

property
db_settings_set
¶ Whether nonNone DB settings are set on this instance.

Scheduler¶

class
ax.service.scheduler.
ExperimentStatusProperties
(value)[source]¶ 
Enum for keys in experiment properties that represent status of optimization run through scheduler.

NUM_TRIALS_RUN_PER_CALL
= 'num_trials_run_per_call'¶

RESUMED_FROM_STORAGE_TIMESTAMPS
= 'resumed_from_storage_timestamps'¶

RUN_TRIALS_STATUS
= 'run_trials_success'¶


exception
ax.service.scheduler.
FailureRateExceededError
(message: str, hint: str = '')[source]¶ Bases:
ax.exceptions.core.AxError
Error that indicates the sweep was aborted due to excessive failure rate.

class
ax.service.scheduler.
RunTrialsStatus
(value)[source]¶ 
Possible statuses for each call to Scheduler.run_trials_and_yield_results, used in recording experiment status.
ABORTED
= 'aborted'¶

STARTED
= 'started'¶

SUCCESS
= 'success'¶


class
ax.service.scheduler.
Scheduler
(experiment: ax.core.experiment.Experiment, generation_strategy: ax.modelbridge.generation_strategy.GenerationStrategy, options: ax.service.scheduler.SchedulerOptions, db_settings: Optional[ax.storage.sqa_store.structs.DBSettings] = None, _skip_experiment_save: bool = False)[source]¶ Bases:
ax.service.utils.with_db_settings_base.WithDBSettingsBase
,abc.ABC
Closedloop manager class for Ax optimization.

experiment
¶ Experiment, in which results of the optimization will be recorded.

generation_strategy
¶ Generation strategy for the optimization, describes models that will be used in optimization.

options
¶ SchedulerOptions for this scheduler instance.

db_settings
¶ Settings for saving and reloading the underlying experiment to a database. Expected to be of type ax.storage.sqa_store.structs.DBSettings and require SQLAlchemy.

_skip_experiment_save
¶ If True, scheduler will not resave the experiment passed to it. Use only if the experiment had just been saved, as otherwise experiment state could get corrupted.

property
candidate_trials
¶ Candidate trials on the experiment this scheduler is running.
 Returns
List of trials that are currently candidates.

completion_criterion
() → bool[source]¶ Optional stopping criterion for optimization, defaults to a check of whether total_trials trials have been run.
 Returns
Boolean representing whether the optimization should be stopped.

error_if_failure_rate_exceeded
(force_check: bool = False) → None[source]¶ Checks whether the failure rate (set in scheduler options) has been exceeded.
 Parameters
force_check – Indicates whether to force a failurerate check regardless of the number of trials that have been executed. If False (default), the check will be skipped if the sweep has fewer than five failed iterations. If True, the check will be performed unless there are 0 failures.
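The failure-rate check described above amounts to logic like the following sketch (not Ax's exact code; the default values mirror the corresponding SchedulerOptions fields):

```python
# Skip the check unless enough trials have failed (or force_check is set),
# then compare the failure ratio to the tolerated rate.
def failure_rate_exceeded(num_failed, num_ran, tolerated_rate=0.5,
                          min_failed_for_check=5, force_check=False):
    if num_failed == 0:
        return False  # per the docstring, never check with 0 failures
    if num_failed < min_failed_for_check and not force_check:
        return False  # too few failures to draw a conclusion
    return num_failed / num_ran > tolerated_rate

ok = failure_rate_exceeded(num_failed=2, num_ran=4)    # skipped: fewer than 5 failures
bad = failure_rate_exceeded(num_failed=6, num_ran=10)  # 0.6 > 0.5, rate exceeded
```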

experiment
: ax.core.experiment.Experiment¶

classmethod
from_stored_experiment
(experiment_name: str, options: ax.service.scheduler.SchedulerOptions, db_settings: Optional[ax.storage.sqa_store.structs.DBSettings] = None, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None, **kwargs: Any) → ax.service.scheduler.Scheduler[source]¶ Create a Scheduler with a previously stored experiment, which the scheduler should resume.
 Parameters
experiment_name – Experiment to load and resume.
options – SchedulerOptions, with which to set up the new scheduler.
db_settings – Optional DBSettings to use for reloading the experiment; also passed as the db_settings argument to the scheduler constructor.
generation_strategy – Generation strategy to use to provide candidates for the resumed optimization. Provide this argument only if the experiment does not already have a generation strategy associated with it.
kwargs – Kwargs to pass through to the
Scheduler
constructor.

generation_strategy
: ax.modelbridge.generation_strategy.GenerationStrategy¶

classmethod
get_default_db_settings
() → ax.storage.sqa_store.structs.DBSettings[source]¶

has_capacity
(n: int = 1) → bool[source]¶ Optional method to check whether there is available capacity to schedule n trials.
 Parameters
n – Number of trials, the capacity to run which is being checked. Defaults to 1.
 Returns
A boolean, representing whether n trials can be run.

property
has_trials_in_flight
¶ Whether the experiment on this scheduler currently has running or staged trials.

logger
: logging.LoggerAdapter¶

poll_and_process_results
() → bool[source]¶  Takes the following actions:
Poll trial runs for their statuses
If any experiment metrics are available while running, fetch data for running trials
Determine which trials should be early stopped
Earlystop those trials
Update the experiment with the new trial statuses
Fetch the data for newly completed trials
 Returns
A boolean representing whether any trial evaluations completed or have been marked as failed or abandoned, changing the number of currently running trials.

poll_available_capacity
() → Optional[int][source]¶ Optional method to check how much available capacity there is to schedule trial evaluations.
 Returns
An optional integer, representing how many trials there is available capacity for, if available. If not available, returns None.

abstract
poll_trial_status
() → Dict[ax.core.base_trial.TrialStatus, Set[int]][source]¶ Required polling function, checks the status of any nonterminal trials and returns their indices as a mapping from TrialStatus to a list of indices.
NOTE: Does not need to handle waiting between polling while trials are running; that logic is handled in Scheduler.poll, which calls this function.
 Returns
A dictionary mapping TrialStatus to a list of trial indices that have the respective status at the time of the polling. This does not need to include trials that at the time of polling already have a terminal (ABANDONED, FAILED, COMPLETED) status (but it may).
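A minimal sketch of what an override could look like; the TrialStatus enum here is a stand-in for ax.core.base_trial.TrialStatus, and the job-status mapping it reads from is hypothetical:

```python
from enum import Enum

class TrialStatus(Enum):
    """Stand-in for ax.core.base_trial.TrialStatus (subset of values)."""
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"

def poll_trial_status(job_statuses):
    """Group trial indices by status: {TrialStatus: {trial indices}}."""
    out = {}
    for trial_index, status in job_statuses.items():
        out.setdefault(status, set()).add(trial_index)
    return out

statuses = poll_trial_status({0: TrialStatus.COMPLETED,
                              1: TrialStatus.RUNNING,
                              2: TrialStatus.COMPLETED})
```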

report_results
() → Dict[str, Any][source]¶ Optional userdefined function for reporting intermediate and final optimization results (e.g. make some API call, write to some other db). This function is called whenever new results are available during the optimization.
 Returns
An optional dictionary with any relevant data about optimization.

run
(max_new_trials: int) → bool[source]¶ Schedules trial evaluation(s) if stopping criterion is not triggered, maximum parallelism is not currently reached, and capacity allows. Logs any failures / issues.
 Parameters
max_new_trials – Maximum number of new trials this function should generate and run (useful when generating and running trials in batches). Note that this function might also redeploy existing CANDIDATE trials that failed to deploy before, which will not count against this number.
 Returns
Boolean representing success status.

run_all_trials
(timeout_hours: Optional[int] = None) → ax.service.scheduler.OptimizationResult[source]¶ Run all trials until completion_criterion is reached (by default, the completion criterion is reaching the num_trials setting, passed to the scheduler on instantiation as part of SchedulerOptions).
NOTE: This function is available only when SchedulerOptions.num_trials is specified.
 Parameters
timeout_hours – Limit on the length of the optimization; if reached, the optimization will abort even if the completion criterion is not yet reached.

run_n_trials
(max_trials: int, timeout_hours: Optional[int] = None) → ax.service.scheduler.OptimizationResult[source]¶ Run up to max_trials trials; will run all max_trials unless the completion criterion is reached. For the base Scheduler, the completion criterion is reaching the total number of trials set in SchedulerOptions, so if that option is not specified, this function will always run exactly max_trials trials.
 Parameters
max_trials – Maximum number of trials to run.
timeout_hours – Limit on the length of the optimization; if reached, the optimization will abort even if the completion criterion is not yet reached.

run_trial
(trial: ax.core.base_trial.BaseTrial) → Dict[str, Any][source]¶ Optional deployment function, runs a single evaluation of the given trial. Can be used instead of runner.run(trial) if no runner is defined on the experiment; will be required in that case.
NOTE: the retry_on_exception decorator applied to this function should also be applied to its subclassing override if one is provided and retry behavior is desired.
 Parameters
trial – Trial to be deployed, contains arms with parameterizations to be evaluated. Can be a Trial if it contains only one arm or a BatchTrial if it contains multiple arms.
 Returns
Dict of run metadata from the deployment process.

run_trials
(trials: Iterable[ax.core.base_trial.BaseTrial]) → Dict[int, Dict[str, Any]][source]¶ Optional deployment function, runs a single evaluation for each of the given trials. By default simply loops over run_trial. Should be overridden if deploying multiple trials in a batch is preferable.
NOTE: the retry_on_exception decorator applied to this function should also be applied to its subclassing override if one is provided and retry behavior is desired.
 Parameters
trials – Iterable of trials to be deployed, each containing arms with parameterizations to be evaluated. Each can be a Trial if it contains only one arm or a BatchTrial if it contains multiple arms.
 Returns
Dict of trial index to the run metadata of that trial from the deployment process.

run_trials_and_yield_results
(max_trials: int, timeout_hours: Optional[int] = None) → Generator[Dict[str, Any], None, None][source]¶ Make continuous calls to run and process_results to run up to max_trials trials, until the completion criterion is reached. This is the ‘main’ method of a Scheduler.
 Parameters
max_trials – Maximum number of trials to run in this generator. The generator will run trials
timeout_hours – Maximum number of hours, for which to run the optimization. This function will abort after running for timeout_hours even if stopping criterion has not been reached. If set to None, no optimization timeout will be applied.

property
running_trials
¶ Currently running trials.
 Returns
List of trials that are currently running.

should_abort_optimization
() → bool[source]¶ Checks whether this scheduler has reached some interruption / abort criterion, such as an overall optimization timeout, tolerated failure rate, etc.

should_consider_optimization_complete
() → bool[source]¶ Whether this scheduler should consider this optimization complete and not run more trials (and conclude the optimization via _complete_optimization). An optimization is considered complete when the generation strategy signalled completion or when the custom completion_criterion on this scheduler evaluates to True.

should_stop_trials_early
(trial_indices: Set[int]) → Dict[int, Optional[str]][source]¶ Evaluate whether to early-stop running trials.
 Parameters
trial_indices – Indices of trials to consider for early stopping.
 Returns
A set of indices of trials to early-stop (will be a subset of initially-passed trials).

stop_trial_run
(trial: ax.core.base_trial.BaseTrial, reason: Optional[str] = None) → None[source]¶ Stops the job that executes a given trial.
 Parameters
trial – Trial to be stopped.
reason – The reason the trial is to be stopped.

stop_trial_runs
(trials: List[ax.core.base_trial.BaseTrial], reasons: Optional[List[Optional[str]]] = None) → None[source]¶ Stops the jobs that execute given trials.
Used if, for example, TTL for a trial was specified and expired, or poor early results suggest the trial is not worth running to completion.
Requires a runner to be defined on the experiment in this base class implementation, but can be overridden in subclasses to not require a runner.
Override the default implementation if it’s desirable to stop trials in bulk.
 Parameters
trials – Trials to be stopped.
reasons – A list of strings describing the reasons for why the trials are to be stopped (in the same order).

summarize_final_result
() → ax.service.scheduler.OptimizationResult[source]¶ Get some summary of result: which trial did best, what were the metric values, what were encountered failures, etc.

wait_for_completed_trials_and_report_results
() → Dict[str, Any][source]¶ Continuously poll for successful trials, with limited exponential backoff, and process the results. Stop once at least one successful trial has been found. This function can be overridden to a different waiting function as needed; it must call poll_and_process_results to ensure that trials that completed their evaluation are appropriately marked as ‘COMPLETED’ in Ax.
Returns: Results of the optimization so far, represented as a dict. The contents of the dict depend on the implementation of report_results in the given Scheduler subclass.
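The limited exponential backoff between polls can be sketched as follows (mirroring init_seconds_between_polls and seconds_between_polls_backoff_factor; the cap value is an assumption for illustration, not a SchedulerOptions field):

```python
# Compute the sequence of wait times between polls: start at the initial
# wait and multiply by the backoff factor each round, capped at a maximum.
def backoff_waits(init_seconds=1.0, factor=1.5, cap=60.0, polls=6):
    waits, wait = [], init_seconds
    for _ in range(polls):
        waits.append(wait)
        wait = min(wait * factor, cap)  # grow until the cap is hit
    return waits

waits = backoff_waits()  # e.g. 1.0, 1.5, 2.25, ...
```

In the real Scheduler, the wait resets back to the initial value as soon as a poll finds completed trial evaluations.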


exception
ax.service.scheduler.
SchedulerInternalError
(message: str, hint: str = '')[source]¶ Bases:
ax.exceptions.core.AxError
Error that indicates an error within the Scheduler logic.

class
ax.service.scheduler.
SchedulerOptions
(trial_type: Type[ax.core.base_trial.BaseTrial] = <class 'ax.core.trial.Trial'>, total_trials: Optional[int] = None, tolerated_trial_failure_rate: float = 0.5, min_failed_trials_for_failure_rate_check: int = 5, log_filepath: Optional[str] = None, logging_level: int = 20, ttl_seconds_for_trials: Optional[int] = None, init_seconds_between_polls: Optional[int] = 1, min_seconds_before_poll: float = 1.0, seconds_between_polls_backoff_factor: float = 1.5, run_trials_in_batches: bool = False, debug_log_run_metadata: bool = False, early_stopping_strategy: Optional[ax.early_stopping.strategies.BaseEarlyStoppingStrategy] = None, suppress_storage_errors_after_retries: bool = False)[source]¶ Bases:
object
Settings for a scheduler instance.

trial_type
¶ Type of trials (1-arm Trial or multi-arm BatchTrial) that will be deployed using the scheduler. Defaults to 1-arm Trial. NOTE: use BatchTrial only if you need to evaluate multiple arms together, e.g. in an A/B test influenced by data non-stationarity. For cases where just deploying multiple arms at once is beneficial but the trials are evaluated independently, implement the run_trials method in a scheduler subclass, to deploy multiple 1-arm trials at the same time.
 Type

total_trials
¶ Limit on the number of trials a given Scheduler should run. If no stopping criteria are implemented on a given scheduler, exhaustion of this number of trials will be used as the default stopping criterion in Scheduler.run_all_trials. Required to be non-null if using Scheduler.run_all_trials (not required for Scheduler.run_n_trials).
 Type
Optional[int]

tolerated_trial_failure_rate
¶ Fraction of trials in this optimization that are allowed to fail without the whole optimization ending. Expects a value between 0 and 1. NOTE: Failure rate checks begin once min_failed_trials_for_failure_rate_check trials have failed; after that point, if the ratio of failed trials to total trials run so far exceeds the failure rate, the optimization will halt.
 Type

min_failed_trials_for_failure_rate_check
¶ The minimum number of trials that must fail in the Scheduler in order to start checking the failure rate.
 Type
int

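The interaction between the two failure-rate settings above can be sketched in plain Python. This is a simplified illustration of the documented rule, not the Scheduler's actual implementation, and the function name is hypothetical:

```python
def should_halt(num_failed: int, num_total: int,
                tolerated_trial_failure_rate: float = 0.5,
                min_failed_trials_for_failure_rate_check: int = 5) -> bool:
    """Sketch of the failure-rate rule: the check only begins once enough
    trials have failed; after that, halt if failed/total exceeds the rate."""
    if num_failed < min_failed_trials_for_failure_rate_check:
        return False  # too few failures so far to judge the failure rate
    return num_failed / num_total > tolerated_trial_failure_rate

# e.g. 5 failures out of 8 trials: 5/8 = 0.625 > 0.5, so the run would halt
```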
ttl_seconds_for_trials
¶ Optional TTL for all trials created within this
Scheduler
, in seconds. Trials that remain RUNNING
for more than their TTL seconds will be marked FAILED
once the TTL elapses and may be re-suggested by the Ax optimization models. Type
Optional[int]
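As a rough illustration of the TTL rule (a hypothetical helper, not Ax's internal logic — the real check is applied to a trial's lifecycle state):

```python
import time
from typing import Optional

def is_trial_expired(run_started_at: float, ttl_seconds: Optional[float],
                     now: Optional[float] = None) -> bool:
    """Sketch of the TTL check: a RUNNING trial whose age exceeds its TTL
    should be marked FAILED. ttl_seconds=None disables the check."""
    if ttl_seconds is None:
        return False  # no TTL configured for this trial
    now = time.time() if now is None else now
    return now - run_started_at > ttl_seconds
```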

init_seconds_between_polls
¶ Initial wait between rounds of polling, in seconds. Relevant if using the default wait-for-completed-runs functionality of the base
Scheduler
(if wait_for_completed_trials_and_report_results
is not overridden). With the default waiting, every time a poll returns that no trial evaluations completed, the wait time will increase; once some completed trial evaluations are found, it will reset back to this value. Specify 0 to not introduce any wait between polls. Type
Optional[int]

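The grow-and-reset polling behavior described above can be sketched as follows, combining init_seconds_between_polls with the seconds_between_polls_backoff_factor parameter from the constructor signature (a simplified illustration with a hypothetical function name, not the Scheduler's actual code):

```python
def next_poll_wait(current_wait: float, found_completed: bool,
                   init_seconds_between_polls: float = 1.0,
                   backoff_factor: float = 1.5) -> float:
    """Sketch of the documented behavior: the wait grows by the backoff
    factor while no trials complete, and resets once some do."""
    if found_completed:
        return init_seconds_between_polls  # reset after progress
    return current_wait * backoff_factor   # back off while idle
```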
min_seconds_before_poll
¶ Minimum number of seconds between beginning to run a trial and the first poll to check trial status.
 Type
float

run_trials_in_batches
¶ If True and
poll_available_capacity
is implemented to return non-null results, trials will be dispatched in groups via run_trials instead of one-by-one via run_trial
. This can save time, I/O calls, or computation in cases where dispatching trials in groups is more efficient than sequential deployment. The size of the groups will be determined as the minimum of self.poll_available_capacity()
and the number of generator runs that the generation strategy is able to produce without more data or reaching its allowed maximum parallelism limit. Type
bool

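The group-size rule above can be sketched like so. This is an illustrative approximation only: the function name is hypothetical, and the fallback of size 1 stands in for the Scheduler's one-by-one run_trial dispatch path:

```python
from typing import Optional

def trials_to_dispatch(run_trials_in_batches: bool,
                       available_capacity: Optional[int],
                       producible_generator_runs: int) -> int:
    """Sketch: groups are only used when batching is enabled and
    poll_available_capacity returns a non-null value; the group size is the
    minimum of polled capacity and producible generator runs."""
    if not run_trials_in_batches or available_capacity is None:
        return 1  # fall back to one-by-one dispatch via run_trial
    return min(available_capacity, producible_generator_runs)
```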
early_stopping_strategy
¶ A
BaseEarlyStoppingStrategy
that determines whether a trial should be stopped given the current state of the experiment. Used in should_stop_trials_early
. Type
Optional[ax.early_stopping.strategies.BaseEarlyStoppingStrategy]

suppress_storage_errors_after_retries
¶ Whether to fully suppress SQL storage-related errors if encountered, after retrying the call multiple times. Only use if SQL storage is not important for the given use case, since this will only log, but not raise, an exception if it’s encountered while saving to the DB or loading from it.
 Type
bool

early_stopping_strategy
: Optional[ax.early_stopping.strategies.BaseEarlyStoppingStrategy] = None¶

trial_type
¶ alias of
ax.core.trial.Trial
