ax

class ax.Arm(parameters, name=None)[source]

Base class for defining arms.

Randomization in experiments assigns units to a given arm. Thus, the arm encapsulates the parametrization needed by the unit.
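
A minimal usage sketch (the parameter names below are illustrative):

    from ax import Arm

    arm = Arm(parameters={"lr": 0.01, "batch_size": 32}, name="0_0")
    arm.signature                 # md5 hash of the parameterization
    arm.name_or_short_signature   # "0_0" here, since a name was given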

clone(clear_name=False)[source]

Create a copy of this arm.

Parameters

clear_name (bool) – whether this cloned copy should set its name to None instead of the name of the arm being cloned. Defaults to False.

Return type

Arm

property has_name

Return true if arm’s name is not None.

Return type

bool

static md5hash(parameters)[source]

Return unique identifier for arm’s parameters.

Parameters

parameters (Dict[str, Union[str, bool, float, int, None]]) – Parameterization; mapping of param name to value.

Return type

str

Returns

Hash of arm’s parameters.

property name

Get arm name. Throws if name is None.

Return type

str

property name_or_short_signature

Returns the arm name if it exists; otherwise the last 4 characters of the hash.

Used for presentation of candidates (e.g. plotting and tables), where the candidates do not yet have names (since names are automatically set upon addition to a trial).

Return type

str

property parameters

Get mapping from parameter names to values.

Return type

Dict[str, Union[str, bool, float, int, None]]

property signature

Get unique representation of an arm.

Return type

str

class ax.BatchTrial(experiment, generator_run=None, trial_type=None, optimize_for_power=False, ttl_seconds=None)[source]

Batched trial that has multiple attached arms, meant to be deployed and evaluated together, and possibly arm weights, which are a measure of how much of the total resources allocated to evaluating a batch should go towards evaluating the specific arm. For instance, for field experiments the weights could describe the fraction of the total experiment population assigned to the different treatment arms. Interpretation of the weights is defined in Runner.

NOTE: A BatchTrial is not just a trial with many arms; it is a trial, for which it is important that the arms are evaluated simultaneously, e.g. in an A/B test where the evaluation results are subject to nonstationarity. For cases where multiple arms are evaluated separately and independently of each other, use multiple Trial objects with a single arm each.

Parameters
  • experiment (Experiment) – Experiment, to which this trial is attached

  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • optimize_for_power (Optional[bool]) – Whether to optimize the weights of arms in this trial such that the experiment’s power to detect effects of a certain size is as high as possible. Refer to the documentation of BatchTrial.set_status_quo_and_optimize_power for more detail.

  • ttl_seconds (Optional[int]) – If specified, trials will be considered failed after this many seconds since the time the trial was run, unless the trial is completed before then. Meant to be used to detect ‘dead’ trials, for which the evaluation process might have crashed etc., and which should be considered failed after their ‘time to live’ has passed.
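
A minimal sketch of creating and deploying a batch trial (assuming `exp` is an existing Experiment with a runner attached; arm and model names are illustrative):

    from ax import Models

    sobol = Models.SOBOL(search_space=exp.search_space)
    batch = exp.new_batch_trial(generator_run=sobol.gen(n=5))
    batch.run()   # deploys all arms together via the experiment's runner
    batch.mark_arm_abandoned(arm_name=batch.arms[0].name, reason="bad deployment")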

property abandoned_arms

List of arms that have been abandoned within this trial

Return type

List[Arm]

property arm_weights

The set of arms and associated weights for the trial.

These are constructed by merging the arms and weights from each generator run that is attached to the trial.

Return type

MutableMapping[Arm, float]

property arms

All arms contained in the trial.

Return type

List[Arm]

property arms_by_name

Map from arm name to object for all arms in trial.

Return type

Dict[str, Arm]

clone()[source]

Clone the trial.

Return type

BatchTrial

Returns

A new instance of the trial.

property experiment

The experiment this batch belongs to.

Return type

Experiment

property generator_run_structs

List of generator run structs attached to this trial.

Each struct holds the generator_run object and the weight with which it was added.

Return type

List[GeneratorRunStruct]

property index

The index of this batch within the experiment’s batch list.

Return type

int

property is_factorial

Return true if the trial’s arms are a factorial design with no linked factors.

Return type

bool

mark_arm_abandoned(arm_name, reason=None)[source]

Mark an arm abandoned.

Usually done after deployment when one arm causes issues but the user wants to continue running the other arms in the batch.

Parameters
  • arm_name (str) – The name of the arm to abandon.

  • reason (Optional[str]) – The reason for abandoning the arm.

Return type

BatchTrial

Returns

The batch instance.

normalized_arm_weights(total=1, trunc_digits=None)[source]

Returns arms with a new set of weights normalized to the given total.

This method is useful for many runners where we need to normalize weights to a certain total without mutating the weights attached to a trial.

Parameters
  • total (float) – The total weight to which to normalize. Default is 1, in which case arm weights can be interpreted as probabilities.

  • trunc_digits (Optional[int]) – The number of digits to keep. If the resulting total weight is not equal to total, weight is re-allocated in such a way as to maintain relative weights as closely as possible.

Return type

MutableMapping[Arm, float]

Returns

Mapping from arms to the new set of weights.
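
For example, a batch whose two arms carry raw weights 2.0 and 1.0 (names hypothetical) could be normalized as follows:

    probs = batch.normalized_arm_weights()              # total defaults to 1
    # -> {arm_a: 0.667, arm_b: 0.333} (approximately)
    pcts = batch.normalized_arm_weights(total=100, trunc_digits=1)
    # -> weights truncated to one digit, then re-allocated so they still sum to 100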

run()[source]

Deploys the trial according to the behavior on the runner.

The runner returns a run_metadata dict containing metadata of the deployment process. It also returns a deployed_name of the trial within the system to which it was deployed. Both of these fields are set on the trial.

Return type

BatchTrial

Returns

The trial instance.

property status_quo

The control arm for this batch.

Return type

Optional[Arm]

unset_status_quo()[source]

Set the status quo to None.

Return type

None

property weights

Weights corresponding to arms contained in the trial.

Return type

List[float]

class ax.ChoiceParameter(name, parameter_type, values, is_ordered=False, is_task=False, is_fidelity=False, target_value=None)[source]

Parameter object that specifies a discrete set of values.
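
A minimal construction sketch (the parameter name and values are illustrative):

    from ax import ChoiceParameter, ParameterType

    optimizer = ChoiceParameter(
        name="optimizer",
        parameter_type=ParameterType.STRING,
        values=["adam", "sgd", "rmsprop"],
    )
    optimizer.validate("adam")     # True
    optimizer.validate("lbfgs")    # False, not in the allowed values
    optimizer.add_values(["adagrad"])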

add_values(values)[source]

Add input list to the set of allowed values for parameter.

Cast all input values to the parameter type.

Parameters

values (List[Union[str, bool, float, int, None]]) – Values being added to the allowed list.

Return type

ChoiceParameter

set_values(values)[source]

Set the list of allowed values for parameter.

Cast all input values to the parameter type.

Parameters

values (List[Union[str, bool, float, int, None]]) – New list of allowed values.

Return type

ChoiceParameter

validate(value)[source]

Checks that the input is in the list of allowed values.

Parameters

value (Union[str, bool, float, int, None]) – Value being checked.

Return type

bool

Returns

True if valid, False otherwise.

class ax.ComparisonOp[source]

Class for enumerating comparison operations.

class ax.Data(df=None, description=None)[source]

Class storing data for an experiment.

The dataframe is retrieved via the df property. The data can be stored to an external store for future use by attaching it to an experiment using experiment.attach_data() (this requires a description to be set.)

df

DataFrame with underlying data, and required columns.

description

Human-readable description of data.

static column_data_types()[source]

Type specification for all supported columns.

Return type

Dict[str, Type]

property df_hash

Compute hash of pandas DataFrame.

This first serializes the DataFrame and computes the md5 hash on the resulting string. Note that this may cause performance issues for very large DataFrames.

Parameters

df – The DataFrame for which to compute the hash.

Returns

The hash of the DataFrame.

Return type

str

static from_evaluations(evaluations, trial_index, sample_sizes=None, start_time=None, end_time=None)[source]

Convert dict of evaluations to Ax data object.

Parameters
  • evaluations (Dict[str, Dict[str, Tuple[float, Optional[float]]]]) – Map from arm name to metric outcomes (itself a mapping of metric names to tuples of mean and optionally a SEM).

  • trial_index (int) – Trial index to which this data belongs.

  • sample_sizes (Optional[Dict[str, int]]) – Number of samples collected for each arm.

  • start_time (Optional[int]) – Optional start time of run of the trial that produced this data, in milliseconds.

  • end_time (Optional[int]) – Optional end time of run of the trial that produced this data, in milliseconds.

Return type

Data

Returns

Ax Data object.
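
A minimal sketch of building a Data object from raw evaluations (arm and metric names are illustrative):

    from ax import Data

    data = Data.from_evaluations(
        evaluations={
            "0_0": {"accuracy": (0.82, 0.01), "latency": (120.0, None)},
            "0_1": {"accuracy": (0.79, 0.01), "latency": (95.0, None)},
        },
        trial_index=0,
    )
    data.df   # DataFrame with columns such as arm_name, metric_name, mean, sem, trial_index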

static from_fidelity_evaluations(evaluations, trial_index, sample_sizes=None, start_time=None, end_time=None)[source]

Convert dict of fidelity evaluations to Ax data object.

Parameters
  • evaluations (Dict[str, List[Tuple[Dict[str, Union[str, bool, float, int, None]], Dict[str, Tuple[float, Optional[float]]]]]]) – Map from arm name to list of (fidelity, metric outcomes) (where metric outcomes is itself a mapping of metric names to tuples of mean and SEM).

  • trial_index (int) – Trial index to which this data belongs.

  • sample_sizes (Optional[Dict[str, int]]) – Number of samples collected for each arm.

  • start_time (Optional[int]) – Optional start time of run of the trial that produced this data, in milliseconds.

  • end_time (Optional[int]) – Optional end time of run of the trial that produced this data, in milliseconds.

Return type

Data

Returns

Ax Data object.

static required_columns()[source]

Names of required columns.

Return type

Set[str]

class ax.Experiment(search_space, name=None, optimization_config=None, tracking_metrics=None, runner=None, status_quo=None, description=None, is_test=False, experiment_type=None)[source]

Base class for defining an experiment.

add_tracking_metric(metric)[source]

Add a new metric to the experiment.

Parameters

metric (Metric) – Metric to be added.

Return type

Experiment

add_tracking_metrics(metrics)[source]

Add a list of new metrics to the experiment.

If any of the metrics are already defined on the experiment, we raise an error and don’t add any of them to the experiment.

Parameters

metrics (List[Metric]) – Metrics to be added.

Return type

Experiment

property arms_by_name

The arms belonging to this experiment, by their name.

Return type

Dict[str, Arm]

property arms_by_signature

The arms belonging to this experiment, by their signature.

Return type

Dict[str, Arm]

attach_data(data, combine_with_last_data=False)[source]

Attach data to experiment. Stores data in experiment._data_by_trial, to be looked up via experiment.lookup_data_for_trial.

Parameters
  • data (Data) – Data object to store.

  • combine_with_last_data (bool) – By default, attached data is identified by its timestamp, and experiment.lookup_data_for_trial returns the data with the most recent timestamp. In some cases, however, the goal is to combine all data attached for a trial into a single Data object. To achieve that, set this flag to True on every call to attach_data after the initial data is attached to the trials. The newly attached data will then be appended to the existing data rather than stored as a separate object, and lookup_data_for_trial will return the combined data object rather than just the most recently added data. This also validates that the newly added data does not contain observations for metrics that already have observations in the most recently stored data.

Return type

int

Returns

Timestamp of storage in millis.
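
A minimal sketch of attaching and later combining data (`experiment`, `first_data`, and `second_data` are assumed to exist):

    ts = experiment.attach_data(first_data)    # storage timestamp in millis
    experiment.attach_data(second_data, combine_with_last_data=True)
    combined_data, ts = experiment.lookup_data_for_trial(trial_index=0)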

property data_by_trial

Data stored on the experiment, indexed by trial index and storage time.

First key is trial index and second key is storage time in milliseconds. For a given trial, data is ordered by storage time, so first added data will appear first in the list.

Return type

Dict[int, OrderedDict]

property default_trial_type

Default trial type assigned to trials in this experiment.

In the base experiment class this is always None. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type

Optional[str]

property experiment_type

The type of the experiment.

Return type

Optional[str]

fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters
  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the experiment.

fetch_trials_data(trial_indices, metrics=None, **kwargs)[source]

Fetches data for specific trials on the experiment.

Parameters
  • trial_indices (Iterable[int]) – Indices of trials, for which to fetch data.

  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – Keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the specific trials on the experiment.

get_trials_by_indices(trial_indices)[source]

Grabs trials on this experiment by their indices.

Return type

List[BaseTrial]

property has_name

Return true if experiment’s name is not None.

Return type

bool

property is_simple_experiment

Whether this experiment is a regular Experiment or a SimpleExperiment subclass.

lookup_data_for_trial(trial_index)[source]

Lookup stored data for a specific trial.

Returns latest data object, and its storage timestamp, present for this trial. Returns empty data and -1 if no data present.

Parameters

trial_index (int) – The index of the trial to lookup data for.

Return type

Tuple[Data, int]

Returns

The requested data object, and its storage timestamp in milliseconds.

lookup_data_for_ts(timestamp)[source]

Collect data for all trials stored at this timestamp.

Useful when data for many trials was fetched and stored simultaneously and the user wants to retrieve the same collection of data later.

Can also be used to lookup specific data for a single trial when storage time is known.

Parameters

timestamp (int) – Timestamp in millis at which data was stored.

Return type

Data

Returns

Data object with all data stored at the timestamp.

property metrics

The metrics attached to the experiment.

Return type

Dict[str, Metric]

property name

Get experiment name. Throws if name is None.

Return type

str

new_batch_trial(generator_run=None, trial_type=None, optimize_for_power=False, ttl_seconds=None)[source]

Create a new batch trial associated with this experiment.

Parameters
  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • optimize_for_power (Optional[bool]) – Whether to optimize the weights of arms in this trial such that the experiment’s power to detect effects of a certain size is as high as possible. Refer to the documentation of BatchTrial.set_status_quo_and_optimize_power for more detail.

  • ttl_seconds (Optional[int]) – If specified, trials will be considered failed after this many seconds since the time the trial was run, unless the trial is completed before then. Meant to be used to detect ‘dead’ trials, for which the evaluation process might have crashed etc., and which should be considered failed after their ‘time to live’ has passed.

Return type

BatchTrial

new_trial(generator_run=None, trial_type=None, ttl_seconds=None)[source]

Create a new trial associated with this experiment.

Parameters
  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. Trial has only one generator run (and thus arm) attached to it. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • ttl_seconds (Optional[int]) – If specified, trials will be considered failed after this many seconds since the time the trial was run, unless the trial is completed before then. Meant to be used to detect ‘dead’ trials, for which the evaluation process might have crashed etc., and which should be considered failed after their ‘time to live’ has passed.

Return type

Trial

property num_abandoned_arms

How many arms attached to this experiment are abandoned.

Return type

int

property num_trials

How many trials are associated with this experiment.

Return type

int

property optimization_config

The experiment’s optimization config.

Return type

Optional[OptimizationConfig]

property parameters

The parameters in the experiment’s search space.

Return type

Dict[str, Parameter]

remove_tracking_metric(metric_name)[source]

Remove a metric that already exists on the experiment.

Parameters

metric_name (str) – Unique name of metric to remove.

Return type

Experiment

reset_runners(runner)[source]

Replace the runners of all candidate trials.

Parameters

runner (Runner) – New runner to replace with.

Return type

None

runner_for_trial(trial)[source]

The default runner to use for a given trial.

In the base experiment class, this is always the default experiment runner. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type

Optional[Runner]

property search_space

The search space for this experiment.

When setting a new search space, all parameter names and types must be preserved. However, if no trials have been created, all modifications are allowed.

Return type

SearchSpace

property status_quo

The existing arm that new arms will be compared against.

Return type

Optional[Arm]

property sum_trial_sizes

Sum of numbers of arms attached to each trial in this experiment.

Return type

int

supports_trial_type(trial_type)[source]

Whether this experiment allows trials of the given type.

The base experiment class only supports None. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type

bool

property time_created

Creation time of the experiment.

Return type

datetime

property trial_indices_by_status

Indices of trials associated with the experiment, grouped by trial status.

Return type

Dict[TrialStatus, Set[int]]

property trials

The trials associated with the experiment.

NOTE: If some trials on this experiment specify their TTL, RUNNING trials will be checked for whether their TTL has elapsed during this call. Trials found to be past their TTL will be marked as FAILED.

Return type

Dict[int, BaseTrial]

property trials_by_status

Trials associated with the experiment, grouped by trial status.

Return type

Dict[TrialStatus, List[BaseTrial]]

property trials_expecting_data

The list of all trials for which data has arrived or is expected to arrive.

Return type

List[BaseTrial]

update_tracking_metric(metric)[source]

Redefine a metric that already exists on the experiment.

Parameters

metric (Metric) – New metric definition.

Return type

Experiment

class ax.FixedParameter(name, parameter_type, value, is_fidelity=False, target_value=None)[source]

Parameter object that specifies a single fixed value.

validate(value)[source]

Checks that the input is equal to the fixed value.

Parameters

value (Union[str, bool, float, int, None]) – Value being checked.

Return type

bool

Returns

True if valid, False otherwise.

class ax.GeneratorRun(arms, weights=None, optimization_config=None, search_space=None, model_predictions=None, best_arm_predictions=None, type=None, fit_time=None, gen_time=None, model_key=None, model_kwargs=None, bridge_kwargs=None, gen_metadata=None, model_state_after_gen=None, generation_step_index=None, candidate_metadata_by_arm_signature=None)[source]

An object that represents a single run of a generator.

This object is created each time the gen method of a generator is called. It stores the arms and (optionally) weights that were generated by the run. When we add a generator run to a trial, its arms and weights will be merged with those from previous generator runs that were already attached to the trial.

property arm_signatures

Returns signatures of arms generated by this run.

Return type

Set[str]

property arm_weights

Mapping from arms to weights (order matches order in arms property).

Return type

MutableMapping[Arm, float]

property arms

Returns arms generated by this run.

Return type

List[Arm]

property candidate_metadata_by_arm_signature

Retrieves model-produced candidate metadata as a mapping from arm name (for the arm the candidate became when added to experiment) to the metadata dict.

Return type

Optional[Dict[str, Optional[Dict[str, Any]]]]

clone()[source]

Return a deep copy of a GeneratorRun.

Return type

GeneratorRun

property gen_metadata

Returns metadata generated by this run.

Return type

Optional[Dict[str, Any]]

property generator_run_type

The type of the generator run.

Return type

Optional[str]

property index

The index of this generator run within a trial’s list of generator run structs. This field is set when the generator run is added to a trial.

Return type

Optional[int]

property optimization_config

The optimization config used during generation of this run.

Return type

Optional[OptimizationConfig]

property param_df

Constructs a Pandas dataframe with the parameter values for each arm.

Useful for inspecting the contents of a generator run.

Returns

a dataframe with the generator run’s arms.

Return type

pd.DataFrame

property search_space

The search space used during generation of this run.

Return type

Optional[SearchSpace]

split_by_arm(populate_all_fields=False)[source]

Return a list of generator runs, each with all the metadata of generator run, but only with one of its arms. Useful when splitting a single generator run into multiple 1-arm trials.

Parameters

populate_all_fields (bool) – By default, split_by_arm only sets some fields on the new, ‘split’ generator runs, in order to avoid creating multiple large objects and increasing the size of an experiment object. To force-populate all fields of the ‘split’ generator runs, set ‘populate_all_fields’ to True.

Return type

List[GeneratorRun]

property time_created

Creation time of the generator run.

Return type

datetime

property weights

Returns weights associated with arms generated by this run.

Return type

List[float]

class ax.Metric(name, lower_is_better=None)[source]

Base class for representing metrics.

lower_is_better

Flag for metrics which should be minimized.

clone()[source]

Create a copy of this Metric.

Return type

Metric

fetch_experiment_data(experiment, **kwargs)[source]

Fetch this metric’s data for an experiment.

Default behavior is to fetch data from all trials expecting data and concatenate the results.

Return type

Data

classmethod fetch_experiment_data_multi(experiment, metrics, trials=None, **kwargs)[source]

Fetch multiple metrics data for an experiment.

Default behavior calls fetch_trial_data_multi for each trial. Subclasses should override to batch data computation across trials + metrics.

Return type

Data

fetch_trial_data(trial, **kwargs)[source]

Fetch data for one trial.

Return type

Data

classmethod fetch_trial_data_multi(trial, metrics, **kwargs)[source]

Fetch multiple metrics data for one trial.

Default behavior calls fetch_trial_data for each metric. Subclasses should override this to batch data computation across multiple metrics.

Return type

Data

property name

Get name of metric.

Return type

str

class ax.Models[source]

Registry of available models.

Uses MODEL_KEY_TO_MODEL_SETUP to retrieve settings for model and model bridge, by the key stored in the enum value.

To instantiate a model in this enum, simply call an enum member like so: Models.SOBOL(search_space=search_space) or Models.GPEI(experiment=experiment, data=data). Keyword arguments specified to the call will be passed into the model or the model bridge constructors according to their keyword.

For instance, Models.SOBOL(search_space=search_space, scramble=False) will instantiate a RandomModelBridge(search_space=search_space) with a SobolGenerator(scramble=False) underlying model.
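
A minimal sketch of instantiating models from the registry (`experiment` and `data` are assumed to exist):

    from ax import Models

    sobol = Models.SOBOL(search_space=experiment.search_space, scramble=False)
    sobol_run = sobol.gen(n=10)     # GeneratorRun of 10 quasi-random arms

    gpei = Models.GPEI(experiment=experiment, data=data)
    gpei_run = gpei.gen(n=3)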

property model_bridge_class

Type of ModelBridge used for the given model+bridge setup.

Return type

Type[ModelBridge]

property model_class

Type of Model used for the given model+bridge setup.

Return type

Type[Model]

view_defaults()[source]

Obtains the default keyword arguments for the model and the model bridge specified through the Models enum, for ease of use in a notebook environment, since models and bridges cannot be inspected directly through the enum.

Return type

Tuple[Dict[str, Any], Dict[str, Any]]

Returns

A tuple of default keyword arguments for the model and the model bridge.

view_kwargs()[source]

Obtains annotated keyword arguments that the model and the model bridge constructors (corresponding to a given member of the Models enum) expect.

Return type

Tuple[Dict[str, Any], Dict[str, Any]]

Returns

A tuple of annotated keyword arguments for the model and the model bridge.

class ax.Objective(metric, minimize=None)[source]

Base class for representing an objective.

minimize

If True, minimize metric.

clone()[source]

Create a copy of the objective.

Return type

Objective

property metric

Get the objective metric.

Return type

Metric

property metrics

Get a list of objective metrics.

Return type

List[Metric]

class ax.OptimizationConfig(objective, outcome_constraints=None)[source]

An optimization configuration, which comprises an objective and outcome constraints.

There is no minimum or maximum number of outcome constraints, but an individual metric can have at most two constraints, which is how we represent metrics with both upper and lower bounds.

clone()[source]

Make a copy of this optimization config.

Return type

OptimizationConfig

property objective

Get objective.

Return type

Objective

property outcome_constraints

Get outcome constraints.

Return type

List[OutcomeConstraint]

class ax.OptimizationLoop(experiment, total_trials=20, arms_per_trial=1, random_seed=None, wait_time=0, run_async=False, generation_strategy=None)[source]

Managed optimization loop, in which Ax oversees deployment of trials and gathering data.

full_run()[source]

Runs full optimization loop as defined in the provided optimization plan.

Return type

OptimizationLoop

get_best_point()[source]

Obtains the best point encountered in the course of this optimization.

Return type

Tuple[Dict[str, Union[str, bool, float, int, None]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]

get_current_model()[source]

Obtain the most recently used model in optimization.

Return type

Optional[ModelBridge]

run_trial()[source]

Run a single step of the optimization plan.

Return type

None

static with_evaluation_function(parameters, evaluation_function, experiment_name=None, objective_name=None, minimize=False, parameter_constraints=None, outcome_constraints=None, total_trials=20, arms_per_trial=1, wait_time=0, random_seed=None, generation_strategy=None)[source]

Constructs a synchronous OptimizationLoop using an evaluation function.

Return type

OptimizationLoop

classmethod with_runners_and_metrics(parameters, path_to_runner, paths_to_metrics, experiment_name=None, objective_name=None, minimize=False, parameter_constraints=None, outcome_constraints=None, total_trials=20, arms_per_trial=1, wait_time=0, random_seed=None)[source]

Constructs an asynchronous OptimizationLoop using Ax runners and metrics.

Return type

OptimizationLoop

class ax.OrderConstraint(lower_parameter, upper_parameter)[source]

Constraint object for specifying one parameter to be smaller than another.

clone()[source]

Clone.

Return type

OrderConstraint

clone_with_transformed_parameters(transformed_parameters)[source]

Clone, but replace parameters with transformed versions.

Return type

OrderConstraint

property constraint_dict

Weights on parameters for linear constraint representation.

Return type

Dict[str, float]

property lower_parameter

Parameter with lower value.

Return type

Parameter

property parameters

Parameters.

Return type

List[Parameter]

property upper_parameter

Parameter with higher value.

Return type

Parameter

class ax.OutcomeConstraint(metric, op, bound, relative=True)[source]

Base class for representing outcome constraints.

Outcome constraints may be of the form metric >= bound or metric <= bound, where the bound can be expressed as an absolute measurement or relative to the status quo (if applicable).

metric

Metric to constrain.

op

Specifies whether metric should be greater or equal to, or less than or equal to, some bound.

bound

The bound in the constraint.

relative

Whether the bound is expressed on an absolute or relative scale. If relative, the bound is the acceptable percent change.

clone()[source]

Create a copy of this OutcomeConstraint.

Return type

OutcomeConstraint

class ax.Parameter[source]

is_valid_type(value)[source]

Whether a given value’s type is allowed by this parameter.

Return type

bool

property python_type

The python type for the corresponding ParameterType enum.

Used primarily for casting values of unknown type to conform to that of the parameter.

Return type

Union[Type[int], Type[float], Type[str], Type[bool]]

class ax.ParameterConstraint(constraint_dict, bound)[source]

Base class for linear parameter constraints.

Constraints are expressed using a map from parameter name to weight followed by a bound.

The constraint is satisfied if w * v <= b, where:
  • w is the vector of parameter weights,

  • v is the vector of parameter values,

  • b is the specified bound, and

  • * is the dot product operator.
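
For example, the constraint x <= y can be encoded as 1.0 * x + (-1.0) * y <= 0 (parameter names hypothetical):

    from ax import ParameterConstraint

    pc = ParameterConstraint(constraint_dict={"x": 1.0, "y": -1.0}, bound=0.0)
    pc.check({"x": 0.3, "y": 0.5})   # True:  0.3 - 0.5 <= 0
    pc.check({"x": 0.7, "y": 0.5})   # False: 0.7 - 0.5 >  0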

property bound

Get bound of the inequality of the constraint.

Return type

float

check(parameter_dict)[source]

Whether or not the set of parameter values satisfies the constraint.

Does a weighted sum of the parameter values based on the constraint_dict and checks that the sum is less than the bound.

Parameters

parameter_dict (Dict[str, Union[int, float]]) – Map from parameter name to parameter value.

Return type

bool

Returns

Whether the constraint is satisfied.

clone()[source]

Clone.

Return type

ParameterConstraint

clone_with_transformed_parameters(transformed_parameters)[source]

Clone, but replace parameters with transformed versions.

Return type

ParameterConstraint

property constraint_dict

Get mapping from parameter names to weights.

Return type

Dict[str, float]

class ax.ParameterType[source]

An enumeration.

class ax.RangeParameter(name, parameter_type, lower, upper, log_scale=False, digits=None, is_fidelity=False, target_value=None)[source]

Parameter object that specifies a continuous numerical range of values.
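
A minimal construction sketch (the parameter name and bounds are illustrative):

    from ax import ParameterType, RangeParameter

    lr = RangeParameter(
        name="lr",
        parameter_type=ParameterType.FLOAT,
        lower=1e-5,
        upper=1e-1,
        log_scale=True,
    )
    lr.validate(3e-3)   # True
    lr.validate(0.5)    # False, above the upper bound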

property digits

Number of digits to round values to for float type.

Upper and lower bound are re-cast after this property is changed.

Return type

Optional[int]

is_valid_type(value)[source]

Same as default except allows floats whose value is an int for Int parameters.

Return type

bool

property log_scale

Whether to sample in log space when drawing random values of the parameter.

Return type

bool

property lower

Lower bound of the parameter range.

Value is cast to parameter type upon set and also validated to ensure the bound is strictly less than upper bound.

Return type

float

update_range(lower=None, upper=None)[source]

Set the range to the given values.

If lower or upper is not provided, it will be left at its current value.

Parameters
  • lower (Optional[float]) – New lower bound of the range.

  • upper (Optional[float]) – New upper bound of the range.

Return type

RangeParameter

property upper

Upper bound of the parameter range.

Value is cast to parameter type upon set and also validated to ensure the bound is strictly greater than lower bound.

Return type

float

validate(value)[source]

Returns True if input is a valid value for the parameter.

Checks that value is of the right type and within the valid range for the parameter. Returns False if value is None.

Parameters

value (Union[str, bool, float, int, None]) – Value being checked.

Return type

bool

Returns

True if valid, False otherwise.

class ax.Runner[source]

Abstract base class for custom runner classes.

abstract run(trial)[source]

Deploys a trial based on custom runner subclass implementation.

Parameters

trial (BaseTrial) – The trial to deploy.

Return type

Dict[str, Any]

Returns

Dict of run metadata from the deployment process.

property staging_required

Whether the trial goes to staged or running state once deployed.

Return type

bool

stop(trial)[source]

Stop a trial based on custom runner subclass implementation.

Optional to implement.

Parameters

trial (BaseTrial) – The trial to stop.

Return type

None

class ax.SearchSpace(parameters, parameter_constraints=None)[source]

Base object for defining a search space.

Contains a set of Parameter objects, each of which has a name, type, and set of valid values. The search space also contains a set of ParameterConstraint objects, which can be used to define restrictions across parameters (e.g. p_a < p_b).
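
A minimal sketch of a two-parameter search space with an order constraint (names are illustrative):

    from ax import OrderConstraint, ParameterType, RangeParameter, SearchSpace

    x = RangeParameter(name="x", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0)
    y = RangeParameter(name="y", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0)
    search_space = SearchSpace(
        parameters=[x, y],
        parameter_constraints=[OrderConstraint(lower_parameter=x, upper_parameter=y)],  # x <= y
    )
    search_space.check_membership({"x": 0.2, "y": 0.8})   # True
    search_space.check_membership({"x": 0.9, "y": 0.1})   # False, violates x <= y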

cast_arm(arm)[source]

Cast parameterization of given arm to the types in this SearchSpace.

For each parameter in the given arm, cast it to the proper type specified in this search space. Throws if there is a mismatch in parameter names. This is mostly useful for int/float, which users can be sloppy with when writing parameterizations by hand.

Parameters

arm (Arm) – Arm to cast.

Return type

Arm

Returns

New cast arm.

check_membership(parameterization, raise_error=False)[source]

Whether the given parameterization belongs in the search space.

Checks that the given parameter values have the same name/type as search space parameters, are contained in the search space domain, and satisfy the parameter constraints.

Parameters
  • parameterization (Dict[str, Union[str, bool, float, int, None]]) – Dict from parameter name to value to validate.

  • raise_error (bool) – If true and the parameterization does not belong, raises an error with a detailed explanation of why.

Return type

bool

Returns

Whether the parameterization is contained in the search space.

check_types(parameterization, allow_none=True, raise_error=False)[source]

Checks that the given parameterization’s types match the search space.

Checks that the names of the parameterization match those specified in the search space, and the given values are of the correct type.

Parameters
  • parameterization (Dict[str, Union[str, bool, float, int, None]]) – Dict from parameter name to value to validate.

  • allow_none (bool) – Whether None is a valid parameter value.

  • raise_error (bool) – If true and the parameterization does not belong, raises an error with a detailed explanation of why.

Return type

bool

Returns

Whether the parameterization has valid types.

construct_arm(parameters=None, name=None)[source]

Construct a new arm using the given parameters and name. Any missing parameters fall back to the experiment defaults, represented as None.

Return type

Arm

out_of_design_arm()[source]

Create a default out-of-design arm.

An out-of-design arm contains values for some parameters which are outside of the search space. In the modeling conversion, these parameters are all stripped down to an empty dictionary, since the point is already outside of the modeled space.

Return type

Arm

Returns

New arm w/ null parameter values.

class ax.SimpleExperiment(search_space, name=None, objective_name=None, evaluation_function=<function unimplemented_evaluation_function>, minimize=False, outcome_constraints=None, status_quo=None)[source]

Simplified experiment class with defaults.
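
A minimal sketch of the developer-API pattern of passing an evaluation function that maps a parameterization to metric outcomes (the Booth function and all names below are illustrative):

    from ax import ParameterType, RangeParameter, SearchSpace, SimpleExperiment

    def booth(parameterization, weight=None):
        x, y = parameterization["x"], parameterization["y"]
        return {"booth": ((x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2, 0.0)}

    exp = SimpleExperiment(
        name="booth_experiment",
        search_space=SearchSpace(parameters=[
            RangeParameter(name="x", parameter_type=ParameterType.FLOAT, lower=-10.0, upper=10.0),
            RangeParameter(name="y", parameter_type=ParameterType.FLOAT, lower=-10.0, upper=10.0),
        ]),
        evaluation_function=booth,
        objective_name="booth",
        minimize=True,
    )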

add_tracking_metric(metric)[source]

Add a new metric to the experiment.

Parameters

metric (Metric) – Metric to be added.

Return type

SimpleExperiment

eval()[source]

Evaluate all arms in the experiment with the evaluation function passed as argument to this SimpleExperiment.

Return type

Data

eval_trial(trial)[source]

Evaluate trial arms with the evaluation function of this experiment.

Parameters

trial (BaseTrial) – The trial whose arms to evaluate.

Return type

Data

property evaluation_function

Get the evaluation function.

Return type

Callable[[Dict[str, Union[str, bool, float, int, None]], Optional[float]], Union[Dict[str, Tuple[float, Optional[float]]], Tuple[float, Optional[float]], float, List[Tuple[Dict[str, Union[str, bool, float, int, None]], Dict[str, Tuple[float, Optional[float]]]]]]]

fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters
  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the experiment.

property has_evaluation_function

Whether this SimpleExperiment has a valid evaluation function attached.

Return type

bool

property is_simple_experiment

Whether this experiment is a regular Experiment or a SimpleExperiment subclass.

update_tracking_metric(metric)[source]

Redefine a metric that already exists on the experiment.

Parameters

metric (Metric) – New metric definition.

Return type

SimpleExperiment

class ax.SumConstraint(parameters, is_upper_bound, bound)[source]

Constraint on the sum of parameters being greater or less than a bound.

clone()[source]

Clone.

To use the same constraint, we need to reconstruct the original bound. We do this by re-applying the original bound weighting.

Return type

SumConstraint

clone_with_transformed_parameters(transformed_parameters)[source]

Clone, but replace parameters with transformed versions.

Return type

SumConstraint

property constraint_dict

Weights on parameters for linear constraint representation.

Return type

Dict[str, float]

property op

Whether the sum is constrained by a <= or >= inequality.

Return type

ComparisonOp

property parameters

Parameters.

Return type

List[Parameter]

class ax.Trial(experiment, generator_run=None, trial_type=None, ttl_seconds=None)[source]

Trial that only has one attached arm and no arm weights.

Parameters
  • experiment (Experiment) – Experiment, to which this trial is attached.

  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. Trial has only one generator run (of just one arm) attached to it. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • ttl_seconds (Optional[int]) – If specified, trials will be considered failed after this many seconds since the time the trial was run, unless the trial is completed before then. Meant to be used to detect ‘dead’ trials, for which the evaluation process might have crashed etc., and which should be considered failed after their ‘time to live’ has passed.

property abandoned_arms

Abandoned arms attached to this trial.

Return type

List[Arm]

property arm

The arm associated with this trial.

Return type

Optional[Arm]

property arms

All arms attached to this trial.

Returns

A list of the single arm attached to this trial if there is one, else None.

Return type

List[Arm]

property arms_by_name

Dictionary of all arms attached to this trial with their names as keys.

Returns

A dictionary from the single arm name to the arm if one is attached to this trial, else None.

Return type

Dict[str, Arm]

property generator_run

Generator run attached to this trial.

Return type

Optional[GeneratorRun]

get_metric_mean(metric_name)[source]

Metric mean for the arm attached to this trial, retrieved from the latest data available for the metric for the trial.

Return type

float

property objective_mean

Objective mean for the arm attached to this trial, retrieved from the latest data available for the objective for the trial.

Note: the retrieved objective is the experiment-level objective at the time of the call to objective_mean, which is not necessarily the objective that was set at the time the trial was created or ran.

Return type

float

ax.optimize(parameters, evaluation_function, experiment_name=None, objective_name=None, minimize=False, parameter_constraints=None, outcome_constraints=None, total_trials=20, arms_per_trial=1, random_seed=None, generation_strategy=None)[source]

Construct and run a full optimization loop.

Return type

Tuple[Dict[str, Union[str, bool, float, int, None]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]], Experiment, Optional[ModelBridge]]
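
A minimal end-to-end sketch (the parameter names and the quadratic objective are illustrative):

    from ax import optimize

    best_parameters, best_values, experiment, model = optimize(
        parameters=[
            {"name": "x1", "type": "range", "bounds": [-10.0, 10.0]},
            {"name": "x2", "type": "range", "bounds": [-10.0, 10.0]},
        ],
        evaluation_function=lambda p: (p["x1"] - 2.0) ** 2 + (p["x2"] + 3.0) ** 2,
        minimize=True,
        total_trials=20,
    )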

ax.save(experiment, filepath)

Save experiment to file.

  1. Convert Ax experiment to JSON-serializable dictionary.

  2. Write to file.

Return type

None

ax.load(filepath)

Load experiment from file.

  1. Read file.

  2. Convert dictionary to Ax experiment instance.

Return type

Experiment
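
A round-trip sketch (the file path is illustrative):

    from ax import load, save

    save(experiment, "experiment.json")
    restored_experiment = load("experiment.json")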