# ax.core¶

## Core Classes¶

### Arm¶

class ax.core.arm.Arm(parameters, name=None)[source]

Base class for defining arms.

Randomization in experiments assigns units to a given arm. Thus, the arm encapsulates the parametrization needed by the unit.

clone(clear_name=False)[source]

Create a copy of this arm.

Parameters: clear_name (bool) – Whether the cloned copy should set its name to None instead of the name of the arm being cloned. Defaults to False.

Return type: Arm
has_name

Return true if arm’s name is not None.

Return type: bool
static md5hash(parameters)[source]

Return unique identifier for arm’s parameters.

Parameters: parameters (Dict[str, Union[str, bool, float, int, None]]) – Parameterization; mapping of parameter name to value.

Returns: Hash of the arm’s parameters.

Return type: str
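As a sketch of how such a parameter hash can be computed (a hypothetical re-implementation, not necessarily Ax’s exact serialization), one can serialize the parameter dict with sorted keys and take the md5 of the result:

```python
import hashlib
import json

def md5hash(parameters):
    # Serialize with sorted keys so that the insertion order of the dict
    # does not affect the hash, then take the md5 of the UTF-8 bytes.
    encoded = json.dumps(parameters, sort_keys=True).encode("utf-8")
    return hashlib.md5(encoded).hexdigest()
```

With sorted keys, two arms with the same parameterization hash identically regardless of dict ordering.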
name

Get arm name. Throws if name is None.

Return type: str
name_or_short_signature

Returns the arm name if it exists; otherwise the last 4 characters of the hash.

Used for presentation of candidates (e.g. plotting and tables), where the candidates do not yet have names (since names are automatically set upon addition to a trial).

Return type: str
parameters

Get mapping from parameter names to values.

Return type: Dict[str, Union[str, bool, float, int, None]]
signature

Get unique representation of an arm.

Return type: str

### Base¶

class ax.core.base.Base[source]

Bases: object

Base class for core Ax classes.

### BaseTrial¶

class ax.core.base_trial.BaseTrial(experiment, trial_type=None)[source]

Base class for representing trials.

Trials are containers for arms that are deployed together. There are two types of trials: regular Trial, which only contains a single arm, and BatchTrial, which contains an arbitrary number of arms.

abandoned_arms

All abandoned arms, associated with this trial.

Return type: List[Arm]
abandoned_reason
Return type: Optional[str]
arms
Return type: List[Arm]
arms_by_name
Return type: Dict[str, Arm]
assign_runner()[source]

Assigns default experiment runner if trial doesn’t already have one.

Return type: BaseTrial
deployed_name

Name of the experiment created in external framework.

This property is derived from the name field in run_metadata.

Return type: Optional[str]
experiment

The experiment this trial belongs to.

Return type: Experiment
fetch_data(metrics=None, **kwargs)[source]

Fetch data for this trial for all metrics on experiment.

Parameters: metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment. kwargs (Any) – Keyword args to pass to underlying metrics’ fetch data functions.

Returns: Data for this trial.

Return type: Data
index

The index of this trial within the experiment’s trial list.

Return type: int
is_abandoned

Whether this trial is abandoned.

Return type: bool
mark_abandoned(reason=None)[source]

Mark trial as abandoned.

Parameters: reason (Optional[str]) – The reason the trial was abandoned.

Returns: The trial instance.

Return type: BaseTrial
mark_completed()[source]

Mark trial as completed.

Returns: The trial instance.

Return type: BaseTrial
mark_dispatched()[source]

Mark trial as dispatched through the service API to await completion.

Returns: The trial instance.

Return type: BaseTrial
mark_failed()[source]

Mark trial as failed.

Returns: The trial instance.

Return type: BaseTrial
mark_running()[source]

Mark that the trial has started running.

Returns: The trial instance.

Return type: BaseTrial
mark_staged()[source]

Mark the trial as being staged for running.

Returns: The trial instance.

Return type: BaseTrial
run()[source]

Deploys the trial according to the behavior on the runner.

The runner returns a run_metadata dict containing metadata of the deployment process. It also returns a deployed_name of the trial within the system to which it was deployed. Both these fields are set on the trial.

Returns: The trial instance.

Return type: BaseTrial
run_metadata

Dict containing metadata from the deployment process.

This is set implicitly during trial.run().

Return type: Dict[str, Any]
runner

The runner object defining how to deploy the trial.

Return type: Optional[Runner]
status

The status of the trial in the experimentation lifecycle.

Return type: TrialStatus
time_completed

Completion time of the trial.

Return type: Optional[datetime]
time_created

Creation time of the trial.

Return type: datetime
time_run_started

Time the trial was started running (i.e. collecting data).

Return type: Optional[datetime]
time_staged

Staged time of the trial.

Return type: Optional[datetime]
trial_type

The type of the trial.

Relevant for experiments containing different kinds of trials (e.g. different deployment types).

Return type: Optional[str]
class ax.core.base_trial.TrialStatus[source]

Bases: enum.Enum

Enum of trial status.

General lifecycle of a trial is:

```
CANDIDATE --> STAGED --> RUNNING --> COMPLETED
          ------------->         --> FAILED (machine failure)
          -------------------------> ABANDONED (human-initiated action)
          --> DISPATCHED ---------->
```


Trials may be abandoned at any time prior to completion or failure via human intervention. The difference between abandonment and failure is that the former is human-directed, while the latter is an internal failure state.

Additionally, when trials are deployed, they may be in an intermediate staged state (e.g. scheduled but waiting for resources) or immediately transition to running.

When used through the service API, Ax proposes trials and expects the client application to complete them with evaluation data when available. In this case, a trial is set to ‘dispatched’ right after it is created, and when the user completes the trial with data, its status is set to ‘completed’.

ABANDONED = 5
CANDIDATE = 0
COMPLETED = 3
DISPATCHED = 6
FAILED = 2
RUNNING = 4
STAGED = 1
expecting_data

True if trial is expecting data.

Return type: bool
is_deployed

True if trial has been deployed but not completed.

Return type: bool
is_failed

True if this trial is a failed one.

Return type: bool
is_terminal

True if trial is in a terminal state (completed, failed, or abandoned).

Return type: bool
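The status properties above can be illustrated with a minimal stand-alone enum. This is a sketch mirroring the documented members; the exact membership of each property set is an assumption, not Ax’s definitive logic:

```python
from enum import Enum

class TrialStatus(Enum):
    # Values mirror the enum members documented above.
    CANDIDATE = 0
    STAGED = 1
    FAILED = 2
    COMPLETED = 3
    RUNNING = 4
    ABANDONED = 5
    DISPATCHED = 6

    @property
    def is_terminal(self):
        # A trial in any of these states will not change status again.
        return self in (
            TrialStatus.COMPLETED, TrialStatus.FAILED, TrialStatus.ABANDONED
        )

    @property
    def expecting_data(self):
        # Statuses in which evaluation data may arrive or has arrived.
        return self in (
            TrialStatus.RUNNING, TrialStatus.DISPATCHED, TrialStatus.COMPLETED
        )
```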
ax.core.base_trial.immutable_once_run(func)[source]

Decorator for methods that should raise an error if called once the trial is running or has ever run, since the trial is then immutable.

Return type: Callable

### BatchTrial¶

class ax.core.batch_trial.AbandonedArm[source]

Bases: tuple

Tuple storing metadata of arm that has been abandoned within a BatchTrial.

name

Alias for field number 0

reason

Alias for field number 2

time

Alias for field number 1

class ax.core.batch_trial.BatchTrial(experiment, generator_run=None, trial_type=None)[source]
abandoned_arms

List of arms that have been abandoned within this trial

Return type: List[Arm]
abandoned_arms_metadata
Return type: List[AbandonedArm]
add_arm(*args, **kwargs)
add_arms_and_weights(*args, **kwargs)
add_generator_run(*args, **kwargs)
arm_weights

The set of arms and associated weights for the trial.

These are constructed by merging the arms and weights from each generator run that is attached to the trial.

Return type: Optional[MutableMapping[Arm, float]]
arms

All arms contained in the trial.

Return type: List[Arm]
arms_by_name

Map from arm name to object for all arms in trial.

Return type: Dict[str, Arm]
clone()[source]

Clone the trial.

Returns: A new instance of the trial.

Return type: BatchTrial
experiment

The experiment this batch belongs to.

Return type: Experiment
generator_run_structs

List of generator run structs attached to this trial.

Struct holds generator_run object and the weight with which it was added.

Return type: List[GeneratorRunStruct]
index

The index of this batch within the experiment’s batch list.

Return type: int
is_factorial

Return true if the trial’s arms are a factorial design with no linked factors.

Return type: bool
mark_arm_abandoned(arm_name, reason=None)[source]

Mark an arm abandoned.

Usually done after deployment when one arm causes issues but user wants to continue running other arms in the batch.

Parameters: arm_name (str) – The name of the arm to abandon. reason (Optional[str]) – The reason for abandoning the arm.

Returns: The batch instance.

Return type: BatchTrial
normalized_arm_weights(total=1, trunc_digits=None)[source]

Returns arms with a new set of weights normalized to the given total.

This method is useful for many runners where we need to normalize weights to a certain total without mutating the weights attached to a trial.

Parameters: total (float) – The total weight to which to normalize. Default is 1, in which case arm weights can be interpreted as probabilities. trunc_digits (Optional[int]) – The number of digits to keep. If the resulting total weight does not equal total, weight is re-allocated so as to maintain relative weights as closely as possible.

Returns: Mapping from arms to the new set of weights.

Return type: MutableMapping[Arm, float]
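The normalization described above can be sketched with a hypothetical stand-alone helper operating on plain lists (Ax’s method works on arm-to-weight mappings, and its exact residual re-allocation strategy is an assumption here):

```python
def normalize_weights(weights, total=1.0, trunc_digits=None):
    # Scale all weights so they sum to `total`.
    scale = total / sum(weights)
    normalized = [w * scale for w in weights]
    if trunc_digits is not None:
        # Round each weight, then push any rounding residual onto the
        # largest weight so the sum still equals `total` (approximately
        # preserving relative weights).
        normalized = [round(w, trunc_digits) for w in normalized]
        residual = round(total - sum(normalized), trunc_digits + 2)
        i = max(range(len(normalized)), key=lambda j: normalized[j])
        normalized[i] = round(normalized[i] + residual, trunc_digits + 2)
    return normalized
```

For example, three equal weights normalized to 1 with two digits kept become 0.34/0.33/0.33 rather than three copies of 0.33, so the total stays exact.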
reweight_status_quo(*args, **kwargs)
run()[source]

Deploys the trial according to the behavior on the runner.

The runner returns a run_metadata dict containing metadata of the deployment process. It also returns a deployed_name of the trial within the system to which it was deployed. Both these fields are set on the trial.

Returns: The trial instance.

Return type: BatchTrial
set_status_quo_with_weight(*args, **kwargs)
status_quo

The control arm for this batch.

Return type: Optional[Arm]
weights

Weights corresponding to arms contained in the trial.

Return type: List[float]
class ax.core.batch_trial.GeneratorRunStruct[source]

Bases: tuple

Stores GeneratorRun object as well as the weight with which it was added.

generator_run

Alias for field number 0

weight

Alias for field number 1

### Data¶

class ax.core.data.Data(df=None, description=None)[source]

Class storing data for an experiment.

The dataframe is retrieved via the df property. The data can be stored to gluster for future use by attaching it to an experiment using experiment.add_data() (this requires a description to be set.)

df

DataFrame with underlying data, and required columns.

description

Human-readable description of data.

static column_data_types()[source]

Type specification for all supported columns.

Return type: Dict[str, Type[+CT_co]]
df
Return type: DataFrame
df_hash

Compute hash of pandas DataFrame.

This first serializes the DataFrame and computes the md5 hash on the resulting string. Note that this may cause performance issues for very large DataFrames.

Parameters: df – The DataFrame for which to compute the hash.

Returns: The hash of the DataFrame.

Return type: str
static from_evaluations(evaluations, trial_index)[source]

Convert dict of evaluations to Ax data object.

Parameters: evaluations (Dict[str, Dict[str, Tuple[float, float]]]) – Map from condition name to metric outcomes. trial_index (int) – Index of the trial the evaluations belong to.

Returns: Ax Data object.

Return type: Data
static from_multiple_data(data)[source]
Return type: Data
static required_columns()[source]

Names of required columns.

Return type: Set[str]
ax.core.data.custom_data_class(column_data_types=None, required_columns=None, time_columns=None)[source]

Creates a custom data class with additional columns.

All columns and their designations on the base data class are preserved; the inputs here are appended to the definitions on the base class.

Parameters: column_data_types (Optional[Dict[str, Type[+CT_co]]]) – Dict from column name to column type. required_columns (Optional[Set[str]]) – Set of additional columns required for this data object. time_columns (Optional[Set[str]]) – Set of additional columns to cast to timestamp.

Returns: New data subclass with amended column definitions.
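A sketch of such a subclass factory, under the assumption that the base class exposes column_data_types() and required_columns() as static methods (BaseData below is a stand-in for illustration, not an Ax class):

```python
def custom_data_class(base, column_data_types=None, required_columns=None):
    # Merge the base class's column definitions with the extra ones.
    merged_types = {**base.column_data_types(), **(column_data_types or {})}
    merged_required = base.required_columns() | (required_columns or set())

    class CustomData(base):
        @staticmethod
        def column_data_types():
            return dict(merged_types)

        @staticmethod
        def required_columns():
            return set(merged_required)

    return CustomData

class BaseData:
    # Stand-in base class with one column, for demonstration only.
    @staticmethod
    def column_data_types():
        return {"mean": float}

    @staticmethod
    def required_columns():
        return {"mean"}
```

The returned class is a genuine subclass of the base, so anything that accepts the base data type also accepts the customized one.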
ax.core.data.set_single_trial(data)[source]

Returns a new Data object where we set all rows to have the same trial index (i.e. 0). This is meant to be used with our IVW transform, which will combine multiple observations of the same metric.

Return type: Data

### Experiment¶

class ax.core.experiment.Experiment(search_space, name=None, optimization_config=None, tracking_metrics=None, runner=None, status_quo=None, description=None, is_test=False)[source]

Base class for defining an experiment.

add_tracking_metric(metric)[source]

Add a new metric to the experiment.

Parameters: metric (Metric) – Metric to be added.

Return type: Experiment
arms_by_name

The arms belonging to this experiment, by their name.

Return type: Dict[str, Arm]
arms_by_signature

The arms belonging to this experiment, by their signature.

Return type: Dict[str, Arm]
attach_data(data)[source]

Attach data to experiment.

Parameters: data (Data) – Data object to store.

Returns: Timestamp of storage in millis.

Return type: int
data_by_trial

Data stored on the experiment, indexed by trial index and storage time.

First key is trial index and second key is storage time in milliseconds. For a given trial, data is ordered by storage time, so first added data will appear first in the list.

Return type: Dict[int, OrderedDict]
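The two-level indexing described above can be sketched with a small stand-alone store (TrialDataStore is hypothetical; Ax keeps this structure on the Experiment itself):

```python
import time

class TrialDataStore:
    def __init__(self):
        # First key: trial index. Second key: storage time in millis.
        self._data_by_trial = {}

    def attach(self, trial_index, data, ts=None):
        # Record data under the current timestamp and return it,
        # mirroring attach_data's millisecond-timestamp return value.
        ts = int(time.time() * 1000) if ts is None else ts
        self._data_by_trial.setdefault(trial_index, {})[ts] = data
        return ts

    def latest(self, trial_index):
        # The entry with the highest timestamp is the latest data.
        by_ts = self._data_by_trial.get(trial_index)
        if not by_ts:
            return None
        return by_ts[max(by_ts)]
```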
default_trial_type

Default trial type assigned to trials in this experiment.

In the base experiment class this is always None. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type: Optional[str]
experiment_type

The type of the experiment.

Return type: Optional[str]
fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters: metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment. kwargs (Any) – Keyword args to pass to underlying metrics’ fetch data functions.

Returns: Data for the experiment.

Return type: Data
has_name

Return true if experiment’s name is not None.

Return type: bool
is_simple_experiment

Whether this experiment is a regular Experiment or a SimpleExperiment subclass.

lookup_data_for_trial(trial_index)[source]

Lookup stored data for a specific trial.

Returns latest data object present for this trial. Returns empty data if no data present.

Parameters: trial_index (int) – The index of the trial to lookup data for.

Returns: Requested data object.

Return type: Data
lookup_data_for_ts(timestamp)[source]

Collect data for all trials stored at this timestamp.

Useful when many trials’ data was fetched and stored simultaneously and the user wants to retrieve the same collection of data later.

Can also be used to lookup specific data for a single trial when storage time is known.

Parameters: timestamp (int) – Timestamp in millis at which data was stored.

Returns: Data object with all data stored at the timestamp.

Return type: Data
metrics

The metrics attached to the experiment.

Return type: Dict[str, Metric]
name

Get experiment name. Throws if name is None.

Return type: str
new_batch_trial(generator_run=None, trial_type=None)[source]

Create a new batch trial associated with this experiment.

Return type: BatchTrial
new_trial(generator_run=None, trial_type=None)[source]

Create a new trial associated with this experiment.

Return type: Trial
num_abandoned_arms

How many arms attached to this experiment are abandoned.

Return type: int
num_trials

How many trials are associated with this experiment.

Return type: int
optimization_config

The experiment’s optimization config.

Return type: Optional[OptimizationConfig]
parameters

The parameters in the experiment’s search space.

Return type: Dict[str, Parameter]
remove_tracking_metric(metric_name)[source]

Remove a metric that already exists on the experiment.

Parameters: metric_name (str) – Unique name of metric to remove.

Return type: Experiment
runner_for_trial(trial)[source]

The default runner to use for a given trial.

In the base experiment class, this is always the default experiment runner. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type: Optional[Runner]
search_space

The search space for this experiment.

When setting a new search space, all parameter names and types must be preserved. However, if no trials have been created, all modifications are allowed.

Return type: SearchSpace
status_quo

The existing arm that new arms will be compared against.

Return type: Optional[Arm]
sum_trial_sizes

Sum of numbers of arms attached to each trial in this experiment.

Return type: int
supports_trial_type(trial_type)[source]

Whether this experiment allows trials of the given type.

The base experiment class only supports None. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type: bool
time_created

Creation time of the experiment.

Return type: datetime
trials

The trials associated with the experiment.

Return type: Dict[int, BaseTrial]
update_tracking_metric(metric)[source]

Redefine a metric that already exists on the experiment.

Parameters: metric (Metric) – New metric definition.

Return type: Experiment

### GeneratorRun¶

class ax.core.generator_run.ArmWeight[source]

Bases: tuple

NamedTuple for tying together arms and weights.

arm

Alias for field number 0

weight

Alias for field number 1

class ax.core.generator_run.GeneratorRun(arms, weights=None, optimization_config=None, search_space=None, model_predictions=None, best_arm_predictions=None, type=None, fit_time=None, gen_time=None)[source]

An object that represents a single run of a generator.

This object is created each time the gen method of a generator is called. It stores the arms and (optionally) weights that were generated by the run. When we add a generator run to a trial, its arms and weights will be merged with those from previous generator runs that were already attached to the trial.
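The merge of arms and weights across generator runs can be sketched as a weight accumulation keyed by arm (merge_arm_weights is a hypothetical helper; Ax keys arms by their signature rather than plain strings):

```python
def merge_arm_weights(existing, new):
    # Weights for an arm that appears in both mappings are summed;
    # previously unseen arms are added with their own weight.
    merged = dict(existing)
    for arm, weight in new.items():
        merged[arm] = merged.get(arm, 0.0) + weight
    return merged
```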

arm_weights

Mapping from arms to weights (order matches order in arms property).

Return type: MutableMapping[Arm, float]
arms

Returns arms generated by this run.

Return type: List[Arm]
best_arm_predictions
Return type: Optional[Tuple[Arm, Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]]
clone()[source]

Return a deep copy of a GeneratorRun.

Return type: GeneratorRun
fit_time
Return type: Optional[float]
gen_time
Return type: Optional[float]
generator_run_type

The type of the generator run.

Return type: Optional[str]
index

The index of this generator run within a trial’s list of generator run structs. This field is set when the generator run is added to a trial.

Return type: Optional[int]
model_predictions
Return type: Optional[Tuple[Dict[str, List[float]], Dict[str, Dict[str, List[float]]]]]
model_predictions_by_arm
Return type: Optional[Dict[str, Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]
optimization_config

The optimization config used during generation of this run.

Return type: Optional[OptimizationConfig]
param_df

Constructs a Pandas dataframe with the parameter values for each arm.

Useful for inspecting the contents of a generator run.

Returns: A dataframe with the generator run’s arms.

Return type: pd.DataFrame
search_space

The search space used during generation of this run.

Return type: Optional[SearchSpace]
time_created

Creation time of the batch.

Return type: datetime
weights

Returns weights associated with arms generated by this run.

Return type: List[float]
class ax.core.generator_run.GeneratorRunType[source]

Bases: enum.Enum

Class for enumerating generator run types.

MANUAL = 1
STATUS_QUO = 0
ax.core.generator_run.extract_arm_predictions(model_predictions, arm_idx)[source]

Extract a particular arm from model_predictions.

Parameters: model_predictions (Tuple[Dict[str, List[float]], Dict[str, Dict[str, List[float]]]]) – Mean and Cov for all arms. arm_idx (int) – Index of arm in prediction list.

Returns: (mean, cov) for the specified arm.
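Given the documented types, the extraction can be sketched as follows (a hypothetical re-implementation based only on the type signature above): means map each metric name to a list of per-arm values, and covariances map metric-name pairs to per-arm lists.

```python
def extract_arm_predictions(model_predictions, arm_idx):
    means, covs = model_predictions
    # Pick out the value at arm_idx for every metric...
    arm_means = {metric: vals[arm_idx] for metric, vals in means.items()}
    # ...and the covariance entry at arm_idx for every metric pair.
    arm_cov = {
        m1: {m2: vals[arm_idx] for m2, vals in row.items()}
        for m1, row in covs.items()
    }
    return arm_means, arm_cov
```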

### Metric¶

class ax.core.metric.Metric(name, lower_is_better=None)[source]

Base class for representing metrics.

lower_is_better

Flag for metrics which should be minimized.

clone()[source]

Create a copy of this Metric.

Return type: Metric
fetch_experiment_data(experiment, **kwargs)[source]

Fetch this metric’s data for an experiment.

Default behavior is to fetch data from all trials expecting data and concatenate the results.

Return type: Data
classmethod fetch_experiment_data_multi(experiment, metrics, **kwargs)[source]

Fetch multiple metrics data for an experiment.

Default behavior calls fetch_experiment_data for each metric. Subclasses should override this to batch the data computation for multiple metrics.

Return type: Data
fetch_trial_data(trial, **kwargs)[source]

Fetch data for one trial.

Return type: Data
classmethod fetch_trial_data_multi(trial, metrics, **kwargs)[source]

Fetch multiple metrics data for one trial.

Default behavior calls fetch_trial_data for each metric. Subclasses should override this to batch the data computation for multiple metrics.

Return type: Data
name

Get name of metric.

Return type: str

### MultiTypeExperiment¶

class ax.core.multi_type_experiment.MultiTypeExperiment(name, search_space, default_trial_type, default_runner, optimization_config=None, status_quo=None, description=None)[source]

Class for experiment with multiple trial types.

A canonical use case for this is tuning a large production system with a limited evaluation budget and a simulator which approximates evaluations on the main system. Trial deployment and data fetching are separate for the two systems, but the final data is combined and fed into multi-task models.

See the Multi-Task Modeling tutorial for more details.

name

Name of the experiment.

description

Description of the experiment.

add_tracking_metric(metric, trial_type, canonical_name=None)[source]

Add a new metric to the experiment.

Parameters: metric (Metric) – The metric to add. trial_type (str) – The trial type for which this metric is used. canonical_name (Optional[str]) – The default metric for which this metric is a proxy.

Return type: MultiTypeExperiment
add_trial_type(trial_type, runner)[source]

Add a new trial_type to be supported by this experiment.

Parameters: trial_type (str) – The new trial_type to be added. runner (Runner) – The default runner for trials of this type.

Return type: MultiTypeExperiment
default_trial_type

Default trial type assigned to trials in this experiment.

Return type: Optional[str]
fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters: metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment. kwargs (Any) – Keyword args to pass to underlying metrics’ fetch data functions.

Returns: Data for the experiment.

Return type: Data
metric_to_trial_type

Map metrics to trial types.

Adds the default trial type for optimization-config metrics to the custom-defined trial types.

Return type: Dict[str, str]
remove_tracking_metric(metric_name)[source]

Remove a metric that already exists on the experiment.

Parameters: metric_name (str) – Unique name of metric to remove.

Return type: MultiTypeExperiment
runner_for_trial(trial)[source]

The default runner to use for a given trial.

Looks up the appropriate runner for this trial type in the trial_type_to_runner.

Return type: Optional[Runner]
supports_trial_type(trial_type)[source]

Whether this experiment allows trials of the given type.

Only trial types defined in the trial_type_to_runner are allowed.

Return type: bool
update_runner(trial_type, runner)[source]

Update the default runner for an existing trial_type.

Parameters: trial_type (str) – The trial_type whose runner should be updated. runner (Runner) – The new runner for trials of this type.

Return type: MultiTypeExperiment
update_tracking_metric(metric, trial_type, canonical_name=None)[source]

Update an existing metric on the experiment.

Parameters: metric (Metric) – The new metric definition. trial_type (str) – The trial type for which this metric is used. canonical_name (Optional[str]) – The default metric for which this metric is a proxy.

Return type: MultiTypeExperiment

### Objective¶

class ax.core.objective.Objective(metric, minimize=False)[source]

Base class for representing an objective.

minimize

If True, minimize metric.

clone()[source]

Create a copy of the objective.

Return type: Objective
metric

Get the objective metric.

Return type: Metric
metrics

Get a list of objective metrics.

Return type: List[Metric]
class ax.core.objective.ScalarizedObjective(metrics, weights=None, minimize=False)[source]

Class for an objective composed of a linear scalarization of metrics.

metrics

List of metrics.

weights

Weights for scalarization; default to 1.

clone()[source]

Create a copy of the objective.

Return type: Objective
metric

Override base method to error.

Return type: Metric
metric_weights

Get the objective metrics and weights.

Return type: Iterable[Tuple[Metric, float]]
metrics

Get the objective metrics.

Return type: List[Metric]
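The linear scalarization itself reduces to a weighted sum over metric values. A sketch with a hypothetical scalarize helper operating on plain name-to-value mappings (Ax works with Metric objects; weights default to 1 per metric as noted above):

```python
def scalarize(metric_values, weights=None):
    # Combine named metric values into a single objective value.
    if weights is None:
        weights = {name: 1.0 for name in metric_values}
    return sum(weights[name] * value for name, value in metric_values.items())
```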

### Observation¶

class ax.core.observation.Observation(features, data, arm_name=None)[source]

Represents an observation.

A set of features (ObservationFeatures) and corresponding measurements (ObservationData). Optionally, an arm name associated with the features.

features
Type: ObservationFeatures
data
Type: ObservationData
arm_name
Type: Optional[str]
class ax.core.observation.ObservationData(metric_names, means, covariance)[source]

Outcomes observed at a point.

The “point” corresponding to this ObservationData would be an ObservationFeatures object.

metric_names

A list of k metric names that were observed.

means

A k-array of observed means.

covariance

A (k x k) array of observed covariances.

class ax.core.observation.ObservationFeatures(parameters, trial_index=None, start_time=None, end_time=None, random_split=None)[source]

The features of an observation.

These include both the arm parameters and the features of the observation found in the Data object: trial index, times, and random split. This object is meant to contain everything needed to represent this observation in a model feature space. It is essentially a row of Data joined with the arm parameters.

An ObservationFeatures object would typically have a corresponding ObservationData object that provides the observed outcomes.

parameters

arm parameters

trial_index

trial index

start_time

batch start time

end_time

batch end time

random_split

random split

static from_arm(arm, trial_index=None, start_time=None, end_time=None, random_split=None)[source]

Convert an Arm to an ObservationFeatures, including additional data as specified.

Return type: ObservationFeatures
ax.core.observation.observations_from_data(experiment, data)[source]

Convert Data to observations.

Converts a Data object to a list of Observation objects. Pulls arm parameters from experiment.

Uses a diagonal covariance matrix across metric_names.

Parameters: experiment (Experiment) – Experiment with arm parameters. data (Data) – Data of observations.

Returns: List of Observation objects.

Return type: List[Observation]
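The diagonal covariance mentioned above can be sketched as follows, assuming per-metric standard errors are available (diagonal_observation is a hypothetical helper returning plain Python structures rather than an ObservationData object):

```python
def diagonal_observation(metric_names, means, sems):
    # Build a (k x k) covariance matrix where off-diagonal entries are
    # zero and diagonal entries are the squared standard errors.
    k = len(metric_names)
    covariance = [
        [sems[i] ** 2 if i == j else 0.0 for j in range(k)] for i in range(k)
    ]
    return {"metric_names": metric_names, "means": means, "covariance": covariance}
```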

### OptimizationConfig¶

class ax.core.optimization_config.OptimizationConfig(objective, outcome_constraints=None)[source]

An optimization configuration, which comprises an objective and outcome constraints.

There is no minimum or maximum number of outcome constraints, but an individual metric can have at most two constraints, which is how we represent metrics with both upper and lower bounds.

clone()[source]

Make a copy of this optimization config.

Return type: OptimizationConfig
metrics
Return type: Dict[str, Metric]
objective

Get objective.

Return type: Objective
outcome_constraints

Get outcome constraints.

Return type: List[OutcomeConstraint]

### OutcomeConstraint¶

class ax.core.outcome_constraint.OutcomeConstraint(metric, op, bound, relative=True)[source]

Base class for representing outcome constraints.

Outcome constraints may be of the form metric >= bound or metric <= bound, where the bound can be expressed as an absolute measurement or relative to the status quo (if applicable).

metric

Metric to constrain.

op

Specifies whether metric should be greater or equal to, or less than or equal to, some bound.

bound

The bound in the constraint.

relative

Whether the bound is expressed on an absolute or relative scale. If relative, the bound is the acceptable percent change.

clone()[source]

Create a copy of this OutcomeConstraint.

Return type: OutcomeConstraint
metric
Return type: Metric
op
Return type: ComparisonOp

### Parameter¶

class ax.core.parameter.ChoiceParameter(name, parameter_type, values, is_ordered=False, is_task=False, is_fidelity=False)[source]

Parameter object that specifies a discrete set of values.

add_values(values)[source]

Add input list to the set of allowed values for parameter.

Cast all input values to the parameter type.

Parameters: values (List[Union[str, bool, float, int, None]]) – Values being added to the allowed list.

Return type: ChoiceParameter
clone()[source]
Return type: ChoiceParameter
is_ordered
Return type: bool
is_task
Return type: bool
name
Return type: str
parameter_type
Return type: ParameterType
set_values(values)[source]

Set the list of allowed values for parameter.

Cast all input values to the parameter type.

Parameters: values (List[Union[str, bool, float, int, None]]) – New list of allowed values.

Return type: ChoiceParameter
validate(value)[source]

Checks that the input is in the list of allowed values.

Parameters: value (Union[str, bool, float, int, None]) – Value being checked.

Returns: True if valid, False otherwise.

Return type: bool
values
Return type: List[Union[str, bool, float, int, None]]
class ax.core.parameter.FixedParameter(name, parameter_type, value, is_fidelity=False)[source]

Parameter object that specifies a single fixed value.

clone()[source]
Return type: FixedParameter
name
Return type: str
parameter_type
Return type: ParameterType
set_value(value)[source]
Return type: FixedParameter
validate(value)[source]

Checks that the input is equal to the fixed value.

Parameters: value (Union[str, bool, float, int, None]) – Value being checked.

Returns: True if valid, False otherwise.

Return type: bool
value
Return type: Union[str, bool, float, int, None]
class ax.core.parameter.Parameter[source]
clone()[source]
Return type: Parameter
is_fidelity
Return type: bool
is_numeric
Return type: bool
is_valid_type(value)[source]

Whether a given value’s type is allowed by this parameter.

Return type: bool
name
Return type: str
parameter_type
Return type: ParameterType
python_type

The python type for the corresponding ParameterType enum.

Used primarily for casting values of unknown type to conform to that of the parameter.

validate(value)[source]
Return type: bool
class ax.core.parameter.ParameterType[source]

Bases: enum.Enum

An enumeration.

BOOL = 0
FLOAT = 2
INT = 1
STRING = 3
is_numeric
Return type: bool
class ax.core.parameter.RangeParameter(name, parameter_type, lower, upper, log_scale=False, digits=None, is_fidelity=False)[source]

Parameter object that specifies a continuous numerical range of values.

clone()[source]
Return type: RangeParameter
digits

Number of digits to round values to for float type.

Upper and lower bound are re-cast after this property is changed.

Return type: Optional[int]
is_valid_type(value)[source]

Same as default except allows floats whose value is an int for Int parameters.

Return type: bool
log_scale

Whether to sample in log space when drawing random values of the parameter.

Return type: bool
lower

Lower bound of the parameter range.

Value is cast to parameter type upon set and also validated to ensure the bound is strictly less than upper bound.

Return type: float
name
Return type: str
parameter_type
Return type: ParameterType
set_digits(digits)[source]
Return type: RangeParameter
set_log_scale(log_scale)[source]
Return type: RangeParameter
update_range(lower=None, upper=None)[source]

Set the range to the given values.

If lower or upper is not provided, it will be left at its current value.

Parameters: lower (Optional[float]) – New value for the lower bound. upper (Optional[float]) – New value for the upper bound.

Return type: RangeParameter
upper

Upper bound of the parameter range.

Value is cast to parameter type upon set and also validated to ensure the bound is strictly greater than lower bound.

Return type: float
validate(value)[source]

Returns True if input is a valid value for the parameter.

Checks that value is of the right type and within the valid range for the parameter. Returns False if value is None.

Parameters: value (Union[str, bool, float, int, None]) – Value being checked.

Returns: True if valid, False otherwise.

Return type: bool

### ParameterConstraint¶

class ax.core.parameter_constraint.OrderConstraint(lower_parameter, upper_parameter)[source]

Constraint object for specifying one parameter to be smaller than another.

clone()[source]

Clone.

Return type: OrderConstraint
constraint_dict

Weights on parameters for linear constraint representation.

Return type: Dict[str, float]
lower_parameter

Parameter with lower value.

Return type: Parameter
parameters

Parameters.

Return type: List[Parameter]
upper_parameter

Parameter with higher value.

Return type: Parameter
class ax.core.parameter_constraint.ParameterConstraint(constraint_dict, bound)[source]

Base class for linear parameter constraints.

Constraints are expressed using a map from parameter name to weight followed by a bound.

The constraint is satisfied if w * v <= b, where w is the vector of parameter weights, v is a vector of parameter values, b is the specified bound, and * is the dot product operator.
bound

Get bound of the inequality of the constraint.

Return type: float
check(parameter_dict)[source]

Whether or not the set of parameter values satisfies the constraint.

Does a weighted sum of the parameter values based on the constraint_dict and checks that the sum is less than the bound.

Parameters: parameter_dict (Dict[str, Union[int, float]]) – Map from parameter name to parameter value. bool Whether the constraint is satisfied.
clone()[source]

Clone.

Return type: ParameterConstraint
constraint_dict

Get mapping from parameter names to weights.

Return type: Dict[str, float]
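The linear form described above (weighted sum compared to a bound) can be sketched with a hypothetical helper; `check_linear_constraint` is illustrative, not the Ax method itself:

```python
def check_linear_constraint(constraint_dict, bound, parameter_dict):
    """Sketch of ParameterConstraint.check semantics: take the weighted
    sum of the parameter values named in constraint_dict and test it
    against the bound."""
    total = sum(
        weight * parameter_dict[name]
        for name, weight in constraint_dict.items()
    )
    return total <= bound
```

In this representation, an order constraint a <= b corresponds to `constraint_dict = {"a": 1.0, "b": -1.0}` with a bound of 0.0.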
class ax.core.parameter_constraint.SumConstraint(parameters, is_upper_bound, bound)[source]

Constraint on the sum of parameters being greater than or less than a bound.

clone()[source]

Clone.

Return type: SumConstraint
constraint_dict

Weights on parameters for linear constraint representation.

Return type: Dict[str, float]
op

Whether the sum is constrained by a <= or >= inequality.

Return type: ComparisonOp
parameters

Parameters.

Return type: List[Parameter]
ax.core.parameter_constraint.validate_constraint_parameters(parameters)[source]

Basic validation of parameters used in a constraint.

Parameters: parameters (List[Parameter]) – Parameters used in constraint. Raises: ValueError – if the parameters are not valid for use. Return type: None

### Runner¶

class ax.core.runner.Runner[source]

Abstract base class for custom runner classes.

run(trial)[source]

Deploys a trial based on custom runner subclass implementation.

Parameters: trial (BaseTrial) – The trial to deploy. Dict[str, Any] Dict of run metadata from the deployment process.
staging_required

Whether the trial goes to staged or running state once deployed.

Return type: bool
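The contract can be sketched as an abstract base class with a toy subclass. `RunnerSketch` and `LocalRunnerSketch` are illustrative names, not Ax classes, and the returned metadata is made up:

```python
from abc import ABC, abstractmethod

class RunnerSketch(ABC):
    """Sketch of the Runner contract: subclasses deploy a trial and
    return a dict of run metadata."""

    @abstractmethod
    def run(self, trial):
        ...

    @property
    def staging_required(self):
        # Default: deployed trials go straight to the running state.
        return False

class LocalRunnerSketch(RunnerSketch):
    def run(self, trial):
        # A real runner would submit the trial's arms to an execution
        # backend; here we just record a fake job id as metadata.
        return {"job_id": f"local-{id(trial)}"}
```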

### SearchSpace¶

class ax.core.search_space.SearchSpace(parameters, parameter_constraints=None)[source]

Base class for the SearchSpace object.

Contains a set of Parameter objects, each of which has a name, type, and set of valid values. The search space also contains a set of ParameterConstraint objects, which can be used to define restrictions across parameters (e.g. p_a < p_b).

add_parameter(parameter)[source]
Return type: None
add_parameter_constraints(parameter_constraints)[source]
Return type: None
cast_arm(arm)[source]

Cast parameterization of given arm to the types in this SearchSpace.

For each parameter in the given arm, cast it to the proper type specified in this search space. Throws if there is a mismatch in parameter names. This is mostly useful for int/float values, whose types are easy to get wrong in hand-written parameterizations.

Parameters: arm (Arm) – Arm to cast. Arm New casted arm.
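The per-value casting can be sketched with a hypothetical helper; `cast_parameter_value` is illustrative, not part of the Ax API:

```python
def cast_parameter_value(value, parameter_type):
    """Sketch of the coercion done per parameter by cast_arm: cast a
    hand-written value to the parameter's declared numeric type."""
    if parameter_type is int:
        return int(value)
    if parameter_type is float:
        return float(value)
    # Non-numeric types are passed through unchanged.
    return value
```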
check_membership(parameterization, raise_error=False)[source]

Whether the given parameterization belongs in the search space.

Checks that the given parameter values have the same name/type as search space parameters, are contained in the search space domain, and satisfy the parameter constraints.

Parameters: parameterization (Dict[str, Union[str, bool, float, int, None]]) – Dict from parameter name to value to validate. raise_error (bool) – If true and parameterization does not belong, raises an error with detailed explanation of why. bool Whether the parameterization is contained in the search space.
check_types(parameterization, allow_none=True, raise_error=False)[source]

Checks that the given parameterization’s types match the search space.

Checks that the names of the parameterization match those specified in the search space, and the given values are of the correct type.

Parameters: parameterization (Dict[str, Union[str, bool, float, int, None]]) – Dict from parameter name to value to validate. allow_none (bool) – Whether None is a valid parameter value. raise_error (bool) – If true and parameterization does not belong, raises an error with detailed explanation of why. bool Whether the parameterization has valid types.
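The name-and-type check can be sketched in plain Python. `check_types_sketch` is a hypothetical helper that approximates the semantics described above, not the Ax method:

```python
def check_types_sketch(parameterization, expected_types, allow_none=True):
    """Sketch of SearchSpace.check_types semantics: the parameter names
    must exactly match those of the search space, and each value must
    have the expected Python type (None is allowed only when allow_none
    is True)."""
    if set(parameterization) != set(expected_types):
        return False
    for name, value in parameterization.items():
        if value is None:
            if not allow_none:
                return False
        elif not isinstance(value, expected_types[name]):
            return False
    return True
```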
clone()[source]
Return type: SearchSpace
out_of_design_arm()[source]

Create a default out-of-design arm.

An out of design arm contains values for some parameters which are outside of the search space. In the modeling conversion, these parameters are all stripped down to an empty dictionary, since the point is already outside of the modeled space.

Return type: Arm New arm w/ null parameter values.
parameter_constraints
Return type: List[ParameterConstraint]
parameters
Return type: Dict[str, Parameter]
set_parameter_constraints(parameter_constraints)[source]
Return type: None
tunable_parameters
Return type: Dict[str, Parameter]
update_parameter(parameter)[source]
Return type: None

### SimpleExperiment¶

class ax.core.simple_experiment.SimpleExperiment(search_space, name=None, objective_name=None, evaluation_function=<function unimplemented_evaluation_function>, minimize=False, outcome_constraints=None, status_quo=None)[source]

Simplified experiment class with defaults.

Parameters:
search_space (SearchSpace) – parameter space.
name (Optional[str]) – name of this experiment.
objective_name (Optional[str]) – which of the metrics computed by the evaluation function is the objective.
evaluation_function (Callable[[Dict[str, Union[str, bool, float, int, None]], Optional[float]], Union[Dict[str, Tuple[float, float]], Tuple[float, float], float]]) – function that evaluates mean and standard error for a parameter configuration. This function should accept a dictionary of parameter names to parameter values (TParametrization) and optionally a weight, and return a dictionary of metric names to a tuple of means and standard errors (TEvaluationOutcome). The function can also return a single tuple, in which case we assume the metric is the objective.
minimize (bool) – whether the objective should be minimized, defaults to False.
outcome_constraints (Optional[List[OutcomeConstraint]]) – constraints on the outcome, if any.
status_quo (Optional[Arm]) – Arm representing the existing “control” arm.
add_tracking_metric(metric)[source]

Add a new metric to the experiment.

Parameters: metric (Metric) – Metric to be added. SimpleExperiment
eval()[source]

Evaluate all arms in the experiment with the evaluation function passed as argument to this SimpleExperiment.

Return type: Data
eval_trial(trial)[source]

Evaluate trial arms with the evaluation function of this experiment.

Parameters: trial (BaseTrial) – trial, whose arms to evaluate. Data
evaluation_function

Get the evaluation function.

Return type: Callable[[Dict[str, Union[str, bool, float, int, None]], Optional[float]], Union[Dict[str, Tuple[float, float]], Tuple[float, float], float]]
evaluation_function_outer(parameterization, weight=None)[source]
Return type: Dict[str, Tuple[float, float]]
fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters: metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment. kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions. Data Data for the experiment.
has_evaluation_function

Whether this SimpleExperiment has a valid evaluation function attached.

Return type: bool
is_simple_experiment

Whether this experiment is a SimpleExperiment subclass rather than a regular Experiment.

update_tracking_metric(metric)[source]

Redefine a metric that already exists on the experiment.

Parameters: metric (Metric) – New metric definition. SimpleExperiment
ax.core.simple_experiment.unimplemented_evaluation_function(parameterization, weight=None)[source]

Default evaluation function used if none is provided during initialization. The evaluation function must be manually set before use.

Return type: Union[Dict[str, Tuple[float, float]], Tuple[float, float], float]
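The three return shapes accepted from an evaluation function can be normalized into one canonical form. `normalize_evaluation` is a hypothetical sketch of this normalization, not the Ax implementation; treating a bare float as a noiseless observation (SEM of 0.0) is an assumption of this sketch:

```python
def normalize_evaluation(outcome, objective_name="objective"):
    """Sketch: normalize the three shapes an evaluation function may
    return into a dict mapping metric name to a (mean, sem) tuple."""
    if isinstance(outcome, dict):
        # Already a mapping of metric name to (mean, sem).
        return outcome
    if isinstance(outcome, tuple):
        # A single (mean, sem) tuple is taken to be the objective.
        return {objective_name: outcome}
    # A bare number is assumed to be a noiseless objective mean.
    return {objective_name: (float(outcome), 0.0)}
```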

### Trial¶

class ax.core.trial.Trial(experiment, generator_run=None, trial_type=None)[source]

Trial that only has one attached arm and no arm weights.

Parameters: experiment (Experiment) – experiment, to which this trial is attached generator_run (Optional[GeneratorRun]) – generator_run associated with this trial. Trial has only one generator run (and thus arm) attached to it. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.
abandoned_arms

Abandoned arms attached to this trial.

Return type: List[Arm]
add_arm(*args, **kwargs)
add_generator_run(*args, **kwargs)
arm

The arm associated with this trial.

Return type: Optional[Arm]
arms

All arms attached to this trial.

Returns: list of the single arm attached to this trial if there is one, else None.
arms_by_name

Dictionary of all arms attached to this trial with their names as keys.

Returns: dictionary of a single arm name to arm if one is attached to this trial, else None.
generator_run

Generator run attached to this trial.

Return type: Optional[GeneratorRun]
objective_mean

Objective mean for the arm attached to this trial.

Return type: Optional[float]

## Core Types¶

class ax.core.types.ComparisonOp[source]

Bases: enum.Enum

Class for enumerating comparison operations.

GEQ = 0
LEQ = 1
ax.core.types.merge_model_predict(predict, predict_append)[source]

Append model predictions to an existing set of model predictions.

TModelPredict is of the form:
({metric_name: [mean1, mean2, …]}, {metric_name: {metric_name: [var1, var2, …]}})

This will append the entries of predict_append to those of predict.

Parameters: predict (Tuple[Dict[str, List[float]], Dict[str, Dict[str, List[float]]]]) – Initial set of predictions. predict_append – Predictions to be appended. TModelPredict with the new predictions appended.
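The merge can be sketched in plain Python. `merge_model_predict_sketch` is a hypothetical helper that illustrates the append behavior under the assumption that both inputs cover the same metric names; it is not the Ax function itself:

```python
def merge_model_predict_sketch(predict, predict_append):
    """Sketch of merge_model_predict: extend each metric's list of
    means and (co)variances with the appended predictions."""
    means, covs = predict
    means_app, covs_app = predict_append
    merged_means = {m: means[m] + means_app[m] for m in means}
    merged_covs = {
        m: {m2: covs[m][m2] + covs_app[m][m2] for m2 in covs[m]}
        for m in covs
    }
    return merged_means, merged_covs
```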