ax.core

Core Classes

Arm

class ax.core.arm.Arm(parameters, name=None)[source]

Bases: ax.utils.common.equality.Base

Base class for defining arms.

Randomization in experiments assigns units to a given arm. Thus, the arm encapsulates the parametrization needed by the unit.

clone(clear_name=False)[source]

Create a copy of this arm.

Parameters

clear_name (bool) – whether this cloned copy should set its name to None instead of the name of the arm being cloned. Defaults to False.

Return type

Arm

property has_name

Return true if arm’s name is not None.

Return type

bool

static md5hash(parameters)[source]

Return unique identifier for arm’s parameters.

Parameters

parameters (Dict[str, Union[str, bool, float, int, None]]) – Parameterization; mapping of param name to value.

Return type

str

Returns

Hash of arm’s parameters.
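The hashing scheme can be sketched in a few lines: serialize the parameterization with sorted keys so the digest is independent of insertion order, then take the md5 hex digest. This is an illustrative sketch, not Ax’s exact implementation, and the helper name params_hash is hypothetical:

```python
import hashlib
import json

def params_hash(parameters):
    # Sorting keys makes the digest independent of dict insertion order,
    # so arms with identical parameters always hash the same.
    canonical = json.dumps(parameters, sort_keys=True)
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

a = params_hash({"lr": 0.01, "momentum": 0.9})
b = params_hash({"momentum": 0.9, "lr": 0.01})  # same parameters, new order
assert a == b
```

A short suffix of such a hash is what name_or_short_signature falls back to when an arm has no name.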

property name

Get arm name. Throws if name is None.

Return type

str

property name_or_short_signature

Returns the arm name if it exists; otherwise the last 4 characters of the hash.

Used for presentation of candidates (e.g. plotting and tables), where the candidates do not yet have names (since names are automatically set upon addition to a trial).

Return type

str

property parameters

Get mapping from parameter names to values.

Return type

Dict[str, Union[str, bool, float, int, None]]

property signature

Get unique representation of an arm.

Return type

str

BaseTrial

class ax.core.base_trial.BaseTrial(experiment, trial_type=None, ttl_seconds=None, index=None)[source]

Bases: abc.ABC, ax.utils.common.equality.Base

Base class for representing trials.

Trials are containers for arms that are deployed together. There are two kinds of trials: regular Trial, which only contains a single arm, and BatchTrial, which contains an arbitrary number of arms.

Parameters
  • experiment (Experiment) – Experiment, of which this trial is a part

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • ttl_seconds (Optional[int]) – If specified, the trial will be considered failed this many seconds after the time it was run, unless it is completed before then. Meant to detect ‘dead’ trials, for which the evaluation process might have crashed, and which should be considered failed once their ‘time to live’ has passed.

  • index (Optional[int]) – If specified, the trial’s index will be set accordingly. This should generally not be specified, as the index is determined automatically from the number of existing trials. It is only used when loading from storage.

abstract property abandoned_arms

All abandoned arms, associated with this trial.

Return type

List[Arm]

property abandoned_reason
Return type

Optional[str]

abstract property arms
Return type

List[Arm]

abstract property arms_by_name
Return type

Dict[str, Arm]

assign_runner()[source]

Assigns default experiment runner if trial doesn’t already have one.

Return type

BaseTrial

complete()[source]
Stops the trial if stopping functionality is defined on the runner, and marks the trial completed.

Return type

BaseTrial

Returns

The trial instance.

property completed_successfully

Checks if trial status is COMPLETED.

Return type

bool

property deployed_name

Name of the experiment created in external framework.

This property is derived from the name field in run_metadata.

Return type

Optional[str]

property did_not_complete

Checks if trial status is terminal, but not COMPLETED.

Return type

bool

property experiment

The experiment this trial belongs to.

Return type

Experiment

fetch_data(metrics=None, **kwargs)[source]

Fetch data for this trial for all metrics on experiment.

Parameters
  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for this trial.

abstract property generator_runs

All generator runs associated with this trial.

Return type

List[GeneratorRun]

property index

The index of this trial within the experiment’s trial list.

Return type

int

property is_abandoned

Whether this trial is abandoned.

Return type

bool

mark_abandoned(reason=None)[source]

Mark trial as abandoned.

Parameters

reason – The reason the trial was abandoned.

Return type

BaseTrial

Returns

The trial instance.

mark_as(status, **kwargs)[source]

Mark trial with a new TrialStatus.

Parameters
  • status (TrialStatus) – The new status of the trial.

  • kwargs (Any) – Additional keyword args, as can be used in the respective mark_ methods associated with the trial status.

Return type

BaseTrial

Returns

The trial instance.

mark_completed()[source]

Mark trial as completed.

Parameters

allow_repeat_completion – If set to True, this function will not raise an error if a trial that has already been marked as completed is marked as completed again.

Return type

BaseTrial

Returns

The trial instance.

mark_failed()[source]

Mark trial as failed.

Return type

BaseTrial

Returns

The trial instance.

mark_running(no_runner_required=False)[source]

Mark the trial as having started running.

Return type

BaseTrial

Returns

The trial instance.

mark_staged()[source]

Mark the trial as being staged for running.

Return type

BaseTrial

Returns

The trial instance.

run()[source]

Deploys the trial according to the behavior on the runner.

The runner returns a run_metadata dict containing metadata of the deployment process, as well as a deployed_name for the trial within the system to which it was deployed. Both fields are set on the trial.

Return type

BaseTrial

Returns

The trial instance.

property run_metadata

Dict containing metadata from the deployment process.

This is set implicitly during trial.run().

Return type

Dict[str, Any]

property runner

The runner object defining how to deploy the trial.

Return type

Optional[Runner]

property status

The status of the trial in the experimentation lifecycle.

Return type

TrialStatus

property time_completed

Completion time of the trial.

Return type

Optional[datetime]

property time_created

Creation time of the trial.

Return type

datetime

property time_run_started

Time the trial was started running (i.e. collecting data).

Return type

Optional[datetime]

property time_staged

Staged time of the trial.

Return type

Optional[datetime]

property trial_type

The type of the trial.

Relevant for experiments containing different kinds of trials (e.g. different deployment types).

Return type

Optional[str]

property ttl_seconds

This trial’s time-to-live once run, in seconds. If not set, the trial will never be automatically considered failed (i.e. infinite TTL). Reflects the number of seconds after the trial was run at which it will be considered failed unless completed first.

Return type

Optional[int]
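The TTL check amounts to comparing the elapsed time since the trial was run against ttl_seconds. A minimal sketch of that check (hypothetical helper, not Ax’s implementation):

```python
from datetime import datetime, timedelta

def is_past_ttl(time_run_started, ttl_seconds, now=None):
    # A None TTL means the trial never expires; a trial that has not
    # started running cannot have outlived its TTL.
    if ttl_seconds is None or time_run_started is None:
        return False
    now = now or datetime.now()
    return now - time_run_started > timedelta(seconds=ttl_seconds)

start = datetime(2024, 1, 1, 12, 0, 0)
assert is_past_ttl(start, 60, now=datetime(2024, 1, 1, 12, 2, 0))
assert not is_past_ttl(start, None, now=datetime(2024, 1, 2))
```

In Ax, a RUNNING trial found past its TTL during a status check is marked FAILED.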

update_run_metadata(metadata)[source]

Updates the run metadata dict stored on this trial and returns the updated dict.

Return type

Dict[str, Any]

class ax.core.base_trial.TrialStatus[source]

Bases: int, enum.Enum

Enum of trial status.

General lifecycle of a trial is::

CANDIDATE --> STAGED --> RUNNING --> COMPLETED
          ------------->         --> FAILED (machine failure)
          -------------------------> ABANDONED (human-initiated action)

Trials may be abandoned at any time prior to completion or failure via human intervention. The difference between abandonment and failure is that the former is human-directed, while the latter is an internal failure state.

Additionally, when trials are deployed, they may be in an intermediate staged state (e.g. scheduled but waiting for resources) or immediately transition to running.
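The lifecycle above can be modeled as a small state machine. The sketch below uses a simplified stand-in enum and an illustrative transition table; it is not Ax’s actual transition logic:

```python
from enum import IntEnum

class Status(IntEnum):
    # Values mirror the TrialStatus members listed below.
    CANDIDATE = 0
    STAGED = 1
    FAILED = 2
    COMPLETED = 3
    RUNNING = 4
    ABANDONED = 5

    @property
    def is_terminal(self):
        # No transitions are allowed out of a terminal state.
        return self in (Status.COMPLETED, Status.FAILED, Status.ABANDONED)

# Illustrative transitions: staged deployments pass through STAGED,
# immediate deployments go straight to RUNNING; abandonment and failure
# can happen any time before completion.
ALLOWED = {
    Status.CANDIDATE: {Status.STAGED, Status.RUNNING, Status.ABANDONED, Status.FAILED},
    Status.STAGED: {Status.RUNNING, Status.ABANDONED, Status.FAILED},
    Status.RUNNING: {Status.COMPLETED, Status.FAILED, Status.ABANDONED},
}

def can_transition(src, dst):
    return dst in ALLOWED.get(src, set())

assert can_transition(Status.CANDIDATE, Status.RUNNING)
assert not can_transition(Status.COMPLETED, Status.RUNNING)
```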

ABANDONED = 5
CANDIDATE = 0
COMPLETED = 3
DISPATCHED = 6
FAILED = 2
RUNNING = 4
STAGED = 1
property expecting_data

True if trial is expecting data.

Return type

bool

property is_abandoned

True if this trial is an abandoned one.

Return type

bool

property is_candidate

True if this trial is a candidate.

Return type

bool

property is_completed

True if this trial is a successfully completed one.

Return type

bool

property is_deployed

True if trial has been deployed but not completed.

Return type

bool

property is_failed

True if this trial is a failed one.

Return type

bool

property is_running

True if this trial is a running one.

Return type

bool

property is_terminal

True if trial is in a terminal state (completed, failed, or abandoned).

Return type

bool

ax.core.base_trial.immutable_once_run(func)[source]

Decorator for methods that should raise an error if the trial is running or has ever been run, since such trials are immutable.

Return type

Callable

BatchTrial

class ax.core.batch_trial.AbandonedArm[source]

Bases: tuple

Tuple storing metadata of arm that has been abandoned within a BatchTrial.

property name

Alias for field number 0

property reason

Alias for field number 2

property time

Alias for field number 1

class ax.core.batch_trial.BatchTrial(experiment, generator_run=None, trial_type=None, optimize_for_power=False, ttl_seconds=None, index=None)[source]

Bases: ax.core.base_trial.BaseTrial

Batched trial that has multiple attached arms, meant to be deployed and evaluated together, and possibly arm weights, which are a measure of how much of the total resources allocated to evaluating a batch should go towards evaluating the specific arm. For instance, for field experiments the weights could describe the fraction of the total experiment population assigned to the different treatment arms. Interpretation of the weights is defined in Runner.

NOTE: A BatchTrial is not just a trial with many arms; it is a trial, for which it is important that the arms are evaluated simultaneously, e.g. in an A/B test where the evaluation results are subject to nonstationarity. For cases where multiple arms are evaluated separately and independently of each other, use multiple Trial objects with a single arm each.

Parameters
  • experiment (Experiment) – Experiment, to which this trial is attached

  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • optimize_for_power (Optional[bool]) – Whether to optimize the weights of arms in this trial such that the experiment’s power to detect effects of certain size is as high as possible. Refer to documentation of BatchTrial.set_status_quo_and_optimize_power for more detail.

  • ttl_seconds (Optional[int]) – If specified, the trial will be considered failed this many seconds after the time it was run, unless it is completed before then. Meant to detect ‘dead’ trials, for which the evaluation process might have crashed, and which should be considered failed once their ‘time to live’ has passed.

  • index (Optional[int]) – If specified, the trial’s index will be set accordingly. This should generally not be specified, as the index is determined automatically from the number of existing trials. It is only used when loading from storage.

property abandoned_arms

List of arms that have been abandoned within this trial

Return type

List[Arm]

property abandoned_arms_metadata
Return type

List[AbandonedArm]

add_arm(*args, **kwargs)
add_arms_and_weights(*args, **kwargs)
add_generator_run(*args, **kwargs)
property arm_weights

The set of arms and associated weights for the trial.

These are constructed by merging the arms and weights from each generator run that is attached to the trial.

Return type

MutableMapping[Arm, float]
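The merge can be sketched as follows: each attached generator run contributes its arms, with each run’s weights scaled by the weight the run was added with. The per-run normalization here is illustrative; Ax’s exact weighting scheme may differ:

```python
from collections import OrderedDict

def merged_arm_weights(generator_run_structs):
    # Each struct pairs a run's {arm: weight} mapping with the weight
    # the run was attached with; weights for an arm that appears in
    # several runs accumulate.
    merged = OrderedDict()
    for run_weights, struct_weight in generator_run_structs:
        total = sum(run_weights.values())
        for arm, w in run_weights.items():
            merged[arm] = merged.get(arm, 0.0) + struct_weight * w / total
    return merged

runs = [({"arm_a": 1.0, "arm_b": 1.0}, 1.0), ({"arm_b": 1.0}, 1.0)]
weights = merged_arm_weights(runs)
assert weights["arm_b"] > weights["arm_a"]  # arm_b appears in both runs
```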

property arms

All arms contained in the trial.

Return type

List[Arm]

property arms_by_name

Map from arm name to object for all arms in trial.

Return type

Dict[str, Arm]

clone()[source]

Clone the trial.

Return type

BatchTrial

Returns

A new instance of the trial.

property experiment

The experiment this batch belongs to.

Return type

Experiment

property generator_run_structs

List of generator run structs attached to this trial.

Struct holds generator_run object and the weight with which it was added.

Return type

List[GeneratorRunStruct]

property generator_runs

All generator runs associated with this trial.

Return type

List[GeneratorRun]

property index

The index of this batch within the experiment’s batch list.

Return type

int

property is_factorial

Return true if the trial’s arms are a factorial design with no linked factors.

Return type

bool

mark_arm_abandoned(arm_name, reason=None)[source]

Mark an arm as abandoned.

Usually done after deployment when one arm causes issues but user wants to continue running other arms in the batch.

Parameters
  • arm_name (str) – The name of the arm to abandon.

  • reason (Optional[str]) – The reason for abandoning the arm.

Return type

BatchTrial

Returns

The batch instance.

normalized_arm_weights(total=1, trunc_digits=None)[source]

Returns arms with a new set of weights normalized to the given total.

This method is useful for many runners where we need to normalize weights to a certain total without mutating the weights attached to a trial.

Parameters
  • total (float) – The total weight to which to normalize. Default is 1, in which case arm weights can be interpreted as probabilities.

  • trunc_digits (Optional[int]) – The number of digits to keep. If the resulting total weight is not equal to total, re-allocate weight in such a way to maintain relative weights as best as possible.

Return type

MutableMapping[Arm, float]

Returns

Mapping from arms to the new set of weights.
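The normalization itself is straightforward; the subtle part is truncation, after which the leftover mass must be re-allocated so the weights still sum to the requested total. A sketch that hands the leftover to the largest weight (Ax’s re-allocation may distribute it differently):

```python
def normalize_weights(weights, total=1.0, trunc_digits=None):
    s = sum(weights)
    normalized = [w * total / s for w in weights]
    if trunc_digits is not None:
        # Rounding can make the sum drift from `total`; give the
        # remainder to the largest weight to restore the invariant.
        normalized = [round(w, trunc_digits) for w in normalized]
        leftover = round(total - sum(normalized), trunc_digits + 1)
        i = normalized.index(max(normalized))
        normalized[i] = round(normalized[i] + leftover, trunc_digits + 1)
    return normalized

assert normalize_weights([1, 1, 2]) == [0.25, 0.25, 0.5]
assert abs(sum(normalize_weights([1, 1, 1], trunc_digits=2)) - 1.0) < 1e-9
```

With total=1 (the default), the normalized weights can be read as assignment probabilities.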

run()[source]

Deploys the trial according to the behavior on the runner.

The runner returns a run_metadata dict containing metadata of the deployment process, as well as a deployed_name for the trial within the system to which it was deployed. Both fields are set on the trial.

Return type

BatchTrial

Returns

The trial instance.

set_status_quo_and_optimize_power(*args, **kwargs)
set_status_quo_with_weight(*args, **kwargs)
property status_quo

The control arm for this batch.

Return type

Optional[Arm]

unset_status_quo()[source]

Set the status quo to None.

Return type

None

property weights

Weights corresponding to arms contained in the trial.

Return type

List[float]

class ax.core.batch_trial.GeneratorRunStruct[source]

Bases: tuple

Stores GeneratorRun object as well as the weight with which it was added.

property generator_run

Alias for field number 0

property weight

Alias for field number 1

Data

class ax.core.data.Data(df=None, description=None)[source]

Bases: ax.utils.common.equality.Base

Class storing data for an experiment.

The dataframe is retrieved via the df property. The data can be stored to an external store for future use by attaching it to an experiment using experiment.attach_data() (this requires a description to be set.)

df

DataFrame with underlying data, and required columns.

description

Human-readable description of data.

static column_data_types()[source]

Type specification for all supported columns.

Return type

Dict[str, Type]

property df
Return type

DataFrame

property df_hash

Compute hash of pandas DataFrame.

This first serializes the DataFrame and computes the md5 hash of the resulting string. Note that this may cause performance issues for very large DataFrames.

Returns

str: The hash of the DataFrame.

Return type

str

static from_evaluations(evaluations, trial_index, sample_sizes=None, start_time=None, end_time=None)[source]

Convert dict of evaluations to Ax data object.

Parameters
  • evaluations (Dict[str, Dict[str, Tuple[float, Optional[float]]]]) – Map from arm name to metric outcomes (itself a mapping of metric names to tuples of mean and optionally a SEM).

  • trial_index (int) – Trial index to which this data belongs.

  • sample_sizes (Optional[Dict[str, int]]) – Number of samples collected for each arm.

  • start_time (Optional[int]) – Optional start time of run of the trial that produced this data, in milliseconds.

  • end_time (Optional[int]) – Optional end time of run of the trial that produced this data, in milliseconds.

Return type

Data

Returns

Ax Data object.
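Under the hood this is a flattening step: the nested {arm: {metric: (mean, sem)}} dict becomes long-format rows, one per (arm, metric) pair, matching the required Data columns. A sketch using plain dicts instead of the pandas DataFrame Ax builds:

```python
def evaluations_to_rows(evaluations, trial_index):
    rows = []
    for arm_name, metrics in evaluations.items():
        for metric_name, (mean, sem) in metrics.items():
            # One long-format row per (arm, metric) pair.
            rows.append({
                "arm_name": arm_name,
                "metric_name": metric_name,
                "mean": mean,
                "sem": sem,
                "trial_index": trial_index,
            })
    return rows

rows = evaluations_to_rows({"0_0": {"accuracy": (0.91, 0.02)}}, trial_index=0)
assert rows[0]["metric_name"] == "accuracy"
```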

static from_fidelity_evaluations(evaluations, trial_index, sample_sizes=None, start_time=None, end_time=None)[source]

Convert dict of fidelity evaluations to Ax data object.

Parameters
  • evaluations (Dict[str, List[Tuple[Dict[str, Union[str, bool, float, int, None]], Dict[str, Tuple[float, Optional[float]]]]]]) – Map from arm name to list of (fidelity, metric outcomes) (where metric outcomes is itself a mapping of metric names to tuples of mean and SEM).

  • trial_index (int) – Trial index to which this data belongs.

  • sample_sizes (Optional[Dict[str, int]]) – Number of samples collected for each arm.

  • start_time (Optional[int]) – Optional start time of run of the trial that produced this data, in milliseconds.

  • end_time (Optional[int]) – Optional end time of run of the trial that produced this data, in milliseconds.

Return type

Data

Returns

Ax Data object.

static from_multiple_data(data)[source]
Return type

Data

static required_columns()[source]

Names of required columns.

Return type

Set[str]

ax.core.data.clone_without_metrics(data, excluded_metric_names)[source]

Returns a new data object where rows containing the metrics specified by excluded_metric_names are filtered out. Used to sanitize data before using it as training data for a model that requires data rectangularity.

Parameters
  • data (Data) – Original data to clone.

  • excluded_metric_names (Iterable[str]) – Metrics to avoid copying

Return type

Data

Returns

new version of the original data without specified metrics.

ax.core.data.custom_data_class(column_data_types=None, required_columns=None, time_columns=None)[source]

Creates a custom data class with additional columns.

All columns and their designations on the base data class are preserved; the inputs here are appended to the definitions on the base class.

Parameters
  • column_data_types (Optional[Dict[str, Type]]) – Dict from column name to column type.

  • required_columns (Optional[Set[str]]) – Set of additional columns required for this data object.

  • time_columns (Optional[Set[str]]) – Set of additional columns to cast to timestamp.

Return type

Type[Data]

Returns

New data subclass with amended column definitions.
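The pattern is a dynamically created subclass whose column-definition methods union the base definitions with the extras. A self-contained sketch using a stand-in base class (the real helper operates on ax.core.data.Data):

```python
def make_custom_data_class(base, extra_required=None, extra_types=None):
    extra_required = extra_required or set()
    extra_types = extra_types or {}

    class Custom(base):
        @staticmethod
        def required_columns():
            # Base columns are preserved; extras are appended.
            return base.required_columns() | extra_required

        @staticmethod
        def column_data_types():
            return {**base.column_data_types(), **extra_types}

    return Custom

class StubData:  # stand-in for ax.core.data.Data
    @staticmethod
    def required_columns():
        return {"arm_name", "metric_name", "mean"}

    @staticmethod
    def column_data_types():
        return {"arm_name": str, "metric_name": str, "mean": float}

FidelityData = make_custom_data_class(
    StubData, extra_required={"fidelity"}, extra_types={"fidelity": float}
)
assert "fidelity" in FidelityData.required_columns()
assert "mean" in FidelityData.required_columns()
```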

ax.core.data.set_single_trial(data)[source]

Returns a new Data object where we set all rows to have the same trial index (i.e. 0). This is meant to be used with our IVW transform, which will combine multiple observations of the same metric.

Return type

Data

Experiment

class ax.core.experiment.Experiment(search_space, name=None, optimization_config=None, tracking_metrics=None, runner=None, status_quo=None, description=None, is_test=False, experiment_type=None, properties=None)[source]

Bases: ax.utils.common.equality.Base

Base class for defining an experiment.

add_tracking_metric(metric)[source]

Add a new metric to the experiment.

Parameters

metric (Metric) – Metric to be added.

Return type

Experiment

add_tracking_metrics(metrics)[source]

Add a list of new metrics to the experiment.

If any of the metrics are already defined on the experiment, we raise an error and don’t add any of them to the experiment.

Parameters

metrics (List[Metric]) – Metrics to be added.

Return type

Experiment

property arms_by_name

The arms belonging to this experiment, by their name.

Return type

Dict[str, Arm]

property arms_by_signature

The arms belonging to this experiment, by their signature.

Return type

Dict[str, Arm]

attach_data(data, combine_with_last_data=False)[source]

Attach data to experiment. Stores data in experiment._data_by_trial, to be looked up via experiment.lookup_data_for_trial.

Parameters
  • data (Data) – Data object to store.

  • combine_with_last_data (bool) – By default, newly attached data is identified by its timestamp, and experiment.lookup_data_for_trial returns the data with the most recent timestamp. In some cases, however, the goal is to combine all data attached for a trial into a single Data object. To achieve that, every call to attach_data after the initial data is attached should set this to True. The newly attached data will then be appended to the existing data, rather than stored as a separate object, and lookup_data_for_trial will return the combined data object rather than just the most recently added data. This also validates that the newly added data does not contain observations for metrics that already have observations in the most recent stored data.

Return type

int

Returns

Timestamp of storage in millis.

property data_by_trial

Data stored on the experiment, indexed by trial index and storage time.

First key is trial index and second key is storage time in milliseconds. For a given trial, data is ordered by storage time, so first added data will appear first in the list.

Return type

Dict[int, OrderedDict]

property default_trial_type

Default trial type assigned to trials in this experiment.

In the base experiment class this is always None. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type

Optional[str]

property experiment_type

The type of the experiment.

Return type

Optional[str]

fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters
  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the experiment.

fetch_trials_data(trial_indices, metrics=None, **kwargs)[source]

Fetches data for specific trials on the experiment.

Parameters
  • trial_indices (Iterable[int]) – Indices of trials, for which to fetch data.

  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – Keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the specific trials on the experiment.

get_trials_by_indices(trial_indices)[source]

Grabs trials on this experiment by their indices.

Return type

List[BaseTrial]

property has_name

Return true if experiment’s name is not None.

Return type

bool

property immutable_search_space_and_opt_config

Boolean representing whether the search space and metrics on this experiment are immutable (by default they are not).

NOTE: For experiments with immutable search spaces and metrics, generator runs will not store copies of search space and metrics, which improves storage layer performance. Not keeping copies of those on generator runs also disables keeping track of changes to search space and metrics, thereby necessitating that those attributes be immutable on experiment.

Return type

bool

property is_simple_experiment

Whether this experiment is a regular Experiment or its subclass, SimpleExperiment.

lookup_data_for_trial(trial_index)[source]

Lookup stored data for a specific trial.

Returns latest data object, and its storage timestamp, present for this trial. Returns empty data and -1 if no data present.

Parameters

trial_index (int) – The index of the trial to lookup data for.

Return type

Tuple[Data, int]

Returns

The requested data object, and its storage timestamp in milliseconds.
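The storage scheme described here (trial index → storage timestamp → data, with lookup returning the most recent entry) can be sketched with plain containers; the class and method names below are hypothetical:

```python
from collections import OrderedDict
import time

class DataStore:
    def __init__(self):
        # trial index -> OrderedDict mapping storage millis -> data,
        # ordered by insertion (i.e. storage) time.
        self._data_by_trial = {}

    def attach(self, trial_index, data, ts=None):
        ts = ts if ts is not None else int(time.time() * 1000)
        self._data_by_trial.setdefault(trial_index, OrderedDict())[ts] = data
        return ts

    def lookup_latest(self, trial_index):
        # Mirrors lookup_data_for_trial: latest (data, ts), or (None, -1).
        by_ts = self._data_by_trial.get(trial_index)
        if not by_ts:
            return None, -1
        ts = next(reversed(by_ts))
        return by_ts[ts], ts

store = DataStore()
store.attach(0, "first", ts=1000)
store.attach(0, "second", ts=2000)
assert store.lookup_latest(0) == ("second", 2000)
assert store.lookup_latest(7) == (None, -1)
```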

lookup_data_for_ts(timestamp)[source]

Collect data for all trials stored at this timestamp.

Useful when many trials’ data was fetched and stored simultaneously and the user wants to retrieve the same collection of data later.

Can also be used to lookup specific data for a single trial when storage time is known.

Parameters

timestamp (int) – Timestamp in millis at which data was stored.

Return type

Data

Returns

Data object with all data stored at the timestamp.

property metrics

The metrics attached to the experiment.

Return type

Dict[str, Metric]

property name

Get experiment name. Throws if name is None.

Return type

str

new_batch_trial(generator_run=None, trial_type=None, optimize_for_power=False, ttl_seconds=None)[source]

Create a new batch trial associated with this experiment.

Parameters
  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • optimize_for_power (Optional[bool]) – Whether to optimize the weights of arms in this trial such that the experiment’s power to detect effects of certain size is as high as possible. Refer to documentation of BatchTrial.set_status_quo_and_optimize_power for more detail.

  • ttl_seconds (Optional[int]) – If specified, the trial will be considered failed this many seconds after the time it was run, unless it is completed before then. Meant to detect ‘dead’ trials, for which the evaluation process might have crashed, and which should be considered failed once their ‘time to live’ has passed.

Return type

BatchTrial

new_trial(generator_run=None, trial_type=None, ttl_seconds=None)[source]

Create a new trial associated with this experiment.

Parameters
  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. Trial has only one generator run (and thus arm) attached to it. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • ttl_seconds (Optional[int]) – If specified, the trial will be considered failed this many seconds after the time it was run, unless it is completed before then. Meant to detect ‘dead’ trials, for which the evaluation process might have crashed, and which should be considered failed once their ‘time to live’ has passed.

Return type

Trial

property num_abandoned_arms

How many arms attached to this experiment are abandoned.

Return type

int

property num_trials

How many trials are associated with this experiment.

Return type

int

property optimization_config

The experiment’s optimization config.

Return type

Optional[OptimizationConfig]

property parameters

The parameters in the experiment’s search space.

Return type

Dict[str, Parameter]

remove_tracking_metric(metric_name)[source]

Remove a metric that already exists on the experiment.

Parameters

metric_name (str) – Unique name of metric to remove.

Return type

Experiment

reset_runners(runner)[source]

Replace the runners of all candidate trials.

Parameters

runner (Runner) – New runner to replace with.

Return type

None

runner_for_trial(trial)[source]

The default runner to use for a given trial.

In the base experiment class, this is always the default experiment runner. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type

Optional[Runner]

property search_space

The search space for this experiment.

When setting a new search space, all parameter names and types must be preserved. However, if no trials have been created, all modifications are allowed.

Return type

SearchSpace

property status_quo

The existing arm that new arms will be compared against.

Return type

Optional[Arm]

property sum_trial_sizes

Sum of numbers of arms attached to each trial in this experiment.

Return type

int

supports_trial_type(trial_type)[source]

Whether this experiment allows trials of the given type.

The base experiment class only supports None. For experiments with multiple trial types, use the MultiTypeExperiment class.

Return type

bool

property time_created

Creation time of the experiment.

Return type

datetime

property trial_indices_by_status

Indices of trials associated with the experiment, grouped by trial status.

Return type

Dict[TrialStatus, Set[int]]

property trials

The trials associated with the experiment.

NOTE: If some trials on this experiment specify their TTL, RUNNING trials will be checked for whether their TTL elapsed during this call. Trials found past their TTL will be marked as FAILED.

Return type

Dict[int, BaseTrial]

property trials_by_status

Trials associated with the experiment, grouped by trial status.

Return type

Dict[TrialStatus, List[BaseTrial]]

property trials_expecting_data

The list of all trials for which data has arrived or is expected to arrive.

Type

List[BaseTrial]

Return type

List[BaseTrial]

update_tracking_metric(metric)[source]

Redefine a metric that already exists on the experiment.

Parameters

metric (Metric) – New metric definition.

Return type

Experiment

GeneratorRun

class ax.core.generator_run.ArmWeight[source]

Bases: tuple

NamedTuple for tying together arms and weights.

property arm

Alias for field number 0

property weight

Alias for field number 1

class ax.core.generator_run.GeneratorRun(arms, weights=None, optimization_config=None, search_space=None, model_predictions=None, best_arm_predictions=None, type=None, fit_time=None, gen_time=None, model_key=None, model_kwargs=None, bridge_kwargs=None, gen_metadata=None, model_state_after_gen=None, generation_step_index=None, candidate_metadata_by_arm_signature=None)[source]

Bases: ax.utils.common.equality.Base

An object that represents a single run of a generator.

This object is created each time the gen method of a generator is called. It stores the arms and (optionally) weights that were generated by the run. When we add a generator run to a trial, its arms and weights will be merged with those from previous generator runs that were already attached to the trial.

property arm_signatures

Returns signatures of arms generated by this run.

Return type

Set[str]

property arm_weights

Mapping from arms to weights (order matches order in arms property).

Return type

MutableMapping[Arm, float]

property arms

Returns arms generated by this run.

Return type

List[Arm]

property best_arm_predictions
Return type

Optional[Tuple[Arm, Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]]

property candidate_metadata_by_arm_signature

Retrieves model-produced candidate metadata as a mapping from arm name (for the arm the candidate became when added to experiment) to the metadata dict.

Return type

Optional[Dict[str, Optional[Dict[str, Any]]]]

clone()[source]

Return a deep copy of a GeneratorRun.

Return type

GeneratorRun

property fit_time
Return type

Optional[float]

property gen_metadata

Returns metadata generated by this run.

Return type

Optional[Dict[str, Any]]

property gen_time
Return type

Optional[float]

property generator_run_type

The type of the generator run.

Return type

Optional[str]

property index

The index of this generator run within a trial’s list of generator run structs. This field is set when the generator run is added to a trial.

Return type

Optional[int]

property model_predictions
Return type

Optional[Tuple[Dict[str, List[float]], Dict[str, Dict[str, List[float]]]]]

property model_predictions_by_arm
Return type

Optional[Dict[str, Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]

property optimization_config

The optimization config used during generation of this run.

Return type

Optional[OptimizationConfig]

property param_df

Constructs a Pandas dataframe with the parameter values for each arm.

Useful for inspecting the contents of a generator run.

Returns

a dataframe with the generator run’s arms.

Return type

pd.DataFrame

property search_space

The search space used during generation of this run.

Return type

Optional[SearchSpace]

split_by_arm(populate_all_fields=False)[source]

Return a list of generator runs, each with all the metadata of generator run, but only with one of its arms. Useful when splitting a single generator run into multiple 1-arm trials.

Parameters

populate_all_fields (bool) – By default, split_by_arm only sets some fields on the new, ‘split’ generator runs, in order to avoid creating multiple large objects and increasing the size of an experiment object. To force-populate all fields of the ‘split’ generator runs, set ‘populate_all_fields’ to True.

Return type

List[GeneratorRun]

property time_created

Creation time of the generator run.

Return type

datetime

property weights

Returns weights associated with arms generated by this run.

Return type

List[float]

class ax.core.generator_run.GeneratorRunType[source]

Bases: enum.Enum

Class for enumerating generator run types.

MANUAL = 1
STATUS_QUO = 0
ax.core.generator_run.extract_arm_predictions(model_predictions, arm_idx)[source]

Extract a particular arm from model_predictions.

Parameters
Return type

Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]

Returns

(mean, cov) for specified arm.

Metric

class ax.core.metric.Metric(name, lower_is_better=None, properties=None)[source]

Bases: ax.utils.common.equality.Base

Base class for representing metrics.

The fetch_trial_data method is the essential method to override when subclassing; it specifies how to retrieve data for this metric for a given trial.

A Metric must return a Data object, which requires (at minimum) the following:

https://ax.dev/api/_modules/ax/core/data.html#Data.required_columns
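As a sketch of the override pattern, with plain-Python stand-ins for the Ax classes (`BaseMetric`, `BoothMetric`, and the dict-shaped trial below are illustrative, not Ax APIs; a real subclass would return a `Data` object built from rows like these):

```python
# Sketch of the fetch_trial_data override pattern. A subclass computes
# one row per arm, with the columns a Data object minimally requires:
# arm_name, metric_name, mean, sem, trial_index.

class BaseMetric:
    def __init__(self, name, lower_is_better=None):
        self.name = name
        self.lower_is_better = lower_is_better

    def fetch_trial_data(self, trial, **kwargs):
        raise NotImplementedError

class BoothMetric(BaseMetric):
    """Evaluates the Booth function on each arm's (x1, x2) parameters."""

    def fetch_trial_data(self, trial, **kwargs):
        rows = []
        for arm_name, params in trial["arms"].items():
            x1, x2 = params["x1"], params["x2"]
            value = (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2
            rows.append({
                "arm_name": arm_name,
                "metric_name": self.name,
                "mean": value,
                "sem": 0.0,               # deterministic evaluation
                "trial_index": trial["index"],
            })
        return rows

trial = {"index": 0, "arms": {"0_0": {"x1": 1.0, "x2": 3.0}}}
# Booth(1, 3) = (1 + 6 - 7)^2 + (2 + 3 - 5)^2 = 0, the global minimum.
print(BoothMetric("booth").fetch_trial_data(trial))
```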

lower_is_better

Flag for metrics which should be minimized.

properties

Properties specific to a particular metric.

clone()[source]

Create a copy of this Metric.

Return type

Metric

classmethod deserialize_init_args(args)[source]

Given a dictionary, extract the properties needed to initialize the metric. Used for storage.

Return type

Dict[str, Any]

fetch_experiment_data(experiment, **kwargs)[source]

Fetch this metric’s data for an experiment.

Default behavior is to fetch data from all trials expecting data and concatenate the results.

Return type

Data

classmethod fetch_experiment_data_multi(experiment, metrics, trials=None, **kwargs)[source]

Fetch multiple metrics data for an experiment.

Default behavior calls fetch_trial_data_multi for each trial. Subclasses should override to batch data computation across trials + metrics.

Return type

Data

property fetch_multi_group_by_metric

Metric class, with which to group this metric in Experiment._metrics_by_class, which is used to combine metrics on experiment into groups and then fetch their data via Metric.fetch_trial_data_multi for each group.

NOTE: By default, this property will just return the class on which it is defined; however, in some cases it is useful to group metrics by their superclass, in which case this property should return that superclass.

Return type

Type[Metric]

fetch_trial_data(trial, **kwargs)[source]

Fetch data for one trial.

Return type

Data

classmethod fetch_trial_data_multi(trial, metrics, **kwargs)[source]

Fetch multiple metrics data for one trial.

Default behavior calls fetch_trial_data for each metric. Subclasses should override this to batch data computation across multiple metrics.

Return type

Data

property name

Get name of metric.

Return type

str

classmethod serialize_init_args(metric)[source]

Serialize the properties needed to initialize the metric. Used for storage.

Return type

Dict[str, Any]

MultiTypeExperiment

class ax.core.multi_type_experiment.MultiTypeExperiment(name, search_space, default_trial_type, default_runner, optimization_config=None, status_quo=None, description=None, is_test=False, experiment_type=None, properties=None)[source]

Bases: ax.core.experiment.Experiment

Class for experiment with multiple trial types.

A canonical use case for this is tuning a large production system with limited evaluation budget and a simulator which approximates evaluations on the main system. Trial deployment and data fetching is separate for the two systems, but the final data is combined and fed into multi-task models.

See the Multi-Task Modeling tutorial for more details.

name

Name of the experiment.

description

Description of the experiment.

add_tracking_metric(metric, trial_type, canonical_name=None)[source]

Add a new metric to the experiment.

Parameters
  • metric (Metric) – The metric to add.

  • trial_type (str) – The trial type for which this metric is used.

  • canonical_name (Optional[str]) – The default metric for which this metric is a proxy.

Return type

MultiTypeExperiment

add_trial_type(trial_type, runner)[source]

Add a new trial_type to be supported by this experiment.

Parameters
  • trial_type (str) – The new trial_type to be added.

  • runner (Runner) – The default runner for trials of this type.

Return type

MultiTypeExperiment

property default_trial_type

Default trial type assigned to trials in this experiment.

Return type

Optional[str]

property default_trials

Return the indices for trials of the default type.

Return type

Set[int]

fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters
  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the experiment.

property metric_to_trial_type

Map metrics to trial types.

Adds the default trial type for optimization-config metrics to the custom-defined trial types.

Return type

Dict[str, str]

remove_tracking_metric(metric_name)[source]

Remove a metric that already exists on the experiment.

Parameters

metric_name (str) – Unique name of metric to remove.

Return type

MultiTypeExperiment

reset_runners(runner)[source]

Replace all candidate trials runners.

Parameters

runner (Runner) – New runner to replace with.

Return type

None

runner_for_trial(trial)[source]

The default runner to use for a given trial.

Looks up the appropriate runner for this trial type in the trial_type_to_runner.

Return type

Optional[Runner]

supports_trial_type(trial_type)[source]

Whether this experiment allows trials of the given type.

Only trial types defined in the trial_type_to_runner are allowed.

Return type

bool

update_runner(trial_type, runner)[source]

Update the default runner for an existing trial_type.

Parameters
  • trial_type (str) – The existing trial_type whose runner should be updated.

  • runner (Runner) – The new runner for trials of this type.

Return type

MultiTypeExperiment

update_tracking_metric(metric, trial_type, canonical_name=None)[source]

Update an existing metric on the experiment.

Parameters
  • metric (Metric) – The updated metric definition.

  • trial_type (str) – The trial type for which this metric is used.

  • canonical_name (Optional[str]) – The default metric for which this metric is a proxy.

Return type

MultiTypeExperiment

Objective

class ax.core.objective.MultiObjective(metrics, minimize=False, **extra_kwargs)[source]

Bases: ax.core.objective.Objective

Class for an objective composed of multiple component objectives.

The Acquisition function determines how the objectives are weighted.

metrics

List of metrics.

clone()[source]

Create a copy of the objective.

Return type

Objective

get_unconstrainable_metrics()[source]

Return a list of metrics that are incompatible with OutcomeConstraints.

Return type

List[Metric]

property metric

Override base method to error.

Return type

Metric

property metric_weights

Get the objective metrics and weights.

Return type

Iterable[Tuple[Metric, float]]

property metrics

Get the objective metrics.

Return type

List[Metric]

weights: List[float] = None
class ax.core.objective.Objective(metric, minimize=None)[source]

Bases: ax.utils.common.equality.Base

Base class for representing an objective.

minimize

If True, minimize metric.

clone()[source]

Create a copy of the objective.

Return type

Objective

get_unconstrainable_metrics()[source]

Return a list of metrics that are incompatible with OutcomeConstraints.

Return type

List[Metric]

property metric

Get the objective metric.

Return type

Metric

property metrics

Get a list of objective metrics.

Return type

List[Metric]

class ax.core.objective.ScalarizedObjective(metrics, weights=None, minimize=False)[source]

Bases: ax.core.objective.MultiObjective

Class for an objective composed of a linear scalarization of metrics.

metrics

List of metrics.

weights

Weights for scalarization; default to 1.

clone()[source]

Create a copy of the objective.

Return type

Objective

weights: List[float] = None
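The scalarization itself is a weighted sum of the component metric values; a sketch under illustrative metric names (the `scalarize` helper is not part of Ax):

```python
# Sketch of the linear scalarization a ScalarizedObjective represents:
# a weighted sum over component metric values, with weights defaulting
# to 1 per metric.

def scalarize(metric_values, weights=None):
    """Return sum_i w_i * v_i over the given {metric: value} map."""
    values = list(metric_values.values())
    if weights is None:
        weights = [1.0] * len(values)   # default weight of 1 per metric
    return sum(w * v for w, v in zip(weights, values))

values = {"latency": 2.0, "error_rate": 0.5}
print(scalarize(values))               # 2.5 (unit weights)
print(scalarize(values, [1.0, -4.0]))  # 0.0 (trading off the two metrics)
```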

Observation

class ax.core.observation.Observation(features, data, arm_name=None)[source]

Bases: ax.utils.common.equality.Base

Represents an observation.

A set of features (ObservationFeatures) and corresponding measurements (ObservationData). Optionally, an arm name associated with the features.

features
Type

ObservationFeatures

data
Type

ObservationData

arm_name
Type

Optional[str]

class ax.core.observation.ObservationData(metric_names, means, covariance)[source]

Bases: ax.utils.common.equality.Base

Outcomes observed at a point.

The “point” corresponding to this ObservationData would be an ObservationFeatures object.

metric_names

A list of k metric names that were observed

means

a k-array of observed means

covariance

a (k x k) array of observed covariances
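A sketch of the expected shapes for k = 2 metrics, including the diagonal covariance produced when per-metric variances are observed independently (the helper and sample numbers are illustrative):

```python
# Sketch of the shapes ObservationData expects: k metric names, a
# k-vector of means, and a k x k covariance matrix. A diagonal
# covariance (as built by observations_from_data) has each metric's
# variance on the diagonal and zeros elsewhere.

def diagonal_covariance(variances):
    """Build a k x k matrix with the given variances on the diagonal."""
    k = len(variances)
    return [[variances[i] if i == j else 0.0 for j in range(k)]
            for i in range(k)]

metric_names = ["accuracy", "latency"]
means = [0.92, 130.0]
covariance = diagonal_covariance([0.0004, 25.0])
print(covariance)  # [[0.0004, 0.0], [0.0, 25.0]]
```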

class ax.core.observation.ObservationFeatures(parameters, trial_index=None, start_time=None, end_time=None, random_split=None, metadata=None)[source]

Bases: ax.utils.common.equality.Base

The features of an observation.

These include both the arm parameters and the features of the observation found in the Data object: trial index, times, and random split. This object is meant to contain everything needed to represent this observation in a model feature space. It is essentially a row of Data joined with the arm parameters.

An ObservationFeatures object would typically have a corresponding ObservationData object that provides the observed outcomes.

parameters

arm parameters

trial_index

trial index

start_time

batch start time

end_time

batch end time

random_split

random split

static from_arm(arm, trial_index=None, start_time=None, end_time=None, random_split=None, metadata=None)[source]

Convert an Arm to an ObservationFeatures, including additional data as specified.

Return type

ObservationFeatures

update_features(new_features)[source]

Updates the existing ObservationFeatures with the fields of the input.

Adds all of the new parameters to the existing parameters and overwrites any other fields that are not None on the new input features.

Return type

ObservationFeatures

ax.core.observation.observations_from_data(experiment, data)[source]

Convert Data to observations.

Converts a Data object to a list of Observation objects. Pulls arm parameters from the experiment. Overrides fidelity parameters in the arm with those found in the Data object.

Uses a diagonal covariance matrix across metric_names.

Parameters
  • experiment (Experiment) – Experiment with arm parameters.

  • data (Data) – Data of observations.

Return type

List[Observation]

Returns

List of Observation objects.

ax.core.observation.separate_observations(observations, copy=False)[source]

Split out observations into features+data.

Parameters

observations (List[Observation]) – input observations

Returns

Tuple of (observation_features, observation_data).

Return type

Tuple[List[ObservationFeatures], List[ObservationData]]

OptimizationConfig

class ax.core.optimization_config.OptimizationConfig(objective, outcome_constraints=None)[source]

Bases: ax.utils.common.equality.Base

An optimization configuration, which comprises an objective and outcome constraints.

There is no minimum or maximum number of outcome constraints, but an individual metric can have at most two constraints, which is how we represent metrics with both upper and lower bounds.

clone()[source]

Make a copy of this optimization config.

Return type

OptimizationConfig

property metrics
Return type

Dict[str, Metric]

property objective

Get objective.

Return type

Objective

property outcome_constraints

Get outcome constraints.

Return type

List[OutcomeConstraint]

OutcomeConstraint

class ax.core.outcome_constraint.OutcomeConstraint(metric, op, bound, relative=True)[source]

Bases: ax.utils.common.equality.Base

Base class for representing outcome constraints.

Outcome constraints may be of the form metric >= bound or metric <= bound, where the bound can be expressed as an absolute measurement or relative to the status quo (if applicable).

metric

Metric to constrain.

op

Specifies whether metric should be greater or equal to, or less than or equal to, some bound.

bound

The bound in the constraint.

relative

Whether the bound is expressed on an absolute or relative scale. If relative, the bound is the acceptable percent change from the status quo.
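A sketch of these semantics (the `satisfies` helper is illustrative, not Ax's constraint-evaluation code):

```python
# Sketch of outcome-constraint semantics: metric <= bound (or >=),
# where a relative bound is interpreted as a percent change from the
# status quo value.

def satisfies(value, bound, leq=True, relative=False, status_quo=None):
    """Check metric <= bound (or >=); relative bounds are % change."""
    if relative:
        # e.g. bound=-5 with leq=False means "at most 5% below status quo"
        bound = status_quo * (1.0 + bound / 100.0)
    return value <= bound if leq else value >= bound

# Absolute: error_rate <= 0.1
print(satisfies(0.08, 0.1))                          # True
# Relative: revenue >= -5% vs. a status quo of 200 (i.e. >= 190)
print(satisfies(195.0, -5.0, leq=False, relative=True,
                status_quo=200.0))                   # True
```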

clone()[source]

Create a copy of this OutcomeConstraint.

Return type

OutcomeConstraint

property metric
Return type

Metric

property op
Return type

ComparisonOp

Parameter

class ax.core.parameter.ChoiceParameter(name, parameter_type, values, is_ordered=False, is_task=False, is_fidelity=False, target_value=None)[source]

Bases: ax.core.parameter.Parameter

Parameter object that specifies a discrete set of values.

add_values(values)[source]

Add input list to the set of allowed values for parameter.

Cast all input values to the parameter type.

Parameters

values (List[Union[str, bool, float, int, None]]) – Values being added to the allowed list.

Return type

ChoiceParameter

clone()[source]
Return type

ChoiceParameter

property is_ordered
Return type

bool

property is_task
Return type

bool

property name
Return type

str

property parameter_type
Return type

ParameterType

set_values(values)[source]

Set the list of allowed values for parameter.

Cast all input values to the parameter type.

Parameters

values (List[Union[str, bool, float, int, None]]) – New list of allowed values.

Return type

ChoiceParameter

validate(value)[source]

Checks that the input is in the list of allowed values.

Parameters

value (Union[str, bool, float, int, None]) – Value being checked.

Return type

bool

Returns

True if valid, False otherwise.

property values
Return type

List[Union[str, bool, float, int, None]]

class ax.core.parameter.FixedParameter(name, parameter_type, value, is_fidelity=False, target_value=None)[source]

Bases: ax.core.parameter.Parameter

Parameter object that specifies a single fixed value.

clone()[source]
Return type

FixedParameter

property name
Return type

str

property parameter_type
Return type

ParameterType

set_value(value)[source]
Return type

FixedParameter

validate(value)[source]

Checks that the input is equal to the fixed value.

Parameters

value (Union[str, bool, float, int, None]) – Value being checked.

Return type

bool

Returns

True if valid, False otherwise.

property value
Return type

Union[str, bool, float, int, None]

class ax.core.parameter.Parameter[source]

Bases: ax.utils.common.equality.Base

cast(value)[source]
Return type

Union[str, bool, float, int, None]

clone()[source]
Return type

Parameter

property is_fidelity
Return type

bool

property is_numeric
Return type

bool

is_valid_type(value)[source]

Whether a given value’s type is allowed by this parameter.

Return type

bool

abstract property name
Return type

str

abstract property parameter_type
Return type

ParameterType

property python_type

The python type for the corresponding ParameterType enum.

Used primarily for casting values of unknown type to conform to that of the parameter.

Return type

Union[Type[int], Type[float], Type[str], Type[bool]]

property target_value
Return type

Union[str, bool, float, int, None]

abstract validate(value)[source]
Return type

bool

class ax.core.parameter.ParameterType[source]

Bases: enum.Enum

An enumeration of allowed parameter types.

BOOL: int = 0
FLOAT: int = 2
INT: int = 1
STRING: int = 3
property is_numeric
Return type

bool

class ax.core.parameter.RangeParameter(name, parameter_type, lower, upper, log_scale=False, digits=None, is_fidelity=False, target_value=None)[source]

Bases: ax.core.parameter.Parameter

Parameter object that specifies a continuous numerical range of values.

cast(value)[source]
Return type

Union[str, bool, float, int, None]

clone()[source]
Return type

RangeParameter

property digits

Number of digits to round values to for float type.

Upper and lower bound are re-cast after this property is changed.

Return type

Optional[int]

is_valid_type(value)[source]

Same as default except allows floats whose value is an int for Int parameters.

Return type

bool

property log_scale

Whether to sample in log space when drawing random values of the parameter.

Return type

bool

property lower

Lower bound of the parameter range.

Value is cast to parameter type upon set and also validated to ensure the bound is strictly less than upper bound.

Return type

float

property name
Return type

str

property parameter_type
Return type

ParameterType

set_digits(digits)[source]
Return type

RangeParameter

set_log_scale(log_scale)[source]
Return type

RangeParameter

update_range(lower=None, upper=None)[source]

Set the range to the given values.

If lower or upper is not provided, it will be left at its current value.

Parameters
Return type

RangeParameter

property upper

Upper bound of the parameter range.

Value is cast to parameter type upon set and also validated to ensure the bound is strictly greater than lower bound.

Return type

float

validate(value)[source]

Returns True if input is a valid value for the parameter.

Checks that value is of the right type and within the valid range for the parameter. Returns False if value is None.

Parameters

value (Union[str, bool, float, int, None]) – Value being checked.

Return type

bool

Returns

True if valid, False otherwise.

ParameterConstraint

class ax.core.parameter_constraint.OrderConstraint(lower_parameter, upper_parameter)[source]

Bases: ax.core.parameter_constraint.ParameterConstraint

Constraint object for specifying one parameter to be smaller than another.

clone()[source]

Clone.

Return type

OrderConstraint

clone_with_transformed_parameters(transformed_parameters)[source]

Clone, but replace parameters with transformed versions.

Return type

OrderConstraint

property constraint_dict

Weights on parameters for linear constraint representation.

Return type

Dict[str, float]

property lower_parameter

Parameter with lower value.

Return type

Parameter

property parameters

Parameters.

Return type

List[Parameter]

property upper_parameter

Parameter with higher value.

Return type

Parameter

class ax.core.parameter_constraint.ParameterConstraint(constraint_dict, bound)[source]

Bases: ax.utils.common.equality.Base

Base class for linear parameter constraints.

Constraints are expressed using a map from parameter name to weight followed by a bound.

The constraint is satisfied if w * v <= b where:

w is the vector of parameter weights. v is a vector of parameter values. b is the specified bound. * is the dot product operator.
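A sketch of this check in plain Python, including how an OrderConstraint such as x1 <= x2 is encoded as a linear constraint (the `check` helper is illustrative):

```python
# Sketch of ParameterConstraint.check: the constraint is satisfied
# when the weighted sum of parameter values is at most the bound.

def check(constraint_dict, bound, parameter_dict):
    """Return True iff sum(w_p * v_p) <= bound."""
    total = sum(weight * parameter_dict[name]
                for name, weight in constraint_dict.items())
    return total <= bound

# Encode "x1 <= x2" (an OrderConstraint) as 1*x1 + (-1)*x2 <= 0.
order = {"x1": 1.0, "x2": -1.0}
print(check(order, 0.0, {"x1": 0.2, "x2": 0.5}))  # True
print(check(order, 0.0, {"x1": 0.7, "x2": 0.5}))  # False
```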

property bound

Get bound of the inequality of the constraint.

Return type

float

check(parameter_dict)[source]

Whether or not the set of parameter values satisfies the constraint.

Does a weighted sum of the parameter values based on the constraint_dict and checks that the sum is less than the bound.

Parameters

parameter_dict (Dict[str, Union[int, float]]) – Map from parameter name to parameter value.

Return type

bool

Returns

Whether the constraint is satisfied.

clone()[source]

Clone.

Return type

ParameterConstraint

clone_with_transformed_parameters(transformed_parameters)[source]

Clone, but replace parameters with transformed versions.

Return type

ParameterConstraint

property constraint_dict

Get mapping from parameter names to weights.

Return type

Dict[str, float]

class ax.core.parameter_constraint.SumConstraint(parameters, is_upper_bound, bound)[source]

Bases: ax.core.parameter_constraint.ParameterConstraint

Constraint on the sum of parameters being greater or less than a bound.

clone()[source]

Clone.

To use the same constraint, we need to reconstruct the original bound. We do this by re-applying the original bound weighting.

Return type

SumConstraint

clone_with_transformed_parameters(transformed_parameters)[source]

Clone, but replace parameters with transformed versions.

Return type

SumConstraint

property constraint_dict

Weights on parameters for linear constraint representation.

Return type

Dict[str, float]

property op

Whether the sum is constrained by a <= or >= inequality.

Return type

ComparisonOp

property parameters

Parameters.

Return type

List[Parameter]

ax.core.parameter_constraint.validate_constraint_parameters(parameters)[source]

Basic validation of parameters used in a constraint.

Parameters

parameters (List[Parameter]) – Parameters used in constraint.

Raises

ValueError if the parameters are not valid for use.

Return type

None

Runner

class ax.core.runner.Runner[source]

Bases: ax.utils.common.equality.Base, abc.ABC

Abstract base class for custom runner classes.

classmethod deserialize_init_args(args)[source]

Given a dictionary, deserialize the properties needed to initialize the runner. Used for storage.

Return type

Dict[str, Any]

abstract run(trial)[source]

Deploys a trial based on custom runner subclass implementation.

Parameters

trial (BaseTrial) – The trial to deploy.

Return type

Dict[str, Any]

Returns

Dict of run metadata from the deployment process.

classmethod serialize_init_args(runner)[source]

Serialize the properties needed to initialize the runner. Used for storage.

Return type

Dict[str, Any]

property staging_required

Whether the trial goes to staged or running state once deployed.

Return type

bool

stop(trial)[source]

Stop a trial based on custom runner subclass implementation.

Optional to implement.

Parameters

trial (BaseTrial) – The trial to stop.

Return type

None

SearchSpace

class ax.core.search_space.SearchSpace(parameters, parameter_constraints=None)[source]

Bases: ax.utils.common.equality.Base

Base class for SearchSpace objects.

Contains a set of Parameter objects, each of which have a name, type, and set of valid values. The search space also contains a set of ParameterConstraint objects, which can be used to define restrictions across parameters (e.g. p_a < p_b).

add_parameter(parameter)[source]
Return type

None

add_parameter_constraints(parameter_constraints)[source]
Return type

None

cast_arm(arm)[source]

Cast parameterization of given arm to the types in this SearchSpace.

For each parameter in the given arm, cast it to the proper type specified in this search space. Throws if there is a mismatch in parameter names. This is mostly useful for int/float, which users can be sloppy with when writing parameterizations by hand.

Parameters

arm (Arm) – Arm to cast.

Return type

Arm

Returns

New casted arm.

check_membership(parameterization, raise_error=False)[source]

Whether the given parameterization belongs in the search space.

Checks that the given parameter values have the same name/type as search space parameters, are contained in the search space domain, and satisfy the parameter constraints.

Parameters
  • parameterization (Dict[str, Union[str, bool, float, int, None]]) – Dict from parameter name to value to validate.

  • raise_error (bool) – If true and the parameterization does not belong, raises an error with a detailed explanation of why.

Return type

bool

Returns

Whether the parameterization is contained in the search space.
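A sketch of the three checks combined, for a search space of range parameters plus one linear constraint (the helper and its input shapes are illustrative, not the Ax API):

```python
# Sketch of what check_membership verifies for a parameterization:
# matching parameter names, in-domain values, and satisfied linear
# parameter constraints.

def check_membership(ranges, constraints, parameterization):
    """ranges: {name: (lower, upper)}; constraints: [(weights, bound)]."""
    if set(parameterization) != set(ranges):
        return False                       # name mismatch
    for name, value in parameterization.items():
        lower, upper = ranges[name]
        if not (lower <= value <= upper):
            return False                   # out of domain
    for weights, bound in constraints:
        total = sum(w * parameterization[n] for n, w in weights.items())
        if total > bound:
            return False                   # constraint violated
    return True

ranges = {"x1": (0.0, 1.0), "x2": (0.0, 1.0)}
constraints = [({"x1": 1.0, "x2": 1.0}, 1.0)]   # x1 + x2 <= 1
print(check_membership(ranges, constraints, {"x1": 0.3, "x2": 0.4}))  # True
print(check_membership(ranges, constraints, {"x1": 0.8, "x2": 0.8}))  # False
```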

check_types(parameterization, allow_none=True, raise_error=False)[source]

Checks that the given parameterization’s types match the search space.

Checks that the names of the parameterization match those specified in the search space, and the given values are of the correct type.

Parameters
  • parameterization (Dict[str, Union[str, bool, float, int, None]]) – Dict from parameter name to value to validate.

  • allow_none (bool) – Whether None is a valid parameter value.

  • raise_error (bool) – If true and parameterization does not belong, raises an error with detailed explanation of why.

Return type

bool

Returns

Whether the parameterization has valid types.

clone()[source]
Return type

SearchSpace

construct_arm(parameters=None, name=None)[source]

Construct a new arm using the given parameters and name. Any missing parameters fall back to the experiment defaults, represented as None.

Return type

Arm

out_of_design_arm()[source]

Create a default out-of-design arm.

An out-of-design arm contains values for some parameters that are outside of the search space. In the modeling conversion, these parameters are all stripped down to an empty dictionary, since the point is already outside of the modeled space.

Return type

Arm

Returns

New arm w/ null parameter values.

property parameter_constraints
Return type

List[ParameterConstraint]

property parameters
Return type

Dict[str, Parameter]

set_parameter_constraints(parameter_constraints)[source]
Return type

None

property tunable_parameters
Return type

Dict[str, Parameter]

update_parameter(parameter)[source]
Return type

None

SimpleExperiment

class ax.core.simple_experiment.SimpleExperiment(search_space, name=None, objective_name=None, evaluation_function=<function unimplemented_evaluation_function>, minimize=False, outcome_constraints=None, status_quo=None, properties=None)[source]

Bases: ax.core.experiment.Experiment

Simplified experiment class with defaults.

Parameters
add_tracking_metric(metric)[source]

Add a new metric to the experiment.

Parameters

metric (Metric) – Metric to be added.

Return type

SimpleExperiment

eval()[source]

Evaluate all arms in the experiment with the evaluation function passed as argument to this SimpleExperiment.

Return type

Data

eval_trial(trial)[source]

Evaluate trial arms with the evaluation function of this experiment.

Parameters

trial (BaseTrial) – trial, whose arms to evaluate.

Return type

Data

property evaluation_function

Get the evaluation function.

Return type

Callable[[Dict[str, Union[str, bool, float, int, None]], Optional[float]], Union[Dict[str, Tuple[float, Optional[float]]], Tuple[float, Optional[float]], float, List[Tuple[Dict[str, Union[str, bool, float, int, None]], Dict[str, Tuple[float, Optional[float]]]]]]]

evaluation_function_outer(parameterization, weight=None)[source]
Return type

Dict[str, Tuple[float, Optional[float]]]

fetch_data(metrics=None, **kwargs)[source]

Fetches data for all metrics and trials on this experiment.

Parameters
  • metrics (Optional[List[Metric]]) – If provided, fetch data for these metrics instead of the ones defined on the experiment.

  • kwargs (Any) – keyword args to pass to underlying metrics’ fetch data functions.

Return type

Data

Returns

Data for the experiment.

property has_evaluation_function

Whether this SimpleExperiment has a valid evaluation function attached.

Return type

bool

property is_simple_experiment

Whether this experiment is a SimpleExperiment, as opposed to a regular Experiment.

update_tracking_metric(metric)[source]

Redefine a metric that already exists on the experiment.

Parameters

metric (Metric) – New metric definition.

Return type

SimpleExperiment

ax.core.simple_experiment.unimplemented_evaluation_function(parameterization, weight=None)[source]

Default evaluation function used if none is provided during initialization. The evaluation function must be manually set before use.

Return type

Union[Dict[str, Tuple[float, Optional[float]]], Tuple[float, Optional[float]], float, List[Tuple[Dict[str, Union[str, bool, float, int, None]], Dict[str, Tuple[float, Optional[float]]]]]]
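The return type above admits several forms for an evaluation result; a sketch of coercing the simpler forms into the dictionary form (the `normalize_result` helper and the default objective name are illustrative; the list-of-tuples form is omitted):

```python
# Sketch of the evaluation-function return forms SimpleExperiment
# accepts, normalized to {metric_name: (mean, sem)}.

def normalize_result(result, objective_name="objective"):
    """Coerce a float, (mean, sem) tuple, or dict into dict form."""
    if isinstance(result, dict):
        return result
    if isinstance(result, tuple):
        return {objective_name: result}
    # bare float: mean with unknown SEM (None)
    return {objective_name: (float(result), None)}

print(normalize_result(3.7))
# {'objective': (3.7, None)}
print(normalize_result((3.7, 0.1)))
# {'objective': (3.7, 0.1)}
print(normalize_result({"a": (1.0, 0.0), "b": (2.0, 0.0)}))
# {'a': (1.0, 0.0), 'b': (2.0, 0.0)}
```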

Trial

class ax.core.trial.Trial(experiment, generator_run=None, trial_type=None, ttl_seconds=None, index=None)[source]

Bases: ax.core.base_trial.BaseTrial

Trial that only has one attached arm and no arm weights.

Parameters
  • experiment (Experiment) – Experiment, to which this trial is attached.

  • generator_run (Optional[GeneratorRun]) – GeneratorRun, associated with this trial. A Trial has only one generator run (of just one arm) attached to it. This can also be set later through add_arm or add_generator_run, but a trial’s associated generator run is immutable once set.

  • trial_type (Optional[str]) – Type of this trial, if used in MultiTypeExperiment.

  • ttl_seconds (Optional[int]) – If specified, trials will be considered failed after this many seconds since the time the trial was ran, unless the trial is completed before then. Meant to be used to detect ‘dead’ trials, for which the evaluation process might have crashed etc., and which should be considered failed after their ‘time to live’ has passed.

  • index (Optional[int]) – If specified, the trial’s index will be set accordingly. This should generally not be specified, as the index will be automatically determined based on the number of existing trials. This is only used for the purpose of loading from storage.

property abandoned_arms

Abandoned arms attached to this trial.

Return type

List[Arm]

add_arm(*args, **kwargs)
add_generator_run(*args, **kwargs)
property arm

The arm associated with this trial.

Return type

Optional[Arm]

property arms

All arms attached to this trial.

Returns

List of the single arm attached to this trial if there is one, else None.

Return type

List[Arm]

property arms_by_name

Dictionary of all arms attached to this trial with their names as keys.

Returns

Dictionary of the single arm name to arm if one is attached to this trial, else None.

Return type

Dict[str, Arm]

property generator_run

Generator run attached to this trial.

Return type

Optional[GeneratorRun]

property generator_runs

All generator runs associated with this trial.

Return type

List[GeneratorRun]

get_metric_mean(metric_name)[source]

Metric mean for the arm attached to this trial, retrieved from the latest data available for the metric for the trial.

Return type

float

property objective_mean

Objective mean for the arm attached to this trial, retrieved from the latest data available for the objective for the trial.

Note: the retrieved objective is the experiment-level objective at the time of the call to objective_mean, which is not necessarily the objective that was set at the time the trial was created or ran.

Return type

float

Core Types

class ax.core.types.ComparisonOp[source]

Bases: enum.Enum

Class for enumerating comparison operations.

GEQ: int = 0
LEQ: int = 1
ax.core.types.merge_model_predict(predict, predict_append)[source]

Append model predictions to an existing set of model predictions.

TModelPredict is of the form:

({metric_name: [mean1, mean2, …]}, {metric_name: {metric_name: [var1, var2, …]}})

This will append the predictions in predict_append to those in predict.

Parameters
Return type

Tuple[Dict[str, List[float]], Dict[str, Dict[str, List[float]]]]

Returns

TModelPredict with the new predictions appended.
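A sketch of the append operation on that structure (the `merge_predict` helper is illustrative, not the library function):

```python
# Sketch of merge_model_predict's append: extend each metric's list of
# means, and analogously each entry of the nested covariance mapping,
# with the newly predicted values.

def merge_predict(predict, predict_append):
    means, cov = predict
    means_new, cov_new = predict_append
    for metric, vals in means_new.items():
        means[metric] = means[metric] + vals        # extend mean list
    for m1, inner in cov_new.items():
        for m2, vals in inner.items():
            cov[m1][m2] = cov[m1][m2] + vals        # extend cov list
    return means, cov

predict = ({"m": [1.0, 2.0]}, {"m": {"m": [0.1, 0.1]}})
appended = ({"m": [3.0]}, {"m": {"m": [0.2]}})
print(merge_predict(predict, appended))
# ({'m': [1.0, 2.0, 3.0]}, {'m': {'m': [0.1, 0.1, 0.2]}})
```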

Core Utils

class ax.core.utils.MissingMetrics(objective, outcome_constraints, tracking_metrics)[source]

Bases: tuple

property objective

Alias for field number 0

property outcome_constraints

Alias for field number 1

property tracking_metrics

Alias for field number 2

ax.core.utils.best_feasible_objective(optimization_config, values)[source]

Compute the best feasible objective value found by each iteration.

Parameters
  • optimization_config (OptimizationConfig) – Optimization config.

  • values (Dict[str, ndarray]) – Dictionary from metric name to array of value at each iteration. If optimization config contains outcome constraints, values for them must be present in values.

Returns: Array of cumulative best feasible value.

Return type

ndarray
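A sketch for a maximization objective with a single `constraint <= bound` feasibility condition (the helper and sample values are illustrative; iterations before the first feasible one yield -inf):

```python
# Sketch of best_feasible_objective: track the running best objective
# value, counting only iterations whose constraint value is feasible.

def best_feasible(objective_values, constraint_values, bound):
    best, out = float("-inf"), []
    for obj, con in zip(objective_values, constraint_values):
        if con <= bound:                 # feasible iteration
            best = max(best, obj)
        out.append(best)                 # cumulative best so far
    return out

obj = [1.0, 3.0, 2.0, 5.0]
con = [0.2, 0.9, 0.1, 0.3]               # feasible when <= 0.5
# Iteration 2 (obj=3.0) is infeasible, so the best stays at 1.0 there.
print(best_feasible(obj, con, 0.5))      # [1.0, 1.0, 2.0, 5.0]
```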

ax.core.utils.get_missing_metrics(data, optimization_config)[source]

Return all arm_name, trial_index pairs for which some of the observations of optimization config metrics are missing.

Parameters
  • data (Data) – Data to search.

  • optimization_config (OptimizationConfig) – provides metric_names to search for.

Return type

MissingMetrics

Returns

A NamedTuple(missing_objective, Dict[str, missing_outcome_constraint])

ax.core.utils.get_missing_metrics_by_name(data, metric_names)[source]

Return all arm_name, trial_index pairs missing some observations of specified metrics.

Parameters
  • data (Data) – Data to search.

  • metric_names (Iterable[str]) – list of metrics to search for.

Return type

Dict[str, Set[Tuple[str, int]]]

Returns

A Dict[str, missing_metrics], one entry for each metric_name.

ax.core.utils.get_model_times(experiment)[source]

Get total times spent fitting the model and generating candidates in the course of the experiment.

Return type

Tuple[float, float]