ax.preview

A preview of the future Ax API

IMetric

class ax.preview.api.protocols.metric.IMetric(name: str)[source]

Bases: _APIMetric

Metrics automate the process of fetching data from external systems. They are used in conjunction with Runners in the run_trials method to facilitate closed-loop experimentation.

fetch(trial_index: int, trial_metadata: Mapping[str, Any]) tuple[int, float | tuple[float, float]][source]

Given trial metadata (the mapping returned from IRunner.run_trial), fetches readings for the metric.

Readings are returned as a pair (progression, outcome), where progression is an integer representing the progression of the trial (e.g. number of epochs for a training job, timestamp for a time series, etc.), and outcome is either a direct reading or a (mean, sem) pair for the metric.
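For illustration, a minimal sketch of a metric that reports a (mean, sem) reading from an external results store might look like the following. The class, its RESULTS store, and the "job_id" metadata key are all hypothetical; in real use the class would subclass IMetric and be constructed with its name.

```python
from typing import Any, Mapping

# Hypothetical sketch: in real use this would subclass
# ax.preview.api.protocols.metric.IMetric; it is standalone here so the
# shape of fetch() is clear without requiring Ax to be installed.
class LossMetric:
    name = "loss"

    # Stand-in for an external system, keyed by a job identifier that
    # the runner would have placed in the trial metadata.
    RESULTS = {"job-0": {"epoch": 3, "mean": 0.12, "sem": 0.01}}

    def fetch(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> tuple[int, tuple[float, float]]:
        record = self.RESULTS[trial_metadata["job_id"]]
        # Return (progression, (mean, sem)); progression is the epoch count here.
        return record["epoch"], (record["mean"], record["sem"])
```

Returning a bare float instead of a (mean, sem) pair is also allowed when no noise estimate is available.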

IRunner

class ax.preview.api.protocols.runner.IRunner[source]

Bases: _APIRunner

poll_trial(trial_index: int, trial_metadata: Mapping[str, Any]) TrialStatus[source]

Given trial index and metadata, poll the status of the trial.

run_trial(trial_index: int, parameterization: Mapping[str, int | float | str | bool]) dict[str, Any][source]

Given an index and parameterization, run a trial and return a dictionary of any appropriate metadata. This metadata will be used to identify the trial when polling its status, stopping it, fetching data, etc. It may hold information such as the trial’s unique identifier on the system it’s running on, a directory where the trial is logging results, etc.

The metadata MUST be JSON-serializable (i.e. dict, list, str, int, float, bool, or None) so that Trials may be properly serialized in Ax.

stop_trial(trial_index: int, trial_metadata: Mapping[str, Any]) dict[str, Any][source]

Given trial index and metadata, stop the trial. Returns a dictionary of any appropriate metadata.

The metadata MUST be JSON-serializable (i.e. dict, list, str, int, float, bool, or None) so that Trials may be properly serialized in Ax.
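As a sketch under stated assumptions (LocalRunner, its job-id scheme, and the string statuses are illustrative stand-ins; the real poll_trial returns a TrialStatus enum member), a runner implementing the three methods might look like:

```python
import json
from typing import Any, Mapping

# Illustrative stand-in for ax.preview.api.protocols.runner.IRunner:
# run_trial launches work and returns JSON-serializable metadata that
# the other methods later use to locate the trial.
class LocalRunner:
    def __init__(self) -> None:
        self._jobs: dict[str, str] = {}  # job_id -> status string

    def run_trial(
        self, trial_index: int, parameterization: Mapping[str, Any]
    ) -> dict[str, Any]:
        job_id = f"job-{trial_index}"
        self._jobs[job_id] = "RUNNING"
        # Everything returned here must survive json.dumps.
        return {"job_id": job_id, "parameters": dict(parameterization)}

    def poll_trial(self, trial_index: int, trial_metadata: Mapping[str, Any]) -> str:
        return self._jobs[trial_metadata["job_id"]]

    def stop_trial(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> dict[str, Any]:
        self._jobs[trial_metadata["job_id"]] = "EARLY_STOPPED"
        return {"stopped": True}
```

The json.dumps requirement is the reason run_trial returns plain dicts, strings, and numbers rather than handles to live objects.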

Utils

Client

class ax.preview.api.client.Client(storage_config: StorageConfig | None = None, random_seed: int | None = None)[source]

Bases: WithDBSettingsBase

attach_baseline(parameters: Mapping[str, int | float | str | bool], arm_name: str | None = None) int[source]

Attaches a custom single-arm trial to the experiment, specifically for use as the baseline or status quo in evaluating relative outcome constraints and improvement over the baseline objective value. The trial will be marked as RUNNING and must be completed manually by the user.

Returns:

The index of the attached trial.

Saves to database on completion if storage_config is present.

attach_data(trial_index: int, raw_data: Mapping[str, float | tuple[float, float]], progression: int | None = None) None[source]

Attach data without indicating the trial is complete. Missing metrics are allowed, and unexpected metrics will be added to the Experiment as tracking metrics. If progression is provided, the Experiment will be updated to use MapData and the data will be attached to the appropriate step.

Saves to database on completion if storage_config is present.

attach_trial(parameters: Mapping[str, int | float | str | bool], arm_name: str | None = None) int[source]

Attach a single-arm trial to the experiment with the provided parameters. The trial will be marked as RUNNING and must be completed manually by the user.

Saves to database on completion if storage_config is present.

Returns:

The index of the attached trial.

complete_trial(trial_index: int, raw_data: Mapping[str, float | tuple[float, float]] | None = None, progression: int | None = None) TrialStatus[source]

Indicate the trial is complete while optionally attaching data. In non-timeseries settings users should prefer complete_trial with raw_data over attach_data. Ax will determine the trial’s status automatically:

  • If all metrics on the OptimizationConfig are present, the trial will be marked as COMPLETED.

  • If any metrics on the OptimizationConfig are missing, the trial will be marked as FAILED.

Saves to database on completion if storage_config is present.
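The status rule above can be sketched as a plain function (resolve_status and its arguments are hypothetical names for illustration, not Ax API):

```python
# Sketch of the rule complete_trial applies: a trial is COMPLETED only
# if every metric required by the OptimizationConfig has data attached.
def resolve_status(required_metrics: set[str], raw_data: dict[str, float]) -> str:
    return "COMPLETED" if required_metrics <= raw_data.keys() else "FAILED"
```

This is why, outside of timeseries settings, passing raw_data directly to complete_trial is preferred: the data and the status decision happen in one step.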

compute_analyses(analyses: Sequence[Analysis] | None = None) list[AnalysisCard][source]

Compute AnalysisCards (data about the optimization for end-user consumption) using the Experiment and GenerationStrategy. If no analyses are provided use some heuristic to determine which analyses to run. If some analyses fail, log failure and continue to compute the rest.

Note that the Analysis class is NOT part of the API and its methods are subject to change incompatibly between minor versions. Users are encouraged to use the provided analyses or leave this argument as None to use the default analyses.

Saves to database on completion if storage_config is present.

Returns:

A list of AnalysisCards.

configure_experiment(experiment_config: ExperimentConfig) None[source]

Given an ExperimentConfig, construct the Ax Experiment object. Note that validation occurs at time of config instantiation, not at configure_experiment.

This method only defines the search space and miscellaneous metadata such as name, description, and owners.

Saves to database on completion if storage_config is present.

configure_generation_strategy(generation_strategy_config: GenerationStrategyConfig) None[source]

Overwrite the existing GenerationStrategy by calling choose_gs using the arguments of the GenerationStrategyConfig as parameters.

Saves to database on completion if storage_config is present.

configure_metrics(metrics: Sequence[IMetric]) None[source]

Attach classes with logic for automating the fetching of metrics. For each Metric in the provided metrics sequence, if a metric with the same name is already present on the Experiment its instance is replaced with the provided Metric; otherwise the provided Metric is added to the Experiment as a tracking metric.

configure_optimization(objective: str, outcome_constraints: Sequence[str] | None = None) None[source]

Configures the goals of the optimization by setting the OptimizationConfig. Metrics referenced here by their name will be moved from the Experiment’s tracking_metrics if they were already present (i.e. they were attached via configure_metrics) or added as base Metrics.

Parameters:
  • objective – Objective is a string and allows us to express single, scalarized, and multi-objective goals. Ex: “loss”, “ne1 + ne2”, “-ne, qps”

  • outcome_constraints – Outcome constraints are also strings and allow us to express a desire to have a metric clear a threshold but not be further optimized. These constraints are expressed as inequalities. Ex: “qps >= 100”, “0.5 * ne1 + 0.5 * ne2 >= 0.95”. To indicate a relative constraint multiply your bound by “baseline” Ex: “qps >= 0.95 * baseline” will constrain such that the QPS is at least 95% of the baseline arm’s QPS. Note that scalarized outcome constraints cannot be relative.

Saves to database on completion if storage_config is present.
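To make the constraint grammar concrete, here is a tiny hypothetical helper (not part of Ax) that produces constraint strings in the form described above, where a relative bound is expressed by multiplying the bound by “baseline”:

```python
# Hypothetical helper (not part of Ax) building constraint strings in the
# grammar configure_optimization expects for outcome_constraints.
def relative_constraint(metric: str, op: str, fraction: float) -> str:
    return f"{metric} {op} {fraction} * baseline"

constraint = relative_constraint("qps", ">=", 0.95)
# constraint == "qps >= 0.95 * baseline"
# client.configure_optimization(objective="-loss", outcome_constraints=[constraint])
```

Absolute constraints are just written directly, e.g. "qps >= 100"; recall that scalarized outcome constraints cannot be relative.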

configure_runner(runner: IRunner) None[source]

Attaches a Runner to the Experiment.

Saves to database on completion if storage_config is present.

get_best_parameterization(use_model_predictions: bool = True) tuple[Mapping[str, int | float | str | bool], Mapping[str, float | tuple[float, float]], int, str][source]

Identifies the best parameterization tried in the experiment so far, also called the best in-sample arm.

If use_model_predictions is True, first attempts to identify the best parameterization using the model used in optimization and its corresponding predictions, if available. If use_model_predictions is False, or the attempt to use the model fails, falls back to the best raw objective based on the data fetched from the experiment.

Parameterizations which were observed to violate outcome constraints are not eligible to be the best parameterization.

Returns:

  • The parameters predicted to have the best optimization value without violating any outcome constraints.

  • The metric values for the best parameterization. Uses model predictions if use_model_predictions=True, otherwise returns observed data.

  • The index of the trial which most recently ran the best parameterization.

  • The name of the best arm (each trial has a unique name associated with each parameterization).

get_next_trials(maximum_trials: int = 1, fixed_parameters: Mapping[str, int | float | str | bool] | None = None) dict[int, Mapping[str, int | float | str | bool]][source]

Create up to maximum_trials trials using the GenerationStrategy, attach them to the Experiment with status RUNNING, and return a mapping of trial index to its parameterization. If a partial parameterization is provided via fixed_parameters, those parameters will be locked for all trials.

This will need to be rethought somewhat when we add support for BatchTrials, but is adequate for currently supported functionality.

Saves to database on completion if storage_config is present.

Returns:

A mapping of trial index to parameterization.
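The ask-tell loop this enables can be sketched with a stand-in client (the _StubClient below is a self-contained illustration whose method shapes mirror the real Client, so the call pattern is runnable without Ax):

```python
# Stand-in client showing the get_next_trials / complete_trial pattern.
# The class itself is a stub; only the method shapes mirror Ax's Client.
class _StubClient:
    def __init__(self) -> None:
        self._next_index = 0
        self.completed: dict[int, dict[str, float]] = {}

    def get_next_trials(self, maximum_trials=1, fixed_parameters=None):
        trials = {}
        for _ in range(maximum_trials):
            params = {"x": 0.5}  # a real client would generate candidates
            if fixed_parameters:
                params.update(fixed_parameters)  # fixed parameters are locked
            trials[self._next_index] = params
            self._next_index += 1
        return trials

    def complete_trial(self, trial_index, raw_data=None, progression=None):
        self.completed[trial_index] = raw_data

client = _StubClient()
for index, params in client.get_next_trials(maximum_trials=3).items():
    loss = (params["x"] - 0.2) ** 2  # evaluate the suggested parameters
    client.complete_trial(trial_index=index, raw_data={"loss": loss})
```

With the real Client, the loop body is identical; only the construction and configuration of the client differ.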

get_pareto_frontier(use_model_predictions: bool = True) list[tuple[Mapping[str, int | float | str | bool], Mapping[str, float | tuple[float, float]], int, str]][source]

Identifies the parameterizations which are predicted to efficiently trade-off between all objectives in a multi-objective optimization, also called the in-sample Pareto frontier.

Returns:

  • The parameters predicted to have the best optimization value without violating any outcome constraints.

  • The metric values for the best parameterization. Uses model prediction if use_model_predictions=True, otherwise returns observed data.

  • The index of the trial which most recently ran the best parameterization.

  • The name of the best arm (each trial has a unique name associated with each parameterization).

Return type:

A list of tuples, one per point on the Pareto frontier, each containing the elements above.

classmethod load_from_database(experiment_name: str, storage_config: StorageConfig | None = None) Self[source]

Restore a Client and its state from the database by the given name.

Returns:

The restored Client.

classmethod load_from_json_file(filepath: str = 'ax_client_snapshot.json', storage_config: StorageConfig | None = None) Self[source]

Restore a Client and its state from a JSON-serialized snapshot stored in a .json file at the given path.

Returns:

The restored Client.

mark_trial_abandoned(trial_index: int) None[source]

Manually mark a trial as ABANDONED. ABANDONED trials are typically not able to be re-suggested by get_next_trials, though this is controlled by the GenerationStrategy.

Saves to database on completion if storage_config is present.

mark_trial_early_stopped(trial_index: int, raw_data: Mapping[str, float | tuple[float, float]], progression: int | None = None) None[source]

Manually mark a trial as EARLY_STOPPED while attaching the most recent data. This is used when the user has decided (with or without Ax’s recommendation) to stop the trial early. EARLY_STOPPED trials will not be re-suggested by get_next_trials.

Saves to database on completion if storage_config is present.

mark_trial_failed(trial_index: int) None[source]

Manually mark a trial as FAILED. FAILED trials typically may be re-suggested by get_next_trials, though this is controlled by the GenerationStrategy.

Saves to database on completion if storage_config is present.

predict(points: Sequence[Mapping[str, int | float | str | bool]]) list[Mapping[str, float | tuple[float, float]]][source]

Use the GenerationStrategy to predict the outcome of the provided list of parameterizations.

Returns:

A list of mappings from metric name to predicted mean and SEM

run_trials(maximum_trials: int, options: OrchestrationConfig) None[source]

Run maximum_trials trials in a loop by creating an ephemeral Scheduler under the hood using the Experiment, GenerationStrategy, Metrics, and Runner attached to this Client, along with the provided OrchestrationConfig.

Saves to database on completion if storage_config is present.
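Assuming ax.preview is importable, a sketch of configuring orchestration for run_trials (the field values here are illustrative, not recommendations):

```python
from ax.preview.api.configs import OrchestrationConfig

# Illustrative values: up to 4 trials in flight, tolerate a 50% failure
# rate at most 20% of the time budget... no -- tolerate up to a 20%
# trial failure rate, and start polling trial status every 10 seconds.
options = OrchestrationConfig(
    parallelism=4,
    tolerated_trial_failure_rate=0.2,
    initial_seconds_between_polls=10,
)
# client.run_trials(maximum_trials=20, options=options)
```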

save_to_json_file(filepath: str = 'ax_client_snapshot.json') None[source]

Save a JSON-serialized snapshot of this Client’s settings and state to a .json file at the given path.

set_early_stopping_strategy(early_stopping_strategy: BaseEarlyStoppingStrategy) None[source]

This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.

Overwrite the existing EarlyStoppingStrategy with the provided EarlyStoppingStrategy.

Saves to database on completion if storage_config is present.

set_experiment(experiment: Experiment) None[source]

This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.

Overwrite the existing Experiment with the provided Experiment.

Saves to database on completion if storage_config is present.

set_generation_strategy(generation_strategy: GenerationStrategy) None[source]

This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.

Overwrite the existing GenerationStrategy with the provided GenerationStrategy.

Saves to database on completion if storage_config is present.

set_optimization_config(optimization_config: OptimizationConfig) None[source]

This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.

Overwrite the existing OptimizationConfig with the provided OptimizationConfig.

Saves to database on completion if storage_config is present.

should_stop_trial_early(trial_index: int) bool[source]

Check if the trial should be stopped early. If True and the user wishes to heed Ax’s recommendation the user should manually stop the trial and call mark_trial_early_stopped(trial_index). The EarlyStoppingStrategy may be selected automatically or set manually via set_early_stopping_strategy.

Returns:

Whether the trial should be stopped early.
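The recommended flow — check the recommendation, stop the underlying job yourself, then report it — can be sketched with a stand-in client (the stub and its stopping rule are illustrative only):

```python
# Stand-in sketch of the early-stopping flow; the real Client exposes
# the same method names, but this stub is self-contained.
class _StubClient:
    def __init__(self) -> None:
        self.early_stopped: list[int] = []

    def should_stop_trial_early(self, trial_index: int) -> bool:
        return trial_index >= 2  # pretend later trials look unpromising

    def mark_trial_early_stopped(self, trial_index, raw_data, progression=None):
        self.early_stopped.append(trial_index)

client = _StubClient()
for index in range(4):
    if client.should_stop_trial_early(index):
        # The user stops the underlying job, then reports the latest data.
        client.mark_trial_early_stopped(index, raw_data={"loss": 0.5}, progression=10)
```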

Configs

class ax.preview.api.configs.ChoiceParameterConfig(name: str, values: List[float] | List[int] | List[str] | List[bool], parameter_type: ParameterType, is_ordered: bool | None = None, dependent_parameters: Mapping[int | float | str | bool, Sequence[str]] | None = None)[source]

Bases: object

ChoiceParameterConfig allows users to specify a discrete dimension of an experiment’s search space and will internally validate the inputs.

dependent_parameters: Mapping[int | float | str | bool, Sequence[str]] | None = None
is_ordered: bool | None = None
name: str
parameter_type: ParameterType
values: List[float] | List[int] | List[str] | List[bool]
class ax.preview.api.configs.ExperimentConfig(name: str, parameters: list[RangeParameterConfig | ChoiceParameterConfig], parameter_constraints: list[str] = <factory>, description: str | None = None, experiment_type: str | None = None, owner: str | None = None)[source]

Bases: object

ExperimentConfig allows users to specify the SearchSpace of an experiment along with other metadata.

description: str | None = None
experiment_type: str | None = None
name: str
owner: str | None = None
parameter_constraints: list[str]
parameters: list[RangeParameterConfig | ChoiceParameterConfig]
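Assuming ax.preview is importable, a minimal config-construction sketch combining the parameter config classes below (the parameter names and bounds are illustrative):

```python
from ax.preview.api.configs import (
    ChoiceParameterConfig,
    ExperimentConfig,
    ParameterScaling,
    ParameterType,
    RangeParameterConfig,
)

# Illustrative search space: a log-scaled float range and a string choice.
config = ExperimentConfig(
    name="tune_training",
    parameters=[
        RangeParameterConfig(
            name="learning_rate",
            bounds=(1e-5, 1e-1),
            parameter_type=ParameterType.FLOAT,
            scaling=ParameterScaling.LOG,
        ),
        ChoiceParameterConfig(
            name="optimizer",
            values=["adam", "sgd"],
            parameter_type=ParameterType.STRING,
        ),
    ],
)
# client.configure_experiment(config)
```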
class ax.preview.api.configs.GenerationMethod(value)[source]

Bases: Enum

An enum to specify the desired candidate generation method for the experiment. This is used in GenerationStrategyConfig, along with the properties of the experiment, to determine the generation strategy to use for candidate generation.

NOTE: New options should be rarely added to this enum. This is not intended to be a list of generation strategies for the user to choose from. Instead, this enum should only provide high level guidance to the underlying generation strategy dispatch logic, which is responsible for determining the exact details.

Available options are:

BALANCED: A balanced generation method that may utilize (per-metric) model selection to achieve a good model accuracy. This method excludes expensive methods, such as the fully Bayesian SAASBO model. Used by default.

FAST: A faster generation method that uses the built-in defaults from the Modular BoTorch Model without any model selection.

RANDOM_SEARCH: Primarily intended for pure exploration experiments, this method utilizes quasi-random Sobol sequences for candidate generation.

BALANCED = 'balanced'
FAST = 'fast'
RANDOM_SEARCH = 'random_search'
class ax.preview.api.configs.GenerationStrategyConfig(method: GenerationMethod = GenerationMethod.BALANCED, initialization_budget: int | None = None, initialization_random_seed: int | None = None, use_existing_trials_for_initialization: bool = True, min_observed_initialization_trials: int | None = None, allow_exceeding_initialization_budget: bool = False, torch_device: str | None = None)[source]

Bases: object

A dataclass used to configure the generation strategy used in the experiment. This is used, along with the properties of the experiment, to determine the generation strategy to use for candidate generation.

Parameters:
  • method – The generation method to use. See GenerationMethod for more details.

  • initialization_budget – The number of trials to use for initialization. If None, a default budget of 5 trials is used.

  • initialization_random_seed – The random seed to use with the Sobol generator that generates the initialization trials.

  • use_existing_trials_for_initialization – Whether to count all trials attached to the experiment as part of the initialization budget. For example, if 2 trials were manually attached to the experiment and this option is set to True, we will only generate initialization_budget - 2 additional trials for initialization.

  • min_observed_initialization_trials – The minimum required number of initialization trials with observations before the generation strategy is allowed to transition away from the initialization phase. Defaults to max(1, initialization_budget // 2).

  • allow_exceeding_initialization_budget – This option determines the behavior of the generation strategy when the initialization_budget is exhausted and min_observed_initialization_trials is not met. If this is True, the generation strategy will generate additional initialization trials when a new trial is requested, exceeding the specified initialization_budget. If this is False, the generation strategy will raise an error; candidate generation may be continued once additional data is observed for the existing trials.

  • torch_device – The device to use for model fitting and candidate generation in PyTorch / BoTorch based generation nodes. NOTE: This option is not validated. Please ensure that the string input corresponds to a valid device.

allow_exceeding_initialization_budget: bool = False
initialization_budget: int | None = None
initialization_random_seed: int | None = None
method: GenerationMethod = 'balanced'
min_observed_initialization_trials: int | None = None
torch_device: str | None = None
use_existing_trials_for_initialization: bool = True
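Assuming ax.preview is importable, a sketch of overriding the defaults (the specific values are illustrative, not recommendations):

```python
from ax.preview.api.configs import GenerationMethod, GenerationStrategyConfig

# Illustrative values: a faster method with a smaller initialization
# phase than the 5-trial default, seeded for reproducible Sobol points.
gs_config = GenerationStrategyConfig(
    method=GenerationMethod.FAST,
    initialization_budget=3,
    initialization_random_seed=0,
)
# client.configure_generation_strategy(gs_config)
```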
class ax.preview.api.configs.OrchestrationConfig(parallelism: int = 1, tolerated_trial_failure_rate: float = 0.5, initial_seconds_between_polls: int = 1)[source]

Bases: object

initial_seconds_between_polls: int = 1
parallelism: int = 1
tolerated_trial_failure_rate: float = 0.5
class ax.preview.api.configs.ParameterScaling(value)[source]

Bases: Enum

The ParameterScaling enum allows users to specify which scaling to apply during candidate generation. This is useful for parameters that should not be explored on the same scale, such as learning rates and batch sizes.

LINEAR = 'linear'
LOG = 'log'
class ax.preview.api.configs.ParameterType(value)[source]

Bases: Enum

The ParameterType enum allows users to specify the type of a parameter.

BOOL = 'bool'
FLOAT = 'float'
INT = 'int'
STRING = 'str'
class ax.preview.api.configs.RangeParameterConfig(name: str, bounds: tuple[float, float], parameter_type: ParameterType, step_size: float | None = None, scaling: ParameterScaling | None = None)[source]

Bases: object

RangeParameterConfig allows users to specify a continuous dimension of an experiment’s search space and will internally validate the inputs.

bounds: tuple[float, float]
name: str
parameter_type: ParameterType
scaling: ParameterScaling | None = None
step_size: float | None = None
class ax.preview.api.configs.StorageConfig(creator: Callable[..., Any] | None = None, url: str | None = None, registry_bundle: ax.storage.registry_bundle.RegistryBundleBase | None = None)[source]

Bases: object

creator: Callable[..., Any] | None = None
registry_bundle: RegistryBundleBase | None = None
url: str | None = None

Types

From Config

ax.preview.api.utils.instantiation.from_config.experiment_from_config(config: ExperimentConfig) Experiment[source]

Create an Experiment from an ExperimentConfig.

ax.preview.api.utils.instantiation.from_config.parameter_from_config(config: RangeParameterConfig | ChoiceParameterConfig) Parameter[source]

Create a RangeParameter, ChoiceParameter, or FixedParameter from a ParameterConfig.

From String

ax.preview.api.utils.instantiation.from_string.optimization_config_from_string(objective_str: str, outcome_constraint_strs: Sequence[str] | None = None) OptimizationConfig[source]

Create an OptimizationConfig from objective and outcome constraint strings.

Note that outcome constraints may not be placed on the objective metric except in the multi-objective case where they will be converted to objective thresholds.

ax.preview.api.utils.instantiation.from_string.parse_objective(objective_str: str) Objective[source]

Parse an objective string into an Objective object using SymPy.

Currently only supports linear objectives of the form “a * x + b * y” and tuples of linear objectives.

ax.preview.api.utils.instantiation.from_string.parse_outcome_constraint(constraint_str: str) OutcomeConstraint[source]

Parse an outcome constraint string into an OutcomeConstraint object using SymPy. Currently only supports linear constraints of the form “a * x + b * y >= k” or “a * x + b * y <= k”.

To indicate a relative constraint (i.e. performance relative to some baseline) multiply your bound by “baseline”. For example “qps >= 0.95 * baseline” will constrain such that the QPS is at least 95% of the baseline arm’s QPS.

ax.preview.api.utils.instantiation.from_string.parse_parameter_constraint(constraint_str: str) ParameterConstraint[source]

Parse a parameter constraint string into a ParameterConstraint object using SymPy. Currently only supports linear constraints of the form “a * x + b * y >= k” or “a * x + b * y <= k”.

ModelBridge

Dispatch Utils

ax.preview.modelbridge.dispatch_utils.choose_generation_strategy(gs_config: GenerationStrategyConfig) GenerationStrategy[source]

Choose a generation strategy based on the properties of the experiment and the inputs provided in gs_config.

NOTE: The behavior of this function is subject to change. It will be updated to produce best general purpose generation strategies based on benchmarking results.

Parameters:

gs_config – A GenerationStrategyConfig object that informs the choice of generation strategy.

Returns:

A generation strategy.

Storage Utils

ax.preview.api.utils.storage.db_settings_from_storage_config(storage_config: StorageConfig) DBSettings[source]

Construct DBSettings (expected by WithDBSettingsBase) from StorageConfig.