ax.models

Base Models & Utilities

ax.models.base

class ax.models.base.Model[source]

Bases: object

Base class for an Ax model.

Note: the core methods each model has (fit, predict, gen, cross_validate, and best_point) are not present in this base class, because the signatures for those methods vary based on the type of the model. This class contains only the methods that all models have in common and for which they share a signature.

classmethod deserialize_state(serialized_state: dict[str, Any]) dict[str, Any][source]

Restores model’s state from its serialized form, to the format it expects to receive as kwargs.

feature_importances() Any[source]
classmethod serialize_state(raw_state: dict[str, Any]) dict[str, Any][source]

Serializes the output of self._get_state to a JSON-ready dict. This may involve storing part of state in files / external storage and saving handles for that storage in the resulting serialized state.
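For illustration, a minimal sketch of the intended round trip, using a hypothetical subclass and assuming the base implementations simply pass the state dict through (subclasses may override them to use external storage):

from typing import Any

from ax.models.base import Model


class MyModel(Model):
    # Hypothetical subclass used only to illustrate the round trip.
    def __init__(self, offset: float = 0.0) -> None:
        self.offset = offset

    def _get_state(self) -> dict[str, Any]:
        # State to persist, in the form of constructor kwargs.
        return {"offset": self.offset}


model = MyModel(offset=1.5)
serialized = MyModel.serialize_state(raw_state=model._get_state())
kwargs = MyModel.deserialize_state(serialized_state=serialized)
restored = MyModel(**kwargs)  # restored.offset == 1.5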

ax.models.discrete_base module

class ax.models.discrete_base.DiscreteModel[source]

Bases: Model

This class specifies the interface for a model based on discrete parameters.

These methods should be implemented to have access to all of the features of Ax.

best_point(n: int, parameter_values: list[list[None | str | bool | float | int]], objective_weights: ndarray[Any, dtype[_ScalarType_co]] | None, outcome_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, fixed_features: dict[int, None | str | bool | float | int] | None = None, pending_observations: list[list[list[None | str | bool | float | int]]] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) list[None | str | bool | float | int] | None[source]

Obtains the point that has the best value according to the model's predictions.

Returns:

(1 x d) parameter value list representing the point with the best value according to the model prediction. None if this function is not implemented for the given model.

cross_validate(Xs_train: list[list[list[None | str | bool | float | int]]], Ys_train: list[list[float]], Yvars_train: list[list[float]], X_test: list[list[None | str | bool | float | int]], use_posterior_predictive: bool = False) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]][source]

Do cross validation with the given training and test sets.

Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
  • Xs_train – A list of m lists X of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome.

  • Ys_train – The corresponding list of m lists Y, each of length k_i, for each outcome.

  • Yvars_train – The variances of each entry in Ys, same shape.

  • X_test – List of the j parameterizations at which to make predictions.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

2-element tuple containing

  • (j x m) array of outcome predictions at X.

  • (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

fit(Xs: list[list[list[None | str | bool | float | int]]], Ys: list[list[float]], Yvars: list[list[float]], parameter_values: list[list[None | str | bool | float | int]], outcome_names: list[str]) None[source]

Fit model to m outcomes.

Parameters:
  • Xs – A list of m lists X of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome.

  • Ys – The corresponding list of m lists Y, each of length k_i, for each outcome.

  • Yvars – The variances of each entry in Ys, same shape.

  • parameter_values – A list of possible values for each parameter.

  • outcome_names – A list of m outcome names.

gen(n: int, parameter_values: list[list[None | str | bool | float | int]], objective_weights: ndarray[Any, dtype[_ScalarType_co]] | None, outcome_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, fixed_features: dict[int, None | str | bool | float | int] | None = None, pending_observations: list[list[list[None | str | bool | float | int]]] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[list[list[None | str | bool | float | int]], list[float], dict[str, Any]][source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • parameter_values – A list of possible values for each parameter.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • pending_observations – A list of m lists of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome i.

  • model_gen_options – A config dictionary that can contain model-specific options.

Returns:

3-element tuple containing

  • List of n generated points, where each point is represented by a list of parameter values.

  • List of weights for each of the n points.

  • Dictionary of model-specific generation metadata.

predict(X: list[list[None | str | bool | float | int]]) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]][source]

Predict outcomes at the given points.

Parameters:

X – List of the j parameterizations at which to make predictions.

Returns:

2-element tuple containing

  • (j x m) array of outcome predictions at X.

  • (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

ax.models.torch_base module

class ax.models.torch_base.TorchGenResults(points: ~torch.Tensor, weights: ~torch.Tensor, gen_metadata: dict[str, ~typing.Any] = <factory>, candidate_metadata: list[dict[str, ~typing.Any] | None] | None = None)[source]

Bases: object

points: (n x d) tensor of generated points.

weights: n-tensor of weights for each point.

gen_metadata: Dictionary of model-specific metadata for the given generation candidates.

candidate_metadata: list[dict[str, Any] | None] | None = None
gen_metadata: dict[str, Any]
points: Tensor
weights: Tensor
class ax.models.torch_base.TorchModel[source]

Bases: Model

This class specifies the interface for a torch-based model.

These methods should be implemented to have access to all of the features of Ax.

best_point(search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) Tensor | None[source]

Identify the current best point, satisfying the constraints in the same format as to gen.

Return None if no such point can be identified.

Parameters:
  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

d-tensor of the best point.

cross_validate(datasets: list[SupervisedDataset], X_test: Tensor, search_space_digest: SearchSpaceDigest, use_posterior_predictive: bool = False) tuple[Tensor, Tensor][source]

Do cross validation with the given training and test sets.

Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • X_test – (j x d) tensor of the j points at which to make predictions.

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in X.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

device: device | None = None
dtype: dtype | None = None
evaluate_acquisition_function(X: Tensor, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig, acq_options: dict[str, Any] | None = None) Tensor[source]

Evaluate the acquisition function on the candidate set X.

Parameters:
  • X – (j x d) tensor of the j points at which to evaluate the acquisition function.

  • search_space_digest – A dataclass used to compactly represent a search space.

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

  • acq_options – Keyword arguments used to construct the acquisition function.

Returns:

A single-element tensor with the acquisition value for these points.

fit(datasets: list[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) None[source]

Fit model to m outcomes.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

gen(n: int, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) TorchGenResults[source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

A TorchGenResult container.

predict(X: Tensor) tuple[Tensor, Tensor][source]

Predict outcomes at the given points.

Parameters:

X – (j x d) tensor of the j points at which to make predictions.

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

update(datasets: list[SupervisedDataset], metric_names: list[str], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) None[source]

Update the model.

Updating the model requires both existing and additional data. The data passed into this method will become the new training data.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome). None means that there is no additional data for the corresponding outcome.

  • metric_names – A list of metric names, with the i-th metric corresponding to the i-th dataset.

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in X.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

class ax.models.torch_base.TorchOptConfig(objective_weights: ~torch.Tensor, outcome_constraints: tuple[~torch.Tensor, ~torch.Tensor] | None = None, objective_thresholds: ~torch.Tensor | None = None, linear_constraints: tuple[~torch.Tensor, ~torch.Tensor] | None = None, fixed_features: dict[int, float] | None = None, pending_observations: list[~torch.Tensor] | None = None, model_gen_options: dict[str, int | float | str | ~botorch.acquisition.acquisition.AcquisitionFunction | list[str] | dict[int, ~typing.Any] | dict[str, ~typing.Any] | ~ax.core.optimization_config.OptimizationConfig | ~ax.models.winsorization_config.WinsorizationConfig | None] = <factory>, rounding_func: ~collections.abc.Callable[[~torch.Tensor], ~torch.Tensor] | None = None, opt_config_metrics: dict[str, ~ax.core.metric.Metric] = <factory>, is_moo: bool = False, risk_measure: ~botorch.acquisition.risk_measures.RiskMeasureMCObjective | None = None, fit_out_of_design: bool = False)[source]

Bases: object

Container for lightweight representation of optimization arguments.

This is used for communicating between modelbridge and models. This is an ephemeral object and not meant to be stored / serialized.

objective_weights

If doing multi-objective optimization, these denote which objectives should be maximized and which should be minimized. Otherwise, the objective is to maximize a weighted sum of the columns of f(x). These are the weights.

Type:

torch.Tensor

outcome_constraints

A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

Type:

tuple[torch.Tensor, torch.Tensor] | None

objective_thresholds

A tensor containing thresholds forming a reference point from which to calculate pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

Type:

torch.Tensor | None

linear_constraints

A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b for feasible x.

Type:

tuple[torch.Tensor, torch.Tensor] | None

fixed_features

A map {feature_index: value} for features that should be fixed to a particular value during generation.

Type:

dict[int, float] | None

pending_observations

A list of m (k_i x d) feature tensors X for m outcomes and k_i pending observations for outcome i.

Type:

list[torch.Tensor] | None

model_gen_options

A config dictionary that can contain model-specific options. This commonly includes optimizer_kwargs, which often specifies the optimizer options to be passed to the optimizer while optimizing the acquisition function. These are generally expected to mimic the signature of optimize_acqf, though not all models may support all possible arguments and some models may support additional arguments that are not passed to the optimizer. While constructing a generation strategy, these options can be passed in as follows:

>>> model_gen_kwargs = {
...     "model_gen_options": {
...         "optimizer_kwargs": {
...             "num_restarts": 20,
...             "sequential": False,
...             "options": {
...                 "batch_limit": 5,
...                 "maxiter": 200,
...             },
...         },
...     },
... }

Type:

dict[str, int | float | str | botorch.acquisition.acquisition.AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | ax.core.optimization_config.OptimizationConfig | ax.models.winsorization_config.WinsorizationConfig | None]

rounding_func

A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).

Type:

collections.abc.Callable[[torch.Tensor], torch.Tensor] | None

opt_config_metrics

A dictionary of metrics that are included in the optimization config.

Type:

dict[str, ax.core.metric.Metric]

is_moo

A boolean denoting whether this is for an MOO problem.

Type:

bool

risk_measure

An optional risk measure, used for robust optimization.

Type:

botorch.acquisition.risk_measures.RiskMeasureMCObjective | None

fit_out_of_design: bool = False
fixed_features: dict[int, float] | None = None
is_moo: bool = False
linear_constraints: tuple[Tensor, Tensor] | None = None
model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]
objective_thresholds: Tensor | None = None
objective_weights: Tensor
opt_config_metrics: dict[str, Metric]
outcome_constraints: tuple[Tensor, Tensor] | None = None
pending_observations: list[Tensor] | None = None
risk_measure: RiskMeasureMCObjective | None = None
rounding_func: Callable[[Tensor], Tensor] | None = None
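To make the container concrete, a minimal sketch of constructing a TorchOptConfig for a two-outcome problem (the weights, constraint matrices, and fixed feature index below are arbitrary placeholders):

import torch

from ax.models.torch_base import TorchOptConfig

# Maximize the first of two outcomes, subject to the outcome constraint
# f_2(x) <= 1.0, expressed as A f(x) <= b with A = [[0, 1]] and b = [[1.0]].
torch_opt_config = TorchOptConfig(
    objective_weights=torch.tensor([1.0, 0.0]),
    outcome_constraints=(
        torch.tensor([[0.0, 1.0]]),
        torch.tensor([[1.0]]),
    ),
    fixed_features={2: 0.5},  # hold feature index 2 at 0.5 during generation
)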

ax.models.model_utils module

class ax.models.model_utils.TorchModelLike(*args, **kwargs)[source]

Bases: Protocol

A protocol that stands in for TorchModel-like objects that have a predict method.

predict(X: Tensor) tuple[Tensor, Tensor][source]

Predicts outcomes given an input tensor.

Parameters:

X – A n x d tensor of input parameters.

Returns:

2-element tuple containing

  • The predicted posterior mean as an (n x o) tensor.

  • The predicted posterior covariance as an (n x o x o) tensor.

ax.models.model_utils.add_fixed_features(tunable_points: ndarray[Any, dtype[_ScalarType_co]], d: int, fixed_features: dict[int, float] | None, tunable_feature_indices: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]][source]

Add fixed features to points in tunable space.

Parameters:
  • tunable_points – Points in tunable space.

  • d – Dimension of parameter space.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • tunable_feature_indices – Parameter indices (in d) which are tunable.

Returns:

Points in the full d-dimensional space, defined by bounds.
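For example, a small sketch with made-up values:

import numpy as np

from ax.models.model_utils import add_fixed_features

# Two points in a 2-d tunable space, lifted into the full 3-d space
# with feature index 1 fixed at 0.5.
tunable = np.array([[0.1, 0.2], [0.3, 0.4]])
full = add_fixed_features(
    tunable_points=tunable,
    d=3,
    fixed_features={1: 0.5},
    tunable_feature_indices=np.array([0, 2]),
)
# full is [[0.1, 0.5, 0.2], [0.3, 0.5, 0.4]]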

ax.models.model_utils.all_ordinal_features_are_integer_valued(ssd: SearchSpaceDigest) bool[source]

Check if all ordinal features are integer-valued.

Parameters:

ssd – A SearchSpaceDigest.

Returns:

True if all ordinal features are integer-valued, False otherwise.

ax.models.model_utils.as_array(x: Tensor | ndarray | tuple[Tensor | ndarray, ...]) ndarray[Any, dtype[_ScalarType_co]] | tuple[ndarray[Any, dtype[_ScalarType_co]], ...][source]

Convert every item in a tuple of tensors/arrays into an array.

Parameters:

x – A tensor, array, or a tuple of potentially mixed tensors and arrays.

Returns:

x, with everything converted to array.
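A quick sketch of the conversion behavior:

import numpy as np
import torch

from ax.models.model_utils import as_array

single = as_array(torch.tensor([1.0, 2.0]))      # a single numpy array
mixed = as_array((torch.ones(2), np.zeros(2)))   # a tuple of numpy arrays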

ax.models.model_utils.best_in_sample_point(Xs: list[Tensor] | list[ndarray[Any, dtype[_ScalarType_co]]], model: TorchModelLike, bounds: list[tuple[float, float]], objective_weights: Tensor | ndarray | None, outcome_constraints: tuple[Tensor | ndarray, Tensor | ndarray] | None = None, linear_constraints: tuple[Tensor | ndarray, Tensor | ndarray] | None = None, fixed_features: dict[int, float] | None = None, risk_measure: RiskMeasureMCObjective | None = None, options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[Tensor | ndarray, float] | None[source]

Select the best point that has been observed.

Implements two approaches to selecting the best point.

For both approaches, only points that satisfy parameter space constraints (bounds, linear_constraints, fixed_features) will be returned. Points must also be observed for all objective and constraint outcomes. Returned points may violate outcome constraints, depending on the method below.

1: Select the point that maximizes the expected utility: (objective_weights^T posterior_objective_means - baseline) * Prob(feasible). Here, baseline should be selected so that at least one point has positive utility; it can be specified in the options dict, otherwise min(objective_weights^T posterior_objective_means) over the observed points will be used.

2: Select the best-objective point that is feasible with at least probability p.

The following quantities may be specified in the options dict:

  • best_point_method: ‘max_utility’ (default) or ‘feasible_threshold’ to select between the two approaches described above.

  • utility_baseline: Value for the baseline used in max_utility approach. If not provided, defaults to min objective value.

  • probability_threshold: Threshold for the feasible_threshold approach. Defaults to p=0.95.

  • feasibility_mc_samples: Number of MC samples used for estimating the probability of feasibility (defaults to 10,000).

Parameters:
  • Xs – Training data for the points, among which to select the best.

  • model – A Torch model or Surrogate.

  • bounds – A list of (lower, upper) tuples for each feature.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value in the best point.

  • risk_measure – An optional risk measure for reporting best robust point.

  • options – A config dictionary with settings described above.

Returns:

A two-element tuple, or None if no feasible point exists, containing

  • d-array of the best point.

  • utility at the best point.

ax.models.model_utils.best_observed_point(model: TorchModelLike, bounds: list[tuple[float, float]], objective_weights: Tensor | ndarray | None, outcome_constraints: tuple[Tensor | ndarray, Tensor | ndarray] | None = None, linear_constraints: tuple[Tensor | ndarray, Tensor | ndarray] | None = None, fixed_features: dict[int, float] | None = None, risk_measure: RiskMeasureMCObjective | None = None, options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) Tensor | ndarray | None[source]

Select the best point that has been observed.

Implements two approaches to selecting the best point.

For both approaches, only points that satisfy parameter space constraints (bounds, linear_constraints, fixed_features) will be returned. Points must also be observed for all objective and constraint outcomes. Returned points may violate outcome constraints, depending on the method below.

1: Select the point that maximizes the expected utility: (objective_weights^T posterior_objective_means - baseline) * Prob(feasible). Here, baseline should be selected so that at least one point has positive utility; it can be specified in the options dict, otherwise min(objective_weights^T posterior_objective_means) over the observed points will be used.

2: Select the best-objective point that is feasible with at least probability p.

The following quantities may be specified in the options dict:

  • best_point_method: ‘max_utility’ (default) or ‘feasible_threshold’ to select between the two approaches described above.

  • utility_baseline: Value for the baseline used in max_utility approach. If not provided, defaults to min objective value.

  • probability_threshold: Threshold for the feasible_threshold approach. Defaults to p=0.95.

  • feasibility_mc_samples: Number of MC samples used for estimating the probability of feasibility (defaults to 10,000).

Parameters:
  • model – A Torch model or Surrogate.

  • bounds – A list of (lower, upper) tuples for each feature.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value in the best point.

  • risk_measure – An optional risk measure for reporting best robust point.

  • options – A config dictionary with settings described above.

Returns:

A d-array of the best point, or None if no feasible point exists.

ax.models.model_utils.check_duplicate(point: ndarray[Any, dtype[_ScalarType_co]], points: ndarray[Any, dtype[_ScalarType_co]]) bool[source]

Check if a point exists in another array.

Parameters:
  • point – Newly generated point to check.

  • points – Points previously generated.

Returns:

True if the point is contained in points, else False

ax.models.model_utils.check_param_constraints(linear_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]], point: ndarray[Any, dtype[_ScalarType_co]]) tuple[bool, ndarray[Any, dtype[_ScalarType_co]]][source]

Check if a point satisfies parameter constraints.

Parameters:
  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • point – A candidate point in d-dimensional space, as a (1 x d) matrix.

Returns:

2-element tuple containing

  • Flag that is True if all constraints are satisfied by the point.

  • Indices of constraints which are violated by the point.
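For example, checking one linear constraint x_0 + x_1 <= 1 against a feasible point (a sketch with made-up values):

import numpy as np

from ax.models.model_utils import check_param_constraints

A = np.array([[1.0, 1.0]])  # one constraint on 2-d x: x_0 + x_1 <= 1
b = np.array([[1.0]])
feasible, violated = check_param_constraints(
    linear_constraints=(A, b),
    point=np.array([[0.3, 0.4]]),
)
# feasible is True and violated is empty, since 0.3 + 0.4 <= 1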

ax.models.model_utils.enumerate_discrete_combinations(discrete_choices: Mapping[int, Sequence[float]]) list[dict[int, float]][source]
ax.models.model_utils.filter_constraints_and_fixed_features(X: Tensor | ndarray, bounds: list[tuple[float, float]], linear_constraints: tuple[Tensor | ndarray, Tensor | ndarray] | None = None, fixed_features: dict[int, float] | None = None) Tensor | ndarray[source]

Filter points to those that satisfy bounds, linear_constraints, and fixed_features.

Parameters:
  • X – A tensor or array of points.

  • bounds – A list of (lower, upper) tuples for each feature.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value in the best point.

Returns:

Feasible points.

ax.models.model_utils.get_observed(Xs: list[Tensor] | list[ndarray[Any, dtype[_ScalarType_co]]], objective_weights: Tensor | ndarray, outcome_constraints: tuple[Tensor | ndarray, Tensor | ndarray] | None = None) Tensor | ndarray[source]

Filter points to those that are observed for objective outcomes and outcomes that show up in outcome_constraints (if there are any).

Parameters:
  • Xs – A list of m (k_i x d) feature matrices X. Number of rows k_i can vary from i=1,…,m.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

Returns:

Points observed for all objective outcomes and outcome constraints.

ax.models.model_utils.mk_discrete_choices(ssd: SearchSpaceDigest, fixed_features: Mapping[int, float] | None = None) Mapping[int, Sequence[float]][source]
ax.models.model_utils.rejection_sample(gen_unconstrained: Callable[[int, int, ndarray[Any, dtype[_ScalarType_co]], dict[int, float] | None], ndarray[Any, dtype[_ScalarType_co]]], n: int, d: int, tunable_feature_indices: ndarray[Any, dtype[_ScalarType_co]], linear_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, deduplicate: bool = False, max_draws: int | None = None, fixed_features: dict[int, float] | None = None, rounding_func: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, existing_points: ndarray[Any, dtype[_ScalarType_co]] | None = None) tuple[ndarray[Any, dtype[_ScalarType_co]], int][source]

Rejection sample in parameter space. Parameter space is typically [0, 1] for all tunable parameters.

Models must implement a gen_unconstrained method in order to support rejection sampling via this utility.

Parameters:
  • gen_unconstrained – A callable that generates unconstrained points in the parameter space. This is typically the _gen_unconstrained method of a RandomModel.

  • n – Number of samples to generate.

  • d – Dimensionality of the parameter space.

  • tunable_feature_indices – Indices of the tunable features in the parameter space.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • deduplicate – If true, reject points that are duplicates of previously generated points. The points are deduplicated after applying the rounding function.

  • max_draws – Maximum number of attempted draws before giving up.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • rounding_func – A function that rounds an optimization result appropriately (e.g., according to round-trip transformations).

  • existing_points – A set of previously generated points to use for deduplication. These should be provided in the parameter space the model operates in.
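A sketch of how this utility might be driven directly, with a hypothetical stand-in for a RandomModel's _gen_unconstrained method (uniform draws) and a single linear constraint:

import numpy as np

from ax.models.model_utils import add_fixed_features, rejection_sample

def gen_unconstrained(n, d, tunable_feature_indices, fixed_features=None):
    # Hypothetical stand-in: uniform draws over the tunable features,
    # lifted into the full d-dimensional space.
    tunable = np.random.rand(n, len(tunable_feature_indices))
    return add_fixed_features(tunable, d, fixed_features, tunable_feature_indices)

# Keep only draws that satisfy x_0 + x_1 <= 1.
points, attempted_draws = rejection_sample(
    gen_unconstrained=gen_unconstrained,
    n=5,
    d=2,
    tunable_feature_indices=np.array([0, 1]),
    linear_constraints=(np.array([[1.0, 1.0]]), np.array([[1.0]])),
    max_draws=10000,
)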

ax.models.model_utils.tunable_feature_indices(bounds: list[tuple[float, float]], fixed_features: dict[int, float] | None = None) ndarray[Any, dtype[_ScalarType_co]][source]

Get the feature indices of tunable features.

Parameters:
  • bounds – A list of (lower, upper) tuples for each column of X.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

Returns:

The indices of tunable features.
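For example:

from ax.models.model_utils import tunable_feature_indices

bounds = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
idx = tunable_feature_indices(bounds=bounds, fixed_features={1: 0.5})
# idx is array([0, 2]): feature 1 is fixed, so only features 0 and 2 are tunable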

ax.models.model_utils.validate_bounds(bounds: list[tuple[float, float]], fixed_feature_indices: ndarray[Any, dtype[_ScalarType_co]]) None[source]

Ensure the requested space is [0,1]^d.

Parameters:
  • bounds – A list of d (lower, upper) tuples for each column of X.

  • fixed_feature_indices – Indices of features which are fixed at a particular value.

ax.models.types

ax.models.winsorization_config module

class ax.models.winsorization_config.WinsorizationConfig(lower_quantile_margin: float = 0.0, upper_quantile_margin: float = 0.0, lower_boundary: float | None = None, upper_boundary: float | None = None)[source]

Bases: object

Dataclass for storing Winsorization configuration parameters.

Attributes:

  • lower_quantile_margin: Winsorization will increase any metric value below this quantile to this quantile's value.

  • upper_quantile_margin: Winsorization will decrease any metric value above this quantile to this quantile's value. NOTE: this quantile will be inverted before any operations, e.g., a value of 0.2 will decrease values above the 80th percentile to the value of the 80th percentile.

  • lower_boundary: If this value is less than the metric value corresponding to lower_quantile_margin, set metric values below lower_boundary to lower_boundary and leave larger values unaffected.

  • upper_boundary: If this value is greater than the metric value corresponding to upper_quantile_margin, set metric values above upper_boundary to upper_boundary and leave smaller values unaffected.

lower_boundary: float | None = None
lower_quantile_margin: float = 0.0
upper_boundary: float | None = None
upper_quantile_margin: float = 0.0
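For example, a config that winsorizes the bottom 5% and the top 20% of metric values (recall that the upper margin is inverted):

from ax.models.winsorization_config import WinsorizationConfig

# Raise values below the 5th percentile up to the 5th percentile, and
# lower values above the 80th percentile down to the 80th percentile.
config = WinsorizationConfig(
    lower_quantile_margin=0.05,
    upper_quantile_margin=0.2,
)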

Discrete Models

ax.models.discrete.eb_thompson module

class ax.models.discrete.eb_thompson.EmpiricalBayesThompsonSampler(num_samples: int = 10000, min_weight: float | None = None, uniform_weights: bool = False)[source]

Bases: ThompsonSampler

Generator for Thompson sampling using Empirical Bayes estimates.

The generator applies positive-part James-Stein Estimator to the data passed in via fit and then performs Thompson Sampling.

ax.models.discrete.full_factorial module

class ax.models.discrete.full_factorial.FullFactorialGenerator(max_cardinality: int = 100, check_cardinality: bool = True)[source]

Bases: DiscreteModel

Generator for full factorial designs.

Generates arms for all possible combinations of parameter values, each with weight 1.

The value of n supplied to gen will be ignored and a warning logged, as the number of arms generated is determined by the list of parameter values. To suppress the warning, use n = -1.

gen(n: int, parameter_values: list[list[None | str | bool | float | int]], objective_weights: ndarray[Any, dtype[_ScalarType_co]] | None, outcome_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, fixed_features: dict[int, None | str | bool | float | int] | None = None, pending_observations: list[list[list[None | str | bool | float | int]]] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[list[list[None | str | bool | float | int]], list[float], dict[str, Any]][source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • parameter_values – A list of possible values for each parameter.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • pending_observations – A list of m lists of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome i.

  • model_gen_options – A config dictionary that can contain model-specific options.

Returns:

3-element tuple containing

  • List of n generated points, where each point is represented by a list of parameter values.

  • List of weights for each of the n points.

  • Dictionary of model-specific generation metadata.
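A small usage sketch (the parameter values are arbitrary placeholders):

from ax.models.discrete.full_factorial import FullFactorialGenerator

generator = FullFactorialGenerator()
points, weights, _ = generator.gen(
    n=-1,  # -1 suppresses the warning that n is ignored
    parameter_values=[[0.0, 1.0], ["red", "blue"]],
    objective_weights=None,
)
# points enumerates all four combinations, each with weight 1.0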

ax.models.discrete.thompson module

class ax.models.discrete.thompson.ThompsonSampler(num_samples: int = 10000, min_weight: float | None = None, uniform_weights: bool = False)[source]

Bases: DiscreteModel

Generator for Thompson sampling.

The generator performs Thompson sampling on the data passed in via fit. Arms are given weight proportional to the probability that they are winners, according to Monte Carlo simulations.

fit(Xs: list[list[list[None | str | bool | float | int]]], Ys: list[list[float]], Yvars: list[list[float]], parameter_values: list[list[None | str | bool | float | int]], outcome_names: list[str]) None[source]

Fit model to m outcomes.

Parameters:
  • Xs – A list of m lists X of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome.

  • Ys – The corresponding list of m lists Y, each of length k_i, for each outcome.

  • Yvars – The variances of each entry in Ys, same shape.

  • parameter_values – A list of possible values for each parameter.

  • outcome_names – A list of m outcome names.

gen(n: int, parameter_values: list[list[None | str | bool | float | int]], objective_weights: ndarray[Any, dtype[_ScalarType_co]] | None, outcome_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, fixed_features: dict[int, None | str | bool | float | int] | None = None, pending_observations: list[list[list[None | str | bool | float | int]]] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[list[list[None | str | bool | float | int]], list[float], dict[str, Any]][source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • parameter_values – A list of possible values for each parameter.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • pending_observations – A list of m lists of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome i.

  • model_gen_options – A config dictionary that can contain model-specific options.

Returns:

3-element tuple containing

  • List of n generated points, where each point is represented by a list of parameter values.

  • List of weights for each of the n points.

  • Dictionary of model-specific generation metadata.

predict(X: list[list[None | str | bool | float | int]]) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]][source]

Predict outcomes at the given points.

Parameters:

X – List of the j parameterizations at which to make predictions.

Returns:

2-element tuple containing

  • (j x m) array of outcome predictions at X.

  • (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

Random Models

ax.models.random.base module

class ax.models.random.base.RandomModel(deduplicate: bool = True, seed: int | None = None, init_position: int = 0, generated_points: ndarray[Any, dtype[_ScalarType_co]] | None = None, fallback_to_sample_polytope: bool = False)[source]

Bases: Model

This class specifies the basic skeleton for a random model.

As random generators do not make use of models, they do not implement the fit or predict methods.

These models do not need data, or optimization configs.

To satisfy search space parameter constraints, these models can use rejection sampling. To enable rejection sampling for a subclass, only _gen_samples needs to be implemented; alternatively, _gen_unconstrained/gen can be implemented directly.

deduplicate

If True (defaults to True), a single instantiation of the model will not return the same point twice. This flag is used in rejection sampling.

seed

An optional seed value for scrambling.

init_position

The initial state of the generator. This is the number of samples to fast-forward before generating new samples. Used to ensure that the re-loaded generator will continue generating from the same sequence rather than starting from scratch.

generated_points

A set of previously generated points to use for deduplication. These should be provided in the raw transformed space the model operates in.

fallback_to_sample_polytope

If True, when rejection sampling fails, we fall back to the HitAndRunPolytopeSampler.

gen(n: int, bounds: list[tuple[float, float]], linear_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, fixed_features: dict[int, float] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None, rounding_func: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[_ScalarType_co]]] | None = None) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]][source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • bounds – A list of (lower, upper) tuples for each column of X. Defined on [0, 1]^d.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • model_gen_options – A config dictionary that is passed along to the model.

  • rounding_func – A function that rounds an optimization result appropriately (e.g., according to round-trip transformations).

Returns:

2-element tuple containing

  • (n x d) array of generated points.

  • Uniform weights, an n-array of ones for each point.

ax.models.random.uniform module

class ax.models.random.uniform.UniformGenerator(deduplicate: bool = True, seed: int | None = None, init_position: int = 0, generated_points: ndarray[Any, dtype[_ScalarType_co]] | None = None, fallback_to_sample_polytope: bool = False)[source]

Bases: RandomModel

This class specifies a uniform random generation algorithm.

As a uniform generator does not make use of a model, it does not implement the fit or predict methods.

See base RandomModel for a description of model attributes.

ax.models.random.sobol module

class ax.models.random.sobol.SobolGenerator(deduplicate: bool = True, seed: int | None = None, init_position: int = 0, scramble: bool = True, generated_points: ndarray[Any, dtype[_ScalarType_co]] | None = None, fallback_to_sample_polytope: bool = False)[source]

Bases: RandomModel

This class specifies the generation algorithm for a Sobol generator.

As Sobol does not make use of a model, it does not implement the fit or predict methods.

scramble

If True, permutes the parameter values among the elements of the Sobol sequence. Default is True.

See base RandomModel for a description of the remaining attributes.
property engine: SobolEngine | None

Return a singleton SobolEngine.

gen(n: int, bounds: list[tuple[float, float]], linear_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None = None, fixed_features: dict[int, float] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None, rounding_func: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[_ScalarType_co]]] | None = None) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]][source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • bounds – A list of (lower, upper) tuples for each column of X.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • rounding_func – A function that rounds an optimization result appropriately (e.g., according to round-trip transformations).

Returns:

2-element tuple containing

  • (n x d) array of generated points.

  • Uniform weights, an n-array of ones for each point.
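A small usage sketch (bounds and the fixed feature are arbitrary placeholders):

from ax.models.random.sobol import SobolGenerator

generator = SobolGenerator(seed=0)
points, weights = generator.gen(
    n=8,
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    fixed_features={1: 0.5},  # hold the second feature at 0.5
)
# points is an (8 x 2) array whose second column is all 0.5;
# weights is an 8-array of ones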

init_engine(n_tunable_features: int) SobolEngine[source]

Initialize the singleton SobolEngine; this happens only upon gen.

Parameters:

n_tunable_features – The number of features which can be searched over.

Returns:

SobolEngine, which can generate Sobol points.

Torch Models & Utilities

ax.models.torch.botorch module

class ax.models.torch.botorch.BotorchModel(model_constructor: ~collections.abc.Callable[[list[~torch.Tensor], list[~torch.Tensor], list[~torch.Tensor], list[int], list[int], list[str], dict[str, ~torch.Tensor] | None, ~typing.Any], ~botorch.models.model.Model] = <function get_and_fit_model>, model_predictor: ~collections.abc.Callable[[~botorch.models.model.Model, ~torch.Tensor, bool], tuple[~torch.Tensor, ~torch.Tensor]] = <function predict_from_model>, acqf_constructor: ~ax.models.torch.botorch_defaults.TAcqfConstructor = <function get_qLogNEI>, acqf_optimizer: ~collections.abc.Callable[[~botorch.acquisition.acquisition.AcquisitionFunction, ~torch.Tensor, int, list[tuple[~torch.Tensor, ~torch.Tensor, float]] | None, list[tuple[~torch.Tensor, ~torch.Tensor, float]] | None, dict[int, float] | None, ~collections.abc.Callable[[~torch.Tensor], ~torch.Tensor] | None, ~typing.Any], tuple[~torch.Tensor, ~torch.Tensor]] = <function scipy_optimizer>, best_point_recommender: ~collections.abc.Callable[[~ax.models.torch_base.TorchModel, list[tuple[float, float]], ~torch.Tensor, tuple[~torch.Tensor, ~torch.Tensor] | None, tuple[~torch.Tensor, ~torch.Tensor] | None, dict[int, float] | None, dict[str, int | float | str | ~botorch.acquisition.acquisition.AcquisitionFunction | list[str] | dict[int, ~typing.Any] | dict[str, ~typing.Any] | ~ax.core.optimization_config.OptimizationConfig | ~ax.models.winsorization_config.WinsorizationConfig | None] | None, dict[int, float] | None], ~torch.Tensor | None] = <function recommend_best_observed_point>, refit_on_cv: bool = False, warm_start_refitting: bool = True, use_input_warping: bool = False, use_loocv_pseudo_likelihood: bool = False, prior: dict[str, ~typing.Any] | None = None, **kwargs: ~typing.Any)[source]

Bases: TorchModel

Customizable botorch model.

By default, this uses a Log Noisy Expected Improvement (qLogNEI) acquisition function on top of a model made up of separate GPs, one for each outcome. This behavior can be modified by providing custom implementations of the following components:

  • a model_constructor that instantiates and fits a model on data

  • a model_predictor that predicts outcomes using the fitted model

  • an acqf_constructor that creates an acquisition function from a fitted model

  • an acqf_optimizer that optimizes the acquisition function

  • a best_point_recommender that recommends a current "best" point (i.e., what the model recommends if the learning process ended now)

Parameters:
  • model_constructor – A callable that instantiates and fits a model on data, with signature as described below.

  • model_predictor – A callable that predicts using the fitted model, with signature as described below.

  • acqf_constructor – A callable that creates an acquisition function from a fitted model, with signature as described below.

  • acqf_optimizer – A callable that optimizes the acquisition function, with signature as described below.

  • best_point_recommender – A callable that recommends the best point, with signature as described below.

  • refit_on_cv – If True, refit the model for each fold when performing cross-validation.

  • warm_start_refitting – If True, start model refitting from previous model parameters in order to speed up the fitting process.

  • prior – An optional dictionary that contains the specification of the GP model prior. Currently, the keys include:

    • covar_module_prior: prior on the covariance matrix, e.g. {"lengthscale_prior": GammaPrior(3.0, 6.0)}.

    • type: type of prior on the task covariance matrix, e.g. LKJCovariancePrior.

    • sd_prior: A scalar prior over nonnegative numbers, used for the default LKJCovariancePrior task_covar_prior.

    • eta: The eta parameter on the default LKJ task_covar_prior.

Call signatures:

model_constructor(
    Xs,
    Ys,
    Yvars,
    task_features,
    fidelity_features,
    metric_names,
    state_dict,
    **kwargs,
) -> model

Here Xs, Ys, Yvars are lists of tensors (one element per outcome), task_features identifies columns of Xs that should be modeled as a task, fidelity_features is a list of ints that specify the positions of fidelity parameters in Xs, metric_names provides the names of each Y in Ys, state_dict is a PyTorch module state dict, and model is a BoTorch Model. Optional kwargs are passed through from the BotorchModel constructor. This callable is assumed to return a fitted BoTorch model that has the same dtype and lives on the same device as the input tensors.

model_predictor(model, X) -> [mean, cov]

Here model is a fitted botorch model, X is a tensor of candidate points, and mean and cov are the posterior mean and covariance, respectively.

acqf_constructor(
    model,
    objective_weights,
    outcome_constraints,
    X_observed,
    X_pending,
    **kwargs,
) -> acq_function

Here model is a botorch Model, objective_weights is a tensor of weights for the model outputs, outcome_constraints is a tuple of tensors describing the (linear) outcome constraints, X_observed are previously observed points, and X_pending are points whose evaluation is pending. acq_function is a BoTorch acquisition function crafted from these inputs. For additional details on the arguments, see get_qLogNEI.

acqf_optimizer(
    acq_function,
    bounds,
    n,
    inequality_constraints,
    equality_constraints,
    fixed_features,
    rounding_func,
    **kwargs,
) -> candidates

Here acq_function is a BoTorch AcquisitionFunction, bounds is a tensor containing bounds on the parameters, n is the number of candidates to be generated, inequality_constraints are inequality constraints on parameter values, fixed_features specifies features that should be fixed during generation, and rounding_func is a callback that rounds an optimization result appropriately. candidates is a tensor of generated candidates. For additional details on the arguments, see scipy_optimizer.

best_point_recommender(
    model,
    bounds,
    objective_weights,
    outcome_constraints,
    linear_constraints,
    fixed_features,
    model_gen_options,
    target_fidelities,
) -> candidates

Here model is a TorchModel, bounds is a list of tuples containing bounds on the parameters, objective_weights is a tensor of weights for the model outputs, outcome_constraints is a tuple of tensors describing the (linear) outcome constraints, linear_constraints is a tuple of tensors describing constraints on the design, fixed_features specifies features that should be fixed during generation, model_gen_options is a config dictionary that can contain model-specific options, and target_fidelities is a map from fidelity feature column indices to their respective target fidelities, used for multi-fidelity optimization problems.
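To make these customization points concrete, a sketch that instantiates BotorchModel with two of its documented defaults spelled out explicitly; it assumes get_qLogNEI and scipy_optimizer are importable from ax.models.torch.botorch_defaults, as the default values in the signature above suggest:

from ax.models.torch.botorch import BotorchModel
from ax.models.torch.botorch_defaults import get_qLogNEI, scipy_optimizer

# Equivalent to BotorchModel() with defaults; swap in custom callables
# with matching signatures to change each component's behavior.
model = BotorchModel(
    acqf_constructor=get_qLogNEI,
    acqf_optimizer=scipy_optimizer,
    refit_on_cv=False,
    warm_start_refitting=True,
)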

Xs: list[Tensor]
Ys: list[Tensor]
Yvars: list[Tensor]
best_point(search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) Tensor | None[source]

Identify the current best point, satisfying the constraints in the same format as to gen.

Return None if no such point can be identified.

Parameters:
  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

d-tensor of the best point.

cross_validate(datasets: list[SupervisedDataset], X_test: Tensor, use_posterior_predictive: bool = False, **kwargs: Any) tuple[Tensor, Tensor][source]

Do cross validation with the given training and test sets.

Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • X_test – (j x d) tensor of the j points at which to make predictions.

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in X.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

feature_importances() ndarray[Any, dtype[_ScalarType_co]][source]
fit(datasets: list[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) None[source]

Fit model to m outcomes.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

gen(n: int, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) TorchGenResults[source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

A TorchGenResult container.

property model: Model
predict(X: Tensor) tuple[Tensor, Tensor][source]

Predict outcomes at the given points.

Parameters:

X – (j x d) tensor of the j points at which to make predictions.

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

property search_space_digest: SearchSpaceDigest
ax.models.torch.botorch.get_feature_importances_from_botorch_model(model: Model | ModuleList | None) ndarray[Any, dtype[_ScalarType_co]][source]

Get feature importances from a BoTorch model or ModuleList of models.

Parameters:

model – A BoTorch Model (or ModuleList of models) to get feature importances from.

Returns:

The feature importances as a numpy array where each row sums to 1.

ax.models.torch.botorch.get_rounding_func(rounding_func: Callable[[Tensor], Tensor] | None) Callable[[Tensor], Tensor] | None[source]

ax.models.torch.botorch_defaults module

class ax.models.torch.botorch_defaults.TAcqfConstructor(*args, **kwargs)[source]

Bases: Protocol

ax.models.torch.botorch_defaults.get_NEI() None[source]

TAcqfConstructor instantiating qNEI. See docstring of get_qEI for details.

ax.models.torch.botorch_defaults.get_acqf(acquisition_function_name: str) Callable[[Callable[[], None]], TAcqfConstructor][source]

Returns a decorator whose wrapper function instantiates an acquisition function.

NOTE: This is a decorator factory instead of a simple factory because serialization of BoTorch model kwargs requires callables to have module-level paths, and closures created by a simple factory do not have such paths. By wrapping "empty" module-level functions with this decorator, we ensure that they are serialized correctly while also reducing code duplication.

Example

>>> @get_acqf("qEI")
... def get_qEI() -> None:
...     pass
>>> acqf = get_qEI(
...     model=model,
...     objective_weights=objective_weights,
...     outcome_constraints=outcome_constraints,
...     X_observed=X_observed,
...     X_pending=X_pending,
...     **kwargs,
... )
>>> type(acqf)
botorch.acquisition.monte_carlo.qExpectedImprovement
Parameters:

acquisition_function_name – The name of the acquisition function to be instantiated by the returned function.

Returns:

A decorator whose wrapper function is a TAcqfConstructor, i.e. it requires a model, objective_weights, and optional outcome_constraints, X_observed, and X_pending as inputs, as well as kwargs, and returns an AcquisitionFunction instance that corresponds to acquisition_function_name.

ax.models.torch.botorch_defaults.get_and_fit_model(Xs: list[Tensor], Ys: list[Tensor], Yvars: list[Tensor], task_features: list[int], fidelity_features: list[int], metric_names: list[str], state_dict: dict[str, Tensor] | None = None, refit_model: bool = True, use_input_warping: bool = False, use_loocv_pseudo_likelihood: bool = False, prior: dict[str, Any] | None = None, *, multitask_gp_ranks: dict[str, Prior | float] | None = None, **kwargs: Any) GPyTorchModel[source]

Instantiates and fits a botorch GPyTorchModel using the given data. N.B. Currently, the logic for choosing ModelListGP vs other models is handled using if-else statements in lines 96-137. In the future, this logic should be taken care of by modular botorch.

Parameters:
  • Xs – List of X data, one tensor per outcome.

  • Ys – List of Y data, one tensor per outcome.

  • Yvars – List of observed variance of Ys.

  • task_features – List of columns of X that are tasks.

  • fidelity_features – List of columns of X that are fidelity parameters.

  • metric_names – Names of each outcome Y in Ys.

  • state_dict – If provided, will set model parameters to this state dictionary. Otherwise, will fit the model.

  • refit_model – Flag for refitting model.

  • prior – Optional dictionary that contains the specification of the GP model prior. Currently, the keys include:

    • covar_module_prior: prior on the covariance matrix, e.g. {"lengthscale_prior": GammaPrior(3.0, 6.0)}.

    • type: type of prior on the task covariance matrix, e.g. LKJCovariancePrior.

    • sd_prior: A scalar prior over nonnegative numbers, used for the default LKJCovariancePrior task_covar_prior.

    • eta: The eta parameter on the default LKJ task_covar_prior.

  • kwargs – Passed to _get_model.

Returns:

A fitted GPyTorchModel.

ax.models.torch.botorch_defaults.get_qEI() None[source]

A TAcqfConstructor to instantiate a qEI acquisition function. The function body is filled in by the decorator function get_acqf to simultaneously reduce code duplication and allow serialization in Ax. TODO: Deprecate with legacy Ax model.

ax.models.torch.botorch_defaults.get_qLogEI() None[source]

TAcqfConstructor instantiating qLogEI. See docstring of get_qEI for details.

ax.models.torch.botorch_defaults.get_qLogNEI() None[source]

TAcqfConstructor instantiating qLogNEI. See docstring of get_qEI for details.

ax.models.torch.botorch_defaults.get_warping_transform(d: int, batch_shape: Size | None = None, task_feature: int | None = None) Warp[source]

Construct input warping transform.

Parameters:
  • d – The dimension of the input, including task features

  • batch_shape – The batch_shape of the model

  • task_feature – The index of the task feature

Returns:

The input warping transform.
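
A minimal usage sketch for a 3-dimensional input with no batching or task feature:

>>> from ax.models.torch.botorch_defaults import get_warping_transform
>>> warp = get_warping_transform(d=3)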

ax.models.torch.botorch_defaults.recommend_best_observed_point(model: TorchModel, bounds: list[tuple[float, float]], objective_weights: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, linear_constraints: tuple[Tensor, Tensor] | None = None, fixed_features: dict[int, float] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None, target_fidelities: dict[int, float] | None = None) Tensor | None[source]

A wrapper around ax.models.model_utils.best_observed_point for TorchModel that recommends a best point from previously observed points using either a “max_utility” or “feasible_threshold” strategy.

Parameters:
  • model – A TorchModel.

  • bounds – A list of (lower, upper) tuples for each column of X.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value in the best point.

  • model_gen_options – A config dictionary that can contain model-specific options. See TorchOptConfig for details.

  • target_fidelities – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Returns:

A d-array of the best point, or None if no feasible point was observed.

ax.models.torch.botorch_defaults.recommend_best_out_of_sample_point(model: TorchModel, bounds: list[tuple[float, float]], objective_weights: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, linear_constraints: tuple[Tensor, Tensor] | None = None, fixed_features: dict[int, float] | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None, target_fidelities: dict[int, float] | None = None) Tensor | None[source]

Identify the current best point by optimizing the posterior mean of the model. This is “out-of-sample” because it considers un-observed designs as well.

Return None if no such point can be identified.

Parameters:
  • model – A TorchModel.

  • bounds – A list of (lower, upper) tuples for each column of X.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value in the best point.

  • model_gen_options – A config dictionary that can contain model-specific options. See TorchOptConfig for details.

  • target_fidelities – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Returns:

A d-array of the best point, or None if no feasible point exists.

ax.models.torch.botorch_defaults.scipy_optimizer(acq_function: AcquisitionFunction, bounds: Tensor, n: int, inequality_constraints: list[tuple[Tensor, Tensor, float]] | None = None, equality_constraints: list[tuple[Tensor, Tensor, float]] | None = None, fixed_features: dict[int, float] | None = None, rounding_func: Callable[[Tensor], Tensor] | None = None, *, num_restarts: int = 20, raw_samples: int | None = None, joint_optimization: bool = False, options: dict[str, bool | float | int | str] | None = None) tuple[Tensor, Tensor][source]

Optimizer using scipy’s minimize module on a numpy adaptor.

Parameters:
  • acq_function – A botorch AcquisitionFunction.

  • bounds – A 2 x d-dim tensor, where bounds[0] (bounds[1]) are the lower (upper) bounds of the feasible hyperrectangle.

  • n – The number of candidates to generate.

  • inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.

  • equality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) == rhs.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • rounding_func – A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).

Returns:

2-element tuple containing

  • An n x d-dim tensor of generated candidates.

  • In the case of joint optimization, a scalar tensor containing the joint acquisition value of the n points. In the case of sequential optimization, an n-dim tensor of conditional acquisition values, where the i-th element is the expected acquisition value conditional on having observed candidates 0,1,…,i-1.
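
A usage sketch, assuming acq_function is an already-constructed BoTorch AcquisitionFunction over the 2-dimensional unit box:

>>> import torch
>>> from ax.models.torch.botorch_defaults import scipy_optimizer
>>> bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d
>>> candidates, acq_values = scipy_optimizer(
...     acq_function=acq_function,  # assumed to exist, e.g. from get_qLogNEI above
...     bounds=bounds,
...     n=2,
...     num_restarts=10,
...     raw_samples=256,
... )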

ax.models.torch.botorch_moo module

class ax.models.torch.botorch_moo.MultiObjectiveBotorchModel(model_constructor: ~collections.abc.Callable[[list[~torch.Tensor], list[~torch.Tensor], list[~torch.Tensor], list[int], list[int], list[str], dict[str, ~torch.Tensor] | None, ~typing.Any], ~botorch.models.model.Model] = <function get_and_fit_model>, model_predictor: ~collections.abc.Callable[[~botorch.models.model.Model, ~torch.Tensor, bool], tuple[~torch.Tensor, ~torch.Tensor]] = <function predict_from_model>, acqf_constructor: ~ax.models.torch.botorch_defaults.TAcqfConstructor = <function get_qLogNEHVI>, acqf_optimizer: ~collections.abc.Callable[[~botorch.acquisition.acquisition.AcquisitionFunction, ~torch.Tensor, int, list[tuple[~torch.Tensor, ~torch.Tensor, float]] | None, list[tuple[~torch.Tensor, ~torch.Tensor, float]] | None, dict[int, float] | None, ~collections.abc.Callable[[~torch.Tensor], ~torch.Tensor] | None, ~typing.Any], tuple[~torch.Tensor, ~torch.Tensor]] = <function scipy_optimizer>, best_point_recommender: ~collections.abc.Callable[[~ax.models.torch_base.TorchModel, list[tuple[float, float]], ~torch.Tensor, tuple[~torch.Tensor, ~torch.Tensor] | None, tuple[~torch.Tensor, ~torch.Tensor] | None, dict[int, float] | None, dict[str, int | float | str | ~botorch.acquisition.acquisition.AcquisitionFunction | list[str] | dict[int, ~typing.Any] | dict[str, ~typing.Any] | ~ax.core.optimization_config.OptimizationConfig | ~ax.models.winsorization_config.WinsorizationConfig | None] | None, dict[int, float] | None], ~torch.Tensor | None] = <function recommend_best_observed_point>, frontier_evaluator: ~collections.abc.Callable[[~ax.models.torch_base.TorchModel, ~torch.Tensor, ~torch.Tensor | None, ~torch.Tensor | None, ~torch.Tensor | None, ~torch.Tensor | None, tuple[~torch.Tensor, ~torch.Tensor] | None], tuple[~torch.Tensor, ~torch.Tensor, ~torch.Tensor]] = <function pareto_frontier_evaluator>, refit_on_cv: bool = False, warm_start_refitting: bool = False, use_input_warping: bool = False, use_loocv_pseudo_likelihood: bool = False, prior: dict[str, ~typing.Any] | None = None, **kwargs: ~typing.Any)[source]

Bases: BotorchModel

Customizable multi-objective model.

By default, this uses an Expected Hypervolume Improvement acquisition function to find the Pareto frontier of a function with multiple outcomes. This behavior can be modified by providing custom implementations of the following components:

  • a model_constructor that instantiates and fits a model on data

  • a model_predictor that predicts outcomes using the fitted model

  • an acqf_constructor that creates an acquisition function from a fitted model

  • an acqf_optimizer that optimizes the acquisition function

Parameters:
  • model_constructor – A callable that instantiates and fits a model on data, with signature as described below.

  • model_predictor – A callable that predicts using the fitted model, with signature as described below.

  • acqf_constructor – A callable that creates an acquisition function from a fitted model, with signature as described below.

  • acqf_optimizer – A callable that optimizes an acquisition function, with signature as described below.

Call signatures:

model_constructor(
    Xs,
    Ys,
    Yvars,
    task_features,
    fidelity_features,
    metric_names,
    state_dict,
    **kwargs,
) -> model

Here Xs, Ys, Yvars are lists of tensors (one element per outcome), task_features identifies columns of Xs that should be modeled as a task, fidelity_features is a list of ints that specify the positions of fidelity parameters in Xs, metric_names provides the names of each Y in Ys, state_dict is a pytorch module state dict, and model is a BoTorch Model. Optional kwargs are passed through from the BotorchModel constructor. This callable is assumed to return a fitted BoTorch model that has the same dtype and lives on the same device as the input tensors.

model_predictor(model, X) -> [mean, cov]

Here model is a fitted botorch model, X is a tensor of candidate points, and mean and cov are the posterior mean and covariance, respectively.

acqf_constructor(
    model,
    objective_weights,
    outcome_constraints,
    X_observed,
    X_pending,
    **kwargs,
) -> acq_function

Here model is a botorch Model, objective_weights is a tensor of weights for the model outputs, outcome_constraints is a tuple of tensors describing the (linear) outcome constraints, X_observed are previously observed points, and X_pending are points whose evaluation is pending. acq_function is a BoTorch acquisition function crafted from these inputs. For additional details on the arguments, see get_qLogNEHVI.

acqf_optimizer(
    acq_function,
    bounds,
    n,
    inequality_constraints,
    fixed_features,
    rounding_func,
    **kwargs,
) -> candidates

Here acq_function is a BoTorch AcquisitionFunction, bounds is a tensor containing bounds on the parameters, n is the number of candidates to be generated, inequality_constraints are inequality constraints on parameter values, fixed_features specifies features that should be fixed during generation, and rounding_func is a callback that rounds an optimization result appropriately. candidates is a tensor of generated candidates. For additional details on the arguments, see scipy_optimizer.

frontier_evaluator(
    model,
    objective_weights,
    objective_thresholds,
    X,
    Y,
    Yvar,
    outcome_constraints,
)

Here model is a botorch Model, objective_thresholds is used in hypervolume evaluations, objective_weights is a tensor of weights applied to the objectives (sign represents direction), X, Y, Yvar are tensors, outcome_constraints is a tuple of tensors describing the (linear) outcome constraints.
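
As a sketch, the default qLogNEHVI constructor can be swapped for the qLogEHVI constructor documented below:

>>> from ax.models.torch.botorch_moo import MultiObjectiveBotorchModel
>>> from ax.models.torch.botorch_moo_defaults import get_qLogEHVI
>>> moo_model = MultiObjectiveBotorchModel(acqf_constructor=get_qLogEHVI)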

Xs: list[Tensor]
Ys: list[Tensor]
Yvars: list[Tensor]
gen(n: int, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) TorchGenResults[source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

A TorchGenResults container.

ax.models.torch.botorch_moo_defaults module

References

[Daulton2020qehvi]

S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.

[Daulton2021nehvi]

S. Daulton, M. Balandat, and E. Bakshy. Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement. Advances in Neural Information Processing Systems 34, 2021.

[Ament2023logei]

S. Ament, S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy. Unexpected Improvements to Expected Improvement for Bayesian Optimization. Advances in Neural Information Processing Systems 36, 2023.

ax.models.torch.botorch_moo_defaults.get_EHVI(model: Model, objective_weights: Tensor, objective_thresholds: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, X_observed: Tensor | None = None, X_pending: Tensor | None = None, *, mc_samples: int = 128, alpha: float | None = None, seed: int | None = None) qExpectedHypervolumeImprovement[source]

Instantiates a qExpectedHypervolumeImprovement acquisition function.

Parameters:
  • model – The underlying model which the acquisition function uses to estimate acquisition values of candidates.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • objective_thresholds – A tensor containing thresholds forming a reference point from which to calculate pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models)

  • X_observed – A tensor containing points observed for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • X_pending – A tensor containing points whose evaluation is pending (i.e. that have been submitted for evaluation) present for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • mc_samples – The number of MC samples to use (default: 128).

  • alpha – The hyperparameter controlling the approximate non-dominated partitioning. A value of 0.0 means an exact partitioning is used; if None (the default), a value is selected based on the number of objectives. As the number of objectives m increases, consider increasing this parameter in order to limit computational complexity.

  • seed – The random seed for generating random starting points for optimization.

Returns:

The instantiated acquisition function.

Return type:

qExpectedHypervolumeImprovement

ax.models.torch.botorch_moo_defaults.get_NEHVI(model: Model, objective_weights: Tensor, objective_thresholds: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, X_observed: Tensor | None = None, X_pending: Tensor | None = None, *, prune_baseline: bool = True, mc_samples: int = 128, alpha: float | None = None, marginalize_dim: int | None = None, cache_root: bool = True, seed: int | None = None) qNoisyExpectedHypervolumeImprovement[source]

Instantiates a qNoisyExpectedHypervolumeImprovement acquisition function.

Parameters:
  • model – The underlying model which the acquisition function uses to estimate acquisition values of candidates.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • objective_thresholds – A tensor containing thresholds forming a reference point from which to calculate Pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models)

  • X_observed – A tensor containing points observed for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • X_pending – A tensor containing points whose evaluation is pending (i.e. that have been submitted for evaluation) present for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • prune_baseline – If True, prune the baseline points for NEI (default: True).

  • mc_samples – The number of MC samples to use (default: 128).

  • alpha – The hyperparameter controlling the approximate non-dominated partitioning. A value of 0.0 means an exact partitioning is used; if None (the default), a value is selected based on the number of objectives. As the number of objectives m increases, consider increasing this parameter in order to limit computational complexity.

  • marginalize_dim – The dimension along which to marginalize over, used for fully Bayesian models (default: None).

  • cache_root – If True, cache the root of the covariance matrix (default: True).

  • seed – The random seed for generating random starting points for optimization (default: None).

Returns:

The instantiated acquisition function.

Return type:

qNoisyExpectedHypervolumeImprovement

ax.models.torch.botorch_moo_defaults.get_qLogEHVI(model: Model, objective_weights: Tensor, objective_thresholds: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, X_observed: Tensor | None = None, X_pending: Tensor | None = None, *, mc_samples: int = 128, alpha: float | None = None, seed: int | None = None) qLogExpectedHypervolumeImprovement[source]

Instantiates a qLogExpectedHypervolumeImprovement acquisition function.

Parameters:
  • model – The underlying model which the acquisition function uses to estimate acquisition values of candidates.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • objective_thresholds – A tensor containing thresholds forming a reference point from which to calculate pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models)

  • X_observed – A tensor containing points observed for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • X_pending – A tensor containing points whose evaluation is pending (i.e. that have been submitted for evaluation) present for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • mc_samples – The number of MC samples to use (default: 128).

  • alpha – The hyperparameter controlling the approximate non-dominated partitioning. A value of 0.0 means an exact partitioning is used; if None (the default), a value is selected based on the number of objectives. As the number of objectives m increases, consider increasing this parameter in order to limit computational complexity.

  • seed – The random seed for generating random starting points for optimization.

Returns:

The instantiated acquisition function.

Return type:

qLogExpectedHypervolumeImprovement

ax.models.torch.botorch_moo_defaults.get_qLogNEHVI(model: Model, objective_weights: Tensor, objective_thresholds: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, X_observed: Tensor | None = None, X_pending: Tensor | None = None, *, prune_baseline: bool = True, mc_samples: int = 128, alpha: float | None = None, marginalize_dim: int | None = None, cache_root: bool = True, seed: int | None = None) qLogNoisyExpectedHypervolumeImprovement[source]

Instantiates a qLogNoisyExpectedHypervolumeImprovement acquisition function.

Parameters:
  • model – The underlying model which the acquisition function uses to estimate acquisition values of candidates.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • objective_thresholds – A tensor containing thresholds forming a reference point from which to calculate Pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models)

  • X_observed – A tensor containing points observed for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • X_pending – A tensor containing points whose evaluation is pending (i.e. that have been submitted for evaluation) present for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).

  • prune_baseline – If True, prune the baseline points for NEI (default: True).

  • mc_samples – The number of MC samples to use (default: 128).

  • alpha – The hyperparameter controlling the approximate non-dominated partitioning. A value of 0.0 means an exact partitioning is used; if None (the default), a value is selected based on the number of objectives. As the number of objectives m increases, consider increasing this parameter in order to limit computational complexity.

  • marginalize_dim – The dimension along which to marginalize over, used for fully Bayesian models (default: None).

  • cache_root – If True, cache the root of the covariance matrix (default: True).

  • seed – The random seed for generating random starting points for optimization (default: None).

Returns:

The instantiated acquisition function.

Return type:

qLogNoisyExpectedHypervolumeImprovement

ax.models.torch.botorch_moo_defaults.get_weighted_mc_objective_and_objective_thresholds(objective_weights: Tensor, objective_thresholds: Tensor) tuple[WeightedMCMultiOutputObjective, Tensor][source]

Construct weighted objective and apply the weights to objective thresholds.

Parameters:
  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • objective_thresholds – A tensor containing thresholds forming a reference point from which to calculate pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

Returns:

  • The objective

  • The objective thresholds

Return type:

A two-element tuple with the objective and objective thresholds
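
A small sketch of the weighting; signs encode direction, so the threshold of a minimized objective flips:

>>> import torch
>>> from ax.models.torch.botorch_moo_defaults import (
...     get_weighted_mc_objective_and_objective_thresholds,
... )
>>> objective, weighted_thresholds = (
...     get_weighted_mc_objective_and_objective_thresholds(
...         objective_weights=torch.tensor([1.0, -1.0]),  # maximize f1, minimize f2
...         objective_thresholds=torch.tensor([0.5, 2.0]),
...     )
... )
>>> # weighted_thresholds is tensor([0.5, -2.0]): weights applied elementwise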

ax.models.torch.botorch_moo_defaults.infer_objective_thresholds(model: Model, objective_weights: Tensor, bounds: list[tuple[float, float]] | None = None, outcome_constraints: tuple[Tensor, Tensor] | None = None, linear_constraints: tuple[Tensor, Tensor] | None = None, fixed_features: dict[int, float] | None = None, subset_idcs: Tensor | None = None, Xs: list[Tensor] | None = None, X_observed: Tensor | None = None, objective_thresholds: Tensor | None = None) Tensor[source]

Infer objective thresholds.

This method uses the model-estimated Pareto frontier over the in-sample points to infer absolute (not relativized) objective thresholds.

This uses a heuristic that sets the objective threshold to be a scaled nadir point, where the nadir point is scaled back based on the range of each objective across the current in-sample Pareto frontier.

See botorch.utils.multi_objective.hypervolume.infer_reference_point for details on the heuristic.

Parameters:
  • model – A fitted botorch Model.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights. These should not be subsetted.

  • bounds – A list of (lower, upper) tuples for each column of X.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. These should not be subsetted.

  • linear_constraints – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • subset_idcs – The indices of the outcomes that are modeled by the provided model. If subset_idcs is not None, this method infers whether the model is subsetted.

  • Xs – A list of m (k_i x d) feature tensors X. Number of rows k_i can vary from i=1,…,m.

  • X_observed – A n x d-dim tensor of in-sample points to use for determining the current in-sample Pareto frontier.

  • objective_thresholds – Any known objective thresholds to pass to infer_reference_point heuristic. This should not be subsetted. If only a subset of the objectives have known thresholds, the remaining objectives should be NaN. If no objective threshold was provided, this can be None.

Returns:

An m-dim tensor of objective thresholds, where the objective threshold is NaN if the outcome is not an objective.
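
A usage sketch, assuming model is a fitted two-objective BoTorch model and X_observed is an n x d tensor of in-sample points (both constructed elsewhere):

>>> import torch
>>> from ax.models.torch.botorch_moo_defaults import infer_objective_thresholds
>>> thresholds = infer_objective_thresholds(
...     model=model,  # assumed fitted model
...     objective_weights=torch.tensor([1.0, 1.0]),
...     X_observed=X_observed,  # assumed in-sample points
... )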

ax.models.torch.botorch_moo_defaults.pareto_frontier_evaluator(model: TorchModel | None, objective_weights: Tensor, objective_thresholds: Tensor | None = None, X: Tensor | None = None, Y: Tensor | None = None, Yvar: Tensor | None = None, outcome_constraints: tuple[Tensor, Tensor] | None = None) tuple[Tensor, Tensor, Tensor][source]

Return outcomes predicted to lie on a pareto frontier.

Given a model and points to evaluate, use the model to predict which points lie on the Pareto frontier.

Parameters:
  • model – Model used to predict outcomes.

  • objective_weights – An m-dim tensor of values indicating the weight to put on different outcomes. For Pareto frontiers only the sign matters.

  • objective_thresholds – A tensor containing thresholds forming a reference point from which to calculate pareto frontier hypervolume. Points that do not dominate the objective_thresholds contribute nothing to hypervolume.

  • X – A n x d tensor of features to evaluate.

  • Y – A n x m tensor of outcomes to use instead of predictions.

  • Yvar – A n x m x m tensor of input covariances (NaN if unobserved).

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

Returns:

3-element tuple containing

  • A j x m tensor of outcomes on the Pareto frontier, where j is the number of frontier points.

  • A j x m x m tensor of predictive covariances. cov[j, m1, m2] is Cov[m1@j, m2@j].

  • A j tensor of the index of each frontier point in the input Y.
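
A sketch that evaluates observed outcomes directly; per the parameter descriptions above, supplying Y bypasses model predictions and NaN Yvar marks unobserved noise:

>>> import torch
>>> from ax.models.torch.botorch_moo_defaults import pareto_frontier_evaluator
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
>>> Yvar = torch.full((3, 2, 2), float("nan"))
>>> frontier_Y, frontier_cov, idx = pareto_frontier_evaluator(
...     model=None,
...     objective_weights=torch.tensor([1.0, 1.0]),  # maximize both outcomes
...     Y=Y,
...     Yvar=Yvar,
... )
>>> # the dominated point [0.5, 0.5] should not appear in frontier_Y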

ax.models.torch.botorch_moo_defaults.scipy_optimizer_list(acq_function_list: list[AcquisitionFunction], bounds: Tensor, inequality_constraints: list[tuple[Tensor, Tensor, float]] | None = None, fixed_features: dict[int, float] | None = None, rounding_func: Callable[[Tensor], Tensor] | None = None, num_restarts: int = 20, raw_samples: int | None = None, options: dict[str, bool | float | int | str] | None = None) tuple[Tensor, Tensor][source]

Sequential optimizer using scipy’s minimize module on a numpy adaptor.

The ith acquisition in the sequence uses the ith given acquisition_function.

Parameters:
  • acq_function_list – A list of botorch AcquisitionFunctions, optimized sequentially.

  • bounds – A 2 x d-dim tensor, where bounds[0] (bounds[1]) are the lower (upper) bounds of the feasible hyperrectangle.

  • inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • rounding_func – A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).

Returns:

2-element tuple containing

  • An n x d-dim tensor of generated candidates, where n is the number of acquisition functions in acq_function_list.

  • An n-dim tensor of conditional acquisition values, where the i-th element is the expected acquisition value conditional on having observed candidates 0,1,…,i-1.

ax.models.torch.botorch_modular.acquisition module

class ax.models.torch.botorch_modular.acquisition.Acquisition(surrogate: Surrogate, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig, botorch_acqf_class: type[AcquisitionFunction], options: dict[str, Any] | None = None)[source]

Bases: Base

All classes in ‘botorch_modular’ directory are under construction, incomplete, and should be treated as alpha versions only.

Ax wrapper for the BoTorch AcquisitionFunction; a subcomponent of BoTorchModel that is not meant to be used outside of it.

Parameters:
  • surrogate – The Surrogate model, with which this acquisition function will be used.

  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

  • botorch_acqf_class – Type of BoTorch AcquisitionFunction that should be used.

  • options – Optional mapping of kwargs to the underlying Acquisition Function in BoTorch.

acqf: AcquisitionFunction
property botorch_acqf_class: type[AcquisitionFunction]

BoTorch AcquisitionFunction class underlying this Acquisition.

property device: device | None

Torch device type of the tensors in the training data used in the model, of which this Acquisition is a subcomponent.

property dtype: dtype | None

Torch data type of the tensors in the training data used in the model, of which this Acquisition is a subcomponent.

evaluate(X: Tensor) Tensor[source]

Evaluate the acquisition function on the candidate set X.

Parameters:

X – A batch_shape x q x d-dim Tensor of t-batches with q d-dim design points each.

Returns:

A batch_shape’-dim Tensor of acquisition values at the given design points X, where batch_shape’ is the broadcasted batch shape of model and input X.

get_botorch_objective_and_transform(botorch_acqf_class: type[AcquisitionFunction], model: Model, objective_weights: Tensor, objective_thresholds: Tensor | None = None, outcome_constraints: tuple[Tensor, Tensor] | None = None, X_observed: Tensor | None = None, risk_measure: RiskMeasureMCObjective | None = None) tuple[MCAcquisitionObjective | None, PosteriorTransform | None][source]
property objective_thresholds: Tensor | None

The objective thresholds for all outcomes.

For non-objective outcomes, the objective thresholds are nans.

property objective_weights: Tensor | None

The objective weights for all outcomes.

optimize(n: int, search_space_digest: SearchSpaceDigest, inequality_constraints: list[tuple[Tensor, Tensor, float]] | None = None, fixed_features: dict[int, float] | None = None, rounding_func: Callable[[Tensor], Tensor] | None = None, optimizer_options: dict[str, Any] | None = None) tuple[Tensor, Tensor, Tensor][source]

Generate a set of candidates via multi-start optimization. Obtains candidates and their associated acquisition function values.

Parameters:
  • n – The number of candidates to generate.

  • search_space_digest – A SearchSpaceDigest object containing search space properties, e.g. bounds for optimization.

  • inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • rounding_func – A function that post-processes an optimization result appropriately. This is typically passed down from ModelBridge to ensure compatibility of the candidates with Ax transforms. For additional post processing, use the post_processing_func option in optimizer_options.

  • optimizer_options – Options for the optimizer function, e.g. sequential or raw_samples. This can also include a post_processing_func which is applied to the candidates before the rounding_func. post_processing_func can be used to support more customized options that typically only exist in MBM, such as BoTorch transforms. See the docstring of TorchOptConfig for more information on passing down these options while constructing a generation strategy.

Returns:

A three-element tuple containing an n x d-dim tensor of generated candidates, a tensor with the associated acquisition values, and a tensor with the weight for each candidate.

options: dict[str, Any]
surrogate: Surrogate

ax.models.torch.randomforest module

class ax.models.torch.randomforest.RandomForest(max_features: str | None = 'sqrt', num_trees: int = 500)[source]

Bases: TorchModel

A Random Forest model.

Uses a parametric bootstrap to handle uncertainty in Y.

Can be used to fit data, make predictions, and do cross validation; however, gen is not implemented, so this model cannot generate new points.

Parameters:
  • max_features – Maximum number of features at each split. With one-hot encoding, this should be set to None. Defaults to “sqrt”, which is Breiman’s version of Random Forest.

  • num_trees – Number of trees.

cross_validate(datasets: list[SupervisedDataset], X_test: Tensor, use_posterior_predictive: bool = False) tuple[Tensor, Tensor][source]

Do cross validation with the given training and test sets.

Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • X_test – (j x d) tensor of the j points at which to make predictions.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

fit(datasets: list[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) None[source]

Fit model to m outcomes.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

predict(X: Tensor) tuple[Tensor, Tensor][source]

Predict

Parameters:

X – (j x d) tensor of the j points at which to make predictions.

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
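
A fit/predict sketch under assumed data (SupervisedDataset here is BoTorch's dataset container; its constructor arguments may vary across versions):

>>> import torch
>>> from botorch.utils.datasets import SupervisedDataset
>>> from ax.core.search_space import SearchSpaceDigest
>>> from ax.models.torch.randomforest import RandomForest
>>> X, Y = torch.rand(20, 3), torch.rand(20, 1)
>>> dataset = SupervisedDataset(
...     X=X,
...     Y=Y,
...     Yvar=torch.full_like(Y, 0.01),  # assumed observation variances
...     feature_names=["x1", "x2", "x3"],
...     outcome_names=["m1"],
... )
>>> digest = SearchSpaceDigest(
...     feature_names=["x1", "x2", "x3"], bounds=[(0.0, 1.0)] * 3
... )
>>> rf = RandomForest(num_trees=50)
>>> rf.fit(datasets=[dataset], search_space_digest=digest)
>>> mean, cov = rf.predict(torch.rand(5, 3))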

ax.models.torch.botorch_modular.model module

class ax.models.torch.botorch_modular.model.BoTorchModel(surrogate_spec: SurrogateSpec | None = None, surrogate_specs: Mapping[str, SurrogateSpec] | None = None, surrogate: Surrogate | None = None, acquisition_class: type[Acquisition] | None = None, acquisition_options: dict[str, Any] | None = None, botorch_acqf_class: type[AcquisitionFunction] | None = None, refit_on_cv: bool = False, warm_start_refit: bool = True)[source]

Bases: TorchModel, Base

All classes in ‘botorch_modular’ directory are under construction, incomplete, and should be treated as alpha versions only.

Modular Model class for combining BoTorch subcomponents in Ax. Specified via Surrogate and Acquisition, which wrap BoTorch Model and AcquisitionFunction, respectively, for convenient use in Ax.

Parameters:
  • acquisition_class – Type of Acquisition to be used in this model, auto-selected based on experiment and data if not specified.

  • acquisition_options – Optional dict of kwargs, passed to the constructor of BoTorch AcquisitionFunction.

  • botorch_acqf_class – Type of AcquisitionFunction to be used in this model, auto-selected based on experiment and data if not specified.

  • surrogate_spec – An optional SurrogateSpec object specifying how to construct the Surrogate and the underlying BoTorch Model.

  • surrogate_specs – DEPRECATED. Please use surrogate_spec instead.

  • surrogate – In lieu of SurrogateSpec, an instance of Surrogate may be provided. In most cases, surrogate_spec should be used instead.

  • refit_on_cv – Whether to reoptimize model parameters during a call to BoTorchModel.cross_validate.

  • warm_start_refit – Whether to load parameters from either the provided state dict or the state dict of the current BoTorch Model during refitting. If False, model parameters will be reoptimized from scratch on refit. NOTE: This setting is ignored during cross_validate if refit_on_cv is False.
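
For example, a minimal construction sketch (assuming a recent BoTorch that provides qLogNoisyExpectedImprovement):

>>> from ax.models.torch.botorch_modular.model import BoTorchModel
>>> from ax.models.torch.botorch_modular.surrogate import SurrogateSpec
>>> from botorch.acquisition.logei import qLogNoisyExpectedImprovement
>>> model = BoTorchModel(
...     surrogate_spec=SurrogateSpec(),
...     botorch_acqf_class=qLogNoisyExpectedImprovement,
... )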

property Xs: list[Tensor]

A list of tensors, each of shape batch_shape x n_i x d, where n_i is the number of training inputs for the i-th model.

NOTE: This is an accessor for self.surrogate.Xs and returns it unchanged.

acquisition_class: type[Acquisition]
acquisition_options: dict[str, Any]
best_point(search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) Tensor | None[source]

Identify the current best point, satisfying the constraints in the same format as to gen.

Return None if no such point can be identified.

Parameters:
  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

d-tensor of the best point.

property botorch_acqf_class: type[AcquisitionFunction]

BoTorch AcquisitionFunction class, associated with this model. Raises an error if one is not yet set.

cross_validate(datasets: Sequence[SupervisedDataset], X_test: Tensor, search_space_digest: SearchSpaceDigest, use_posterior_predictive: bool = False, **additional_model_inputs: Any) tuple[Tensor, Tensor][source]

Do cross validation with the given training and test sets.

Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • X_test – (j x d) tensor of the j points at which to make predictions.

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in X.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

2-element tuple containing

  • (j x m) tensor of outcome predictions at X.

  • (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].

property device: device

Torch device type of the tensors in the training data used in the model.

property dtype: dtype

Torch data type of the tensors in the training data used in the model.

evaluate_acquisition_function(X: Tensor, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig, acq_options: dict[str, Any] | None = None) Tensor[source]

Evaluate the acquisition function on the candidate set X.

Parameters:
  • X – (j x d) tensor of the j points at which to evaluate the acquisition function.

  • search_space_digest – A dataclass used to compactly represent a search space.

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

  • acq_options – Keyword arguments used to construct the acquisition function.

Returns:

A single-element tensor with the acquisition value for these points.

feature_importances() ndarray[Any, dtype[_ScalarType_co]][source]

Compute feature importances from the model.

This assumes that we can get model lengthscales from either covar_module.base_kernel.lengthscale or covar_module.lengthscale.

Returns:

The feature importances as a numpy array of size len(metrics) x 1 x dim where each row sums to 1.

fit(datasets: Sequence[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None, state_dict: OrderedDict[str, Tensor] | None = None, refit: bool = True, **additional_model_inputs: Any) None[source]

Fit model to m outcomes.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one or more outcomes.

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

  • state_dict – An optional model state dict for the underlying Surrogate. Primarily used in BoTorchModel.cross_validate.

  • refit – Whether to re-optimize model parameters.

  • additional_model_inputs – Additional kwargs to pass to the model input constructor in Surrogate.fit.

gen(n: int, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) TorchGenResults[source]

Generate new candidates.

Parameters:
  • n – Number of candidates to generate.

  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

A TorchGenResults container.

predict(X: Tensor, use_posterior_predictive: bool = False) tuple[Tensor, Tensor][source]

Predicts, potentially from multiple surrogates.

Parameters:
  • X – (n x d) Tensor of input locations.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns: Tuple of tensors: (n x m) mean, (n x m x m) covariance.

property search_space_digest: SearchSpaceDigest
property surrogate: Surrogate

Returns the Surrogate, if it has been constructed.

surrogate_spec: SurrogateSpec | None

ax.models.torch.botorch_modular.multi_fidelity module

ax.models.torch.botorch_modular.optimizer_argparse module

ax.models.torch.botorch_modular.optimizer_argparse.optimizer_argparse(acqf: AcquisitionFunction, *, optimizer: str, optimizer_options: dict[str, Any] | None = None) dict[str, Any][source]

Extract the kwargs to be passed to a BoTorch optimizer.

Parameters:
  • acqf – The acquisition function being optimized.

  • optimizer – The optimizer to parse args for. Typically chosen by Acquisition.optimize. Must be one of: “optimize_acqf”, “optimize_acqf_discrete_local_search”, “optimize_acqf_discrete”, “optimize_acqf_homotopy”, “optimize_acqf_mixed”, “optimize_acqf_mixed_alternating”.

  • optimizer_options

    An optional dictionary of optimizer options (some of these under an options dictionary); default values will be used where not specified. See the docstrings in botorch/optim/optimize.py for supported options.

    Example:

    >>> optimizer_options = {
    ...     "num_restarts": 20,
    ...     "options": {
    ...         "maxiter": 200,
    ...         "batch_limit": 5,
    ...     },
    ...     "retry_on_optimization_warning": False,
    ... }
    

ax.models.torch.botorch_modular.sebo module

ax.models.torch.botorch_modular.sebo.L1_norm_func(X: Tensor, init_point: Tensor) Tensor[source]

L1_norm maps a batch_shape x n x d-dim input tensor X to a batch_shape x n x 1-dim tensor of L1 norms. To be used for constructing a GenericDeterministicModel.

class ax.models.torch.botorch_modular.sebo.SEBOAcquisition(surrogate: Surrogate, search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig, botorch_acqf_class: type[AcquisitionFunction], options: dict[str, Any] | None = None)[source]

Bases: Acquisition

Implement the acquisition function of Sparsity Exploring Bayesian Optimization (SEBO).

SEBO is a hyperparameter-free method that simultaneously maximizes a target objective and sparsity. When the L0 norm is used, SEBO employs a novel differentiable relaxation based on homotopy continuation to efficiently optimize for sparsity.

optimize(n: int, search_space_digest: SearchSpaceDigest, inequality_constraints: list[tuple[Tensor, Tensor, float]] | None = None, fixed_features: dict[int, float] | None = None, rounding_func: Callable[[Tensor], Tensor] | None = None, optimizer_options: dict[str, Any] | None = None) tuple[Tensor, Tensor, Tensor][source]

Generate a set of candidates via multi-start optimization. Obtains candidates and their associated acquisition function values.

Parameters:
  • n – The number of candidates to generate.

  • search_space_digest – A SearchSpaceDigest object containing search space properties, e.g. bounds for optimization.

  • inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.

  • fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

  • rounding_func – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations).

  • optimizer_options – Options for the optimizer function, e.g. sequential or raw_samples.

Returns:

A three-element tuple containing an n x d-dim tensor of generated candidates, a tensor with the associated acquisition values, and a tensor with the weight for each candidate.

ax.models.torch.botorch_modular.sebo.clamp_to_target(X: Tensor, target_point: Tensor, clamp_tol: float) Tensor[source]

Clamp generated candidates that lie within clamp_tol of the target point to the target point.

Parameters:
  • X – A batch_shape x n x d-dim input tensor X.

  • target_point – A tensor of size d corresponding to the target point.

  • clamp_tol – The clamping tolerance. Any value within clamp_tol of the target_point will be clamped to the target_point.
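
A small sketch: the first coordinate lies within clamp_tol of the target and is clamped, the second is not:

>>> import torch
>>> from ax.models.torch.botorch_modular.sebo import clamp_to_target
>>> X = torch.tensor([[[0.05, 0.5]]])  # batch_shape x n x d
>>> clamped = clamp_to_target(X, target_point=torch.zeros(2), clamp_tol=0.1)
>>> # expected result: tensor([[[0.0, 0.5]]])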

ax.models.torch.botorch_modular.sebo.get_batch_initial_conditions(acq_function: AcquisitionFunction, raw_samples: int, X_pareto: Tensor, target_point: Tensor, bounds: Tensor, num_restarts: int = 20) Tensor[source]

Generate starting points for the SEBO acquisition function optimization.

ax.models.torch.botorch_modular.surrogate module

class ax.models.torch.botorch_modular.surrogate.Surrogate(surrogate_spec: ~ax.models.torch.botorch_modular.surrogate.SurrogateSpec | None = None, botorch_model_class: type[~botorch.models.model.Model] | None = None, model_options: dict[str, ~typing.Any] | None = None, mll_class: type[~gpytorch.mlls.marginal_log_likelihood.MarginalLogLikelihood] = <class 'gpytorch.mlls.exact_marginal_log_likelihood.ExactMarginalLogLikelihood'>, mll_options: dict[str, ~typing.Any] | None = None, outcome_transform_classes: list[type[~botorch.models.transforms.outcome.OutcomeTransform]] | None = None, outcome_transform_options: dict[str, dict[str, ~typing.Any]] | None = None, input_transform_classes: list[type[~botorch.models.transforms.input.InputTransform]] | None = None, input_transform_options: dict[str, dict[str, ~typing.Any]] | None = None, covar_module_class: type[~gpytorch.kernels.kernel.Kernel] | None = None, covar_module_options: dict[str, ~typing.Any] | None = None, likelihood_class: type[~gpytorch.likelihoods.likelihood.Likelihood] | None = None, likelihood_options: dict[str, ~typing.Any] | None = None, allow_batched_models: bool = True, refit_on_cv: bool = False, metric_to_best_model_config: dict[tuple[str], ~ax.models.torch.botorch_modular.utils.ModelConfig] | None = None)[source]

Bases: Base

All classes in ‘botorch_modular’ directory are under construction, incomplete, and should be treated as alpha versions only.

Ax wrapper for BoTorch Model, subcomponent of BoTorchModel and is not meant to be used outside of it.

Parameters:
  • botorch_model_class – Model class to be used as the underlying BoTorch model. If None is provided, a model class (either one for all outcomes or a ModelList with separate models for each outcome) will be selected automatically based on the datasets at construct time. This argument is deprecated in favor of model_configs.

  • model_options – Dictionary of options / kwargs for the BoTorch Model constructed during Surrogate.fit. Note that the corresponding attribute will later be updated to include any additional kwargs passed into BoTorchModel.fit. This argument is deprecated in favor of model_configs.

  • mll_class – MarginalLogLikelihood class to use for model-fitting. This argument is deprecated in favor of model_configs.

  • mll_options – Dictionary of options / kwargs for the MLL. This argument is deprecated in favor of model_configs.

  • outcome_transform_classes – List of BoTorch outcome transforms classes. Passed down to the BoTorch Model. Multiple outcome transforms can be chained together using ChainedOutcomeTransform. This argument is deprecated in favor of model_configs.

  • outcome_transform_options – Outcome transform classes kwargs. The keys are class string names and the values are dictionaries of outcome transform kwargs. For example,

    outcome_transform_classes = [Standardize]
    outcome_transform_options = {"Standardize": {"m": 1}}

    For more options see botorch/models/transforms/outcome.py. This argument is deprecated in favor of model_configs.

  • input_transform_classes – List of BoTorch input transforms classes. Passed down to the BoTorch Model. Multiple input transforms will be chained together using ChainedInputTransform. This argument is deprecated in favor of model_configs.

  • input_transform_options – Input transform classes kwargs. The keys are class string names and the values are dictionaries of input transform kwargs. For example,

    input_transform_classes = [Normalize, Round]
    input_transform_options = {
        "Normalize": {"d": 3},
        "Round": {"integer_indices": [0], "categorical_features": {1: 2}},
    }

    For more input options see botorch/models/transforms/input.py. This argument is deprecated in favor of model_configs.

  • covar_module_class – Covariance module class. This gets initialized after parsing the covar_module_options in covar_module_argparse, and gets passed to the model constructor as covar_module. This argument is deprecated in favor of model_configs.

  • covar_module_options – Covariance module kwargs. This argument is deprecated in favor of model_configs.

  • likelihood_class – Likelihood class. This gets initialized with likelihood_options and gets passed to the model constructor. This argument is deprecated in favor of model_configs.

  • likelihood_options – Likelihood options. This argument is deprecated in favor of model_configs.

  • allow_batched_models – Set to true to fit the models in a batch if supported. Set to false to fit individual models to each metric in a loop.

  • refit_on_cv – Whether to refit the model on the cross-validation folds.

  • metric_to_best_model_config – Dictionary mapping a tuple of metric names to the best model config. This is only used by BotorchModel.cross_validate and for logging what model was used.

property Xs: list[Tensor]
best_in_sample_point(search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig, options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[Tensor, float][source]

Finds the best observed point and the corresponding observed outcome values.

best_out_of_sample_point(search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig, options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[Tensor, Tensor][source]

Finds the best predicted point and the corresponding value of the appropriate best point acquisition function.

Parameters:
  • search_space_digest – A SearchSpaceDigest.

  • torch_opt_config – A TorchOptConfig; non-None fixed_features is not supported.

  • options – Optional. If present, seed_inner (default None) and qmc (default True) will be parsed from options; any other keys will be ignored.

Returns:

A two-tuple (candidate, acqf_value), where candidate is a 1d Tensor of the best predicted point and acqf_value is a scalar (0d) Tensor of the acquisition function value at the best point.

clone_reset() Surrogate[source]
compute_diagnostics() dict[str, Any][source]

Computes model diagnostics like cross-validation measure of fit, etc.

cross_validate(dataset: SupervisedDataset, model_config: ModelConfig, default_botorch_model_class: type[Model], search_space_digest: SearchSpaceDigest, state_dict: OrderedDict[str, Tensor] | None = None) float[source]

Cross-validation for a single outcome.

Parameters:
  • dataset – Training data for the model (for one outcome for the default Surrogate, with the exception of the batched multi-output case, where training data is formatted with just one X and concatenated Ys).

  • model_config – The model_config.

  • default_botorch_model_class – The default Model class to be used as the underlying BoTorch model, if the model_config does not specify one.

  • search_space_digest – Search space digest used to set up model arguments.

  • state_dict – Optional state dict to load.

Returns:

The eval criterion value for the given model config.

property device: device
property dtype: dtype
fit(datasets: Sequence[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None, state_dict: OrderedDict[str, Tensor] | None = None, refit: bool = True) None[source]

Fits the underlying BoTorch Model to m outcomes.

NOTE: The state_dict and refit keyword arguments control how the underlying BoTorch Model will be fit: whether its parameters will be reoptimized and whether it will be warm-started from a given state.

There are three possibilities:

  • fit(state_dict=None): fit model from scratch (optimize model parameters and set its training data used for inference),

  • fit(state_dict=some_state_dict, refit=True): warm-start refit with a state dict of parameters (still re-optimize model parameters and set the training data),

  • fit(state_dict=some_state_dict, refit=False): load model parameters without refitting, but set new training data (used in cross-validation, for example).

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome), to be passed to Model.construct_inputs in BoTorch.

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

  • state_dict – Optional state dict to load.

  • refit – Whether to re-optimize model parameters.
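
For instance, a warm-start refit sketch, assuming surrogate is an already-fitted Surrogate and datasets/digest are the training data and search space digest used above:

>>> state_dict = surrogate.model.state_dict()  # parameters of the current model
>>> surrogate.fit(
...     datasets=datasets,
...     search_space_digest=digest,
...     state_dict=state_dict,
...     refit=True,
... )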

property model: Model
model_selection(dataset: SupervisedDataset, model_configs: list[ModelConfig], default_botorch_model_class: type[Model], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) tuple[Model, ModelConfig][source]

Perform model selection over a list of model configs.

This selects the best botorch Model across the provided model configs based on the SurrogateSpec’s eval_criterion. The eval_criterion is computed using LOOCV on the provided dataset. The best model config is saved in self.metric_to_best_model_config for future use (e.g. for using cross-validation at the ModelBridge level).

Parameters:
  • dataset – Training data for the model

  • model_configs – The model_configs.

  • default_botorch_model_class – The default Model class to be used as the underlying BoTorch model if no botorch_model_class is specified in the model_config.

  • search_space_digest – Search space digest.

  • candidate_metadata – Model-produced metadata for candidates.

Returns:

  • The best model according to the eval_criterion.

  • The ModelConfig for the best model.

Return type:

A two-element tuple containing

property outcomes: list[str]
pareto_frontier() tuple[Tensor, Tensor][source]

For multi-objective optimization, retrieve Pareto frontier instead of best point.

Returns: A two-tuple of:
  • tensor of points in the feature space,

  • tensor of corresponding (multiple) outcomes.

predict(X: Tensor, use_posterior_predictive: bool = False) tuple[Tensor, Tensor][source]

Predicts outcomes given an input tensor.

Parameters:
  • X – A n x d tensor of input parameters.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

A two-element tuple containing

  • The predicted posterior mean as an n x o-dim tensor.

  • The predicted posterior covariance as an n x o x o-dim tensor.

property training_data: list[SupervisedDataset]
class ax.models.torch.botorch_modular.surrogate.SurrogateSpec(botorch_model_class: type[~botorch.models.model.Model] | None = None, botorch_model_kwargs: dict[str, ~typing.Any] = <factory>, mll_class: type[~gpytorch.mlls.marginal_log_likelihood.MarginalLogLikelihood] = <class 'gpytorch.mlls.exact_marginal_log_likelihood.ExactMarginalLogLikelihood'>, mll_kwargs: dict[str, ~typing.Any] = <factory>, covar_module_class: type[~gpytorch.kernels.kernel.Kernel] | None = None, covar_module_kwargs: dict[str, ~typing.Any] | None = None, likelihood_class: type[~gpytorch.likelihoods.likelihood.Likelihood] | None = None, likelihood_kwargs: dict[str, ~typing.Any] | None = None, input_transform_classes: list[type[~botorch.models.transforms.input.InputTransform]] | None = None, input_transform_options: dict[str, dict[str, ~typing.Any]] | None = None, outcome_transform_classes: list[type[~botorch.models.transforms.outcome.OutcomeTransform]] | None = None, outcome_transform_options: dict[str, dict[str, ~typing.Any]] | None = None, allow_batched_models: bool = True, model_configs: list[~ax.models.torch.botorch_modular.utils.ModelConfig] = <factory>, metric_to_model_configs: dict[str, list[~ax.models.torch.botorch_modular.utils.ModelConfig]] = <factory>, eval_criterion: str = 'Rank correlation', outcomes: list[str] = <factory>, use_posterior_predictive: bool = False)[source]

Bases: object

Fields in the SurrogateSpec dataclass correspond to arguments in Surrogate.__init__, except for outcomes which is used to specify which outcomes the Surrogate is responsible for modeling. When BotorchModel.fit is called, these fields will be used to construct the requisite Surrogate objects. If outcomes is left empty then no outcomes will be fit to the Surrogate.

Parameters:
  • botorch_model_class – Model class to be used as the underlying BoTorch model. If None is provided, a model class (either one for all outcomes or a ModelList with separate models for each outcome) will be selected automatically based off the datasets at construct time. This argument is deprecated in favor of model_configs.

  • model_options – Dictionary of options / kwargs for the BoTorch Model constructed during Surrogate.fit. Note that the corresponding attribute will later be updated to include any additional kwargs passed into BoTorchModel.fit. This argument is deprecated in favor of model_configs.

  • mll_class – MarginalLogLikelihood class to use for model-fitting. This argument is deprecated in favor of model_configs.

  • mll_options – Dictionary of options / kwargs for the MLL. This argument is deprecated in favor of model_configs.

  • outcome_transform_classes – List of BoTorch outcome transforms classes. Passed down to the BoTorch Model. Multiple outcome transforms can be chained together using ChainedOutcomeTransform. This argument is deprecated in favor of model_configs.

  • outcome_transform_options

    Outcome transform classes kwargs. The keys are class string names and the values are dictionaries of outcome transform kwargs. For example:

        outcome_transform_classes = [Standardize]
        outcome_transform_options = {"Standardize": {"m": 1}}

    For more options see botorch/models/transforms/outcome.py. This argument is deprecated in favor of model_configs.

  • input_transform_classes – List of BoTorch input transforms classes. Passed down to the BoTorch Model. Multiple input transforms will be chained together using ChainedInputTransform. This argument is deprecated in favor of model_configs.

  • input_transform_options

    Input transform classes kwargs. The keys are class string names and the values are dictionaries of input transform kwargs. For example:

        input_transform_classes = [Normalize, Round]
        input_transform_options = {
            "Normalize": {"d": 3},
            "Round": {"integer_indices": [0], "categorical_features": {1: 2}},
        }

    For more input options see botorch/models/transforms/input.py. This argument is deprecated in favor of model_configs.

  • covar_module_class – Covariance module class. This gets initialized after parsing the covar_module_options in covar_module_argparse, and gets passed to the model constructor as covar_module. This argument is deprecated in favor of model_configs.

  • covar_module_options – Covariance module kwargs. This argument is deprecated in favor of model_configs.

  • likelihood – Likelihood class. This gets initialized with likelihood_options and gets passed to the model constructor. This argument is deprecated in favor of model_configs.

  • likelihood_options – Likelihood options. This argument is deprecated in favor of model_configs.

  • model_configs – List of model configs. Each model config is a specification of a model. These should be used in favor of the above deprecated arguments.

  • metric_to_model_configs – Dictionary mapping metric names to a list of model configs for that metric.

  • eval_criterion – The name of the evaluation criterion to use. The available criteria are defined in ax.utils.stats.model_fit_stats. Defaults to “Rank correlation”.

  • outcomes – List of outcome names.

  • use_posterior_predictive – Whether to use posterior predictive in cross-validation.

allow_batched_models: bool = True
botorch_model_class: type[Model] | None = None
botorch_model_kwargs: dict[str, Any]
covar_module_class: type[Kernel] | None = None
covar_module_kwargs: dict[str, Any] | None = None
eval_criterion: str = 'Rank correlation'
input_transform_classes: list[type[InputTransform]] | None = None
input_transform_options: dict[str, dict[str, Any]] | None = None
likelihood_class: type[Likelihood] | None = None
likelihood_kwargs: dict[str, Any] | None = None
metric_to_model_configs: dict[str, list[ModelConfig]]
mll_class

alias of ExactMarginalLogLikelihood

mll_kwargs: dict[str, Any]
model_configs: list[ModelConfig]
outcome_transform_classes: list[type[OutcomeTransform]] | None = None
outcome_transform_options: dict[str, dict[str, Any]] | None = None
outcomes: list[str]
use_posterior_predictive: bool = False
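
For illustration, a minimal sketch of constructing a SurrogateSpec through the non-deprecated model_configs path. The choice of SingleTaskGP and of ScaleMaternKernel (documented below) is arbitrary.

    from botorch.models import SingleTaskGP
    from ax.models.torch.botorch_modular.kernels import ScaleMaternKernel
    from ax.models.torch.botorch_modular.surrogate import SurrogateSpec
    from ax.models.torch.botorch_modular.utils import ModelConfig

    # Two candidate model configs; model selection picks between them
    # via LOOCV using the eval_criterion.
    spec = SurrogateSpec(
        model_configs=[
            ModelConfig(botorch_model_class=SingleTaskGP, name="default"),
            ModelConfig(
                botorch_model_class=SingleTaskGP,
                covar_module_class=ScaleMaternKernel,
                name="scale-matern",
            ),
        ],
        eval_criterion="Rank correlation",
    )
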
ax.models.torch.botorch_modular.surrogate.get_model_config_from_deprecated_args(botorch_model_class: type[Model] | None, model_options: dict[str, Any] | None, mll_class: type[MarginalLogLikelihood] | None, mll_options: dict[str, Any] | None, outcome_transform_classes: list[type[OutcomeTransform]] | None, outcome_transform_options: dict[str, dict[str, Any]] | None, input_transform_classes: list[type[InputTransform]] | None, input_transform_options: dict[str, dict[str, Any]] | None, covar_module_class: type[Kernel] | None, covar_module_options: dict[str, Any] | None, likelihood_class: type[Likelihood] | None, likelihood_options: dict[str, Any] | None) ModelConfig[source]

Construct a ModelConfig from deprecated arguments.

ax.models.torch.botorch_modular.utils module

class ax.models.torch.botorch_modular.utils.ModelConfig(botorch_model_class: type[~botorch.models.model.Model] | None = None, model_options: dict[str, ~typing.Any] = <factory>, mll_class: type[~gpytorch.mlls.marginal_log_likelihood.MarginalLogLikelihood] = <class 'gpytorch.mlls.exact_marginal_log_likelihood.ExactMarginalLogLikelihood'>, mll_options: dict[str, ~typing.Any] = <factory>, input_transform_classes: list[type[~botorch.models.transforms.input.InputTransform]] | None = None, input_transform_options: dict[str, dict[str, ~typing.Any]] | None = <factory>, outcome_transform_classes: list[type[~botorch.models.transforms.outcome.OutcomeTransform]] | None = None, outcome_transform_options: dict[str, dict[str, ~typing.Any]] = <factory>, covar_module_class: type[~gpytorch.kernels.kernel.Kernel] | None = None, covar_module_options: dict[str, ~typing.Any] = <factory>, likelihood_class: type[~gpytorch.likelihoods.likelihood.Likelihood] | None = None, likelihood_options: dict[str, ~typing.Any] = <factory>, name: str | None = None)[source]

Bases: object

Configuration for the BoTorch Model used in Surrogate.

Parameters:
  • botorch_model_class – Model class to be used as the underlying BoTorch model. If None is provided, a model class (either one for all outcomes or a ModelList with separate models for each outcome) will be selected automatically based off the datasets at construct time.

  • model_options – Dictionary of options / kwargs for the BoTorch Model constructed during Surrogate.fit. Note that the corresponding attribute will later be updated to include any additional kwargs passed into BoTorchModel.fit.

  • mll_class – MarginalLogLikelihood class to use for model-fitting.

  • mll_options – Dictionary of options / kwargs for the MLL.

  • outcome_transform_classes – List of BoTorch outcome transforms classes. Passed down to the BoTorch Model. Multiple outcome transforms can be chained together using ChainedOutcomeTransform.

  • outcome_transform_options

    Outcome transform classes kwargs. The keys are class string names and the values are dictionaries of outcome transform kwargs. For example:

        outcome_transform_classes = [Standardize]
        outcome_transform_options = {"Standardize": {"m": 1}}

    For more options see botorch/models/transforms/outcome.py.

  • input_transform_classes – List of BoTorch input transforms classes. Passed down to the BoTorch Model. Multiple input transforms will be chained together using ChainedInputTransform.

  • input_transform_options

    Input transform classes kwargs. The keys are class string names and the values are dictionaries of input transform kwargs. For example:

        input_transform_classes = [Normalize, Round]
        input_transform_options = {
            "Normalize": {"d": 3},
            "Round": {"integer_indices": [0], "categorical_features": {1: 2}},
        }

    For more input options see botorch/models/transforms/input.py.

  • covar_module_class – Covariance module class. This gets initialized after parsing the covar_module_options in covar_module_argparse, and gets passed to the model constructor as covar_module.

  • covar_module_options – Covariance module kwargs.

  • likelihood – Likelihood class. This gets initialized with likelihood_options and gets passed to the model constructor.

  • likelihood_options – Likelihood options.

  • name – Name of the model config. This is used to identify the model config.

botorch_model_class: type[Model] | None = None
covar_module_class: type[Kernel] | None = None
covar_module_options: dict[str, Any]
input_transform_classes: list[type[InputTransform]] | None = None
input_transform_options: dict[str, dict[str, Any]] | None
likelihood_class: type[Likelihood] | None = None
likelihood_options: dict[str, Any]
mll_class

alias of ExactMarginalLogLikelihood

mll_options: dict[str, Any]
model_options: dict[str, Any]
name: str | None = None
outcome_transform_classes: list[type[OutcomeTransform]] | None = None
outcome_transform_options: dict[str, dict[str, Any]]
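
For illustration, a minimal sketch of a ModelConfig that attaches input and outcome transforms; the transform choices mirror the examples in the parameter descriptions above and are not prescriptive.

    from botorch.models import SingleTaskGP
    from botorch.models.transforms.input import Normalize
    from botorch.models.transforms.outcome import Standardize
    from ax.models.torch.botorch_modular.utils import ModelConfig

    config = ModelConfig(
        botorch_model_class=SingleTaskGP,
        input_transform_classes=[Normalize],
        input_transform_options={"Normalize": {"d": 3}},      # 3 input features
        outcome_transform_classes=[Standardize],
        outcome_transform_options={"Standardize": {"m": 1}},  # 1 outcome
        name="normalized-standardized-gp",
    )
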
ax.models.torch.botorch_modular.utils.check_outcome_dataset_match(outcome_names: Sequence[str], datasets: Sequence[SupervisedDataset], exact_match: bool) None[source]

Check that the given outcome names match those of datasets.

Based on exact_match, we either require the outcome names to be a subset of all outcomes or require them to be the same.

Also checks that there are no duplicates in outcome names.

Parameters:
  • outcome_names – A list of outcome names.

  • datasets – A list of SupervisedDataset objects.

  • exact_match – If True, outcome_names must be the same as the union of outcome names of the datasets. Otherwise, we check that the outcome_names are a subset of all outcomes.

Raises:

ValueError – If there is no match.

ax.models.torch.botorch_modular.utils.choose_botorch_acqf_class(torch_opt_config: TorchOptConfig) type[AcquisitionFunction][source]

Chooses a BoTorch AcquisitionFunction class.

Current logic relies on the TorchOptConfig.is_moo field to determine whether to use qLogNEHVI (for MOO) or qLogNEI (for SOO).

ax.models.torch.botorch_modular.utils.choose_model_class(datasets: Sequence[SupervisedDataset], search_space_digest: SearchSpaceDigest) type[Model][source]

Chooses a BoTorch Model using the given data (currently just Yvars) and its properties (information about task and fidelity features).

Parameters:
  • datasets – A list of SupervisedDataset objects containing the training data (currently just the Yvars are used).

  • search_space_digest – A SearchSpaceDigest containing metadata on the features in the datasets, including the task and fidelity features.

Returns:

A BoTorch Model class.

ax.models.torch.botorch_modular.utils.construct_acquisition_and_optimizer_options(acqf_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None], model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None], dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]][source]

Extract acquisition and optimizer options from model_gen_options.

ax.models.torch.botorch_modular.utils.convert_to_block_design(datasets: Sequence[SupervisedDataset], force: bool = False) list[SupervisedDataset][source]
ax.models.torch.botorch_modular.utils.fit_botorch_model(model: Model, mll_class: type[MarginalLogLikelihood], mll_options: dict[str, Any] | None = None) None[source]

Fit a BoTorch model.
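
A minimal self-contained sketch, assuming a SingleTaskGP fit on toy data (the data and model choice are illustrative only):

    import torch
    from botorch.models import SingleTaskGP
    from gpytorch.mlls import ExactMarginalLogLikelihood
    from ax.models.torch.botorch_modular.utils import fit_botorch_model

    # Toy training data: 10 points in 2 dimensions, one noisy outcome.
    X = torch.rand(10, 2, dtype=torch.float64)
    Y = X.sum(dim=-1, keepdim=True) + 0.01 * torch.randn(10, 1, dtype=torch.float64)

    model = SingleTaskGP(train_X=X, train_Y=Y)
    fit_botorch_model(model=model, mll_class=ExactMarginalLogLikelihood)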

ax.models.torch.botorch_modular.utils.get_subset_datasets(datasets: Sequence[SupervisedDataset], subset_outcome_names: Sequence[str]) list[SupervisedDataset][source]

Get the list of datasets corresponding to the given subset of outcome names. This is used to separate out datasets that are used by one surrogate.

Parameters:
  • datasets – A list of SupervisedDataset objects.

  • subset_outcome_names – A list of outcome names to get datasets for.

Returns:

A list of SupervisedDataset objects corresponding to the given subset of outcome names.

ax.models.torch.botorch_modular.utils.subset_state_dict(state_dict: OrderedDict[str, Tensor], submodel_index: int) OrderedDict[str, Tensor][source]

Get the state dict for a submodel from the state dict of a model list.

Parameters:
  • state_dict – A state dict.

  • submodel_index – The index of the submodel to extract.

Returns:

The state dict for the submodel.
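
A short sketch, assuming model_list is a fitted BoTorch ModelListGP built elsewhere and that its submodel keys follow the usual models.<i>. prefix convention (an assumption about the key layout):

    from ax.models.torch.botorch_modular.utils import subset_state_dict

    # `model_list` is a fitted botorch ModelListGP (assumption: built elsewhere).
    sub_sd = subset_state_dict(model_list.state_dict(), submodel_index=0)
    # The returned dict is keyed for the standalone submodel, so it can be
    # loaded directly:
    model_list.models[0].load_state_dict(sub_sd)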

ax.models.torch.botorch_modular.utils.use_model_list(datasets: Sequence[SupervisedDataset], botorch_model_class: type[Model], model_configs: list[ModelConfig] | None = None, metric_to_model_configs: dict[str, list[ModelConfig]] | None = None, allow_batched_models: bool = True) bool[source]

ax.models.torch.botorch_modular.kernels module

class ax.models.torch.botorch_modular.kernels.ScaleMaternKernel(ard_num_dims: int | None = None, batch_shape: Size | None = None, lengthscale_prior: Prior | None = None, outputscale_prior: Prior | None = None, lengthscale_constraint: Interval | None = None, outputscale_constraint: Interval | None = None, **kwargs: Any)[source]

Bases: ScaleKernel

class ax.models.torch.botorch_modular.kernels.TemporalKernel(dim: int, temporal_features: list[int], matern_ard_num_dims: int | None = None, batch_shape: Size | None = None, lengthscale_prior: Prior | None = None, temporal_lengthscale_prior: Prior | None = None, period_length_prior: Prior | None = None, fixed_period_length: float | None = None, outputscale_prior: Prior | None = None, lengthscale_constraint: Interval | None = None, outputscale_constraint: Interval | None = None, temporal_lengthscale_constraint: Interval | None = None, period_length_constraint: Interval | None = None, **kwargs: Any)[source]

Bases: ScaleKernel

A product kernel of a periodic kernel and a Matern kernel.

The periodic kernel computes the similarity between temporal features such as the time of day.

The Matern kernel computes the similarity between the tunable parameters.
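
For illustration, a minimal sketch of instantiating TemporalKernel for a 4-dimensional input in which column 3 holds a time-of-day feature (the dimensions are made up):

    from ax.models.torch.botorch_modular.kernels import TemporalKernel

    # 4 input columns; column 3 is the temporal feature (e.g. time of day),
    # handled by the periodic component; the rest use the Matern component.
    kernel = TemporalKernel(dim=4, temporal_features=[3])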

ax.models.torch.botorch_modular.input_constructors.covar_modules module

ax.models.torch.botorch_modular.input_constructors.input_transforms module

ax.models.torch.botorch_modular.input_constructors.outcome_transform module

ax.models.torch.cbo_lcea module

class ax.models.torch.cbo_lcea.LCEABO(decomposition: dict[str, list[str]], cat_feature_dict: dict | None = None, embs_feature_dict: dict | None = None, context_weight_dict: dict | None = None, embs_dim_list: list[int] | None = None, gp_model_args: dict[str, Any] | None = None)[source]

Bases: BotorchModel

Does Bayesian optimization with Latent Context Embedding Additive (LCE-A) GP. The parameter space decomposition must be provided.

Parameters:
  • decomposition – Keys are context names. Values are lists of parameter names belonging to the context, e.g. {‘context1’: [‘p1_c1’, ‘p2_c1’], ‘context2’: [‘p1_c2’, ‘p2_c2’]}.

  • gp_model_args – Dictionary of kwargs to pass to GP model training. For example, train_embedding (Boolean, default True): if True, the context embedding is trained; otherwise the pre-trained embeddings from embs_feature_dict are used.
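
For illustration, a minimal construction sketch using the decomposition format from the parameter description above:

    from ax.models.torch.cbo_lcea import LCEABO

    model = LCEABO(
        decomposition={
            "context1": ["p1_c1", "p2_c1"],
            "context2": ["p1_c2", "p2_c2"],
        },
        gp_model_args={"train_embedding": True},
    )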

best_point(search_space_digest: SearchSpaceDigest, torch_opt_config: TorchOptConfig) Tensor | None[source]

Identify the current best point, satisfying the constraints in the same format as to gen.

Return None if no such point can be identified.

Parameters:
  • search_space_digest – A SearchSpaceDigest object containing metadata about the search space (e.g. bounds, parameter types).

  • torch_opt_config – A TorchOptConfig object containing optimization arguments (e.g., objective weights, constraints).

Returns:

d-tensor of the best point.

fit(datasets: list[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) None[source]

Fit model to m outcomes.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

get_and_fit_model(Xs: list[Tensor], Ys: list[Tensor], Yvars: list[Tensor], task_features: list[int], fidelity_features: list[int], metric_names: list[str], state_dict: dict[str, Tensor] | None = None, fidelity_model_id: int | None = None, **kwargs: Any) GPyTorchModel[source]

Get a fitted LCEAGP model for each outcome.

Parameters:
  • Xs – X for each outcome.

  • Ys – Y for each outcome.

  • Yvars – Noise variance of Y for each outcome.

Returns: Fitted LCEAGP model.

property model: LCEAGP | ModelListGP
ax.models.torch.cbo_lcea.get_map_model(train_X: Tensor, train_Y: Tensor, train_Yvar: Tensor, decomposition: dict[str, list[int]], train_embedding: bool = True, cat_feature_dict: dict | None = None, embs_feature_dict: dict | None = None, embs_dim_list: list[int] | None = None, context_weight_dict: dict | None = None) tuple[LCEAGP, ExactMarginalLogLikelihood][source]

Obtain MAP fitting of Latent Context Embedding Additive (LCE-A) GP.

ax.models.torch.cbo_lcem module

class ax.models.torch.cbo_lcem.LCEMBO(context_cat_feature: Tensor | None = None, context_emb_feature: Tensor | None = None, embs_dim_list: list[int] | None = None)[source]

Bases: BotorchModel

Does Bayesian optimization with LCE-M GP.

get_and_fit_model(Xs: list[Tensor], Ys: list[Tensor], Yvars: list[Tensor], task_features: list[int], fidelity_features: list[int], metric_names: list[str], state_dict: dict[str, Tensor] | None = None, fidelity_model_id: int | None = None, **kwargs: Any) ModelListGP[source]

Get a fitted multi-task contextual GP model for each outcome.

Parameters:
  • Xs – List of X data, one tensor per outcome.

  • Ys – List of Y data, one tensor per outcome.

  • Yvars – List of noise variances of Y, one tensor per outcome.

  • task_features – List of columns of X that are tasks.

Returns: A ModelListGP in which each model is a fitted LCEM GP.

ax.models.torch.cbo_sac module

class ax.models.torch.cbo_sac.SACBO(decomposition: dict[str, list[str]])[source]

Bases: BotorchModel

Does Bayesian optimization with structural additive contextual GP (SACGP). The parameter space decomposition must be provided.

Parameters:

decomposition – Keys are context names. Values are lists of parameter names belonging to the context, e.g. {‘context1’: [‘p1_c1’, ‘p2_c1’], ‘context2’: [‘p1_c2’, ‘p2_c2’]}.

fit(datasets: list[SupervisedDataset], search_space_digest: SearchSpaceDigest, candidate_metadata: list[list[dict[str, Any] | None]] | None = None) None[source]

Fit model to m outcomes.

Parameters:
  • datasets – A list of SupervisedDataset containers, each corresponding to the data of one metric (outcome).

  • search_space_digest – A SearchSpaceDigest object containing metadata on the features in the datasets.

  • candidate_metadata – Model-produced metadata for candidates, in the order corresponding to the Xs.

get_and_fit_model(Xs: list[Tensor], Ys: list[Tensor], Yvars: list[Tensor], task_features: list[int], fidelity_features: list[int], metric_names: list[str], state_dict: dict[str, Tensor] | None = None, fidelity_model_id: int | None = None, **kwargs: Any) GPyTorchModel[source]

Get a fitted StructuralAdditiveContextualGP model for each outcome.

Parameters:
  • Xs – X for each outcome.

  • Ys – Y for each outcome.

  • Yvars – Noise variance of Y for each outcome.

Returns: Fitted StructuralAdditiveContextualGP model.

ax.models.torch.cbo_sac.generate_model_space_decomposition(decomposition: dict[str, list[str]], feature_names: list[str]) dict[str, list[int]][source]

ax.models.torch.fully_bayesian module

ax.models.torch.fully_bayesian_model_utils module

ax.models.torch.utils module

class ax.models.torch.utils.SubsetModelData(model: botorch.models.model.Model, objective_weights: torch.Tensor, outcome_constraints: tuple[torch.Tensor, torch.Tensor] | None, objective_thresholds: torch.Tensor | None, indices: torch.Tensor)[source]

Bases: object

indices: Tensor
model: Model
objective_thresholds: Tensor | None
objective_weights: Tensor
outcome_constraints: tuple[Tensor, Tensor] | None
ax.models.torch.utils.get_botorch_objective_and_transform(botorch_acqf_class: type[AcquisitionFunction], model: Model, objective_weights: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, X_observed: Tensor | None = None, risk_measure: RiskMeasureMCObjective | None = None) tuple[MCAcquisitionObjective | None, PosteriorTransform | None][source]

Constructs BoTorch objective and posterior-transform objects (an MCAcquisitionObjective and/or a PosteriorTransform).

Parameters:
  • botorch_acqf_class – The acquisition function class the objective and posterior transform are to be used with. This is mainly used to determine whether to construct a multi-output or a single-output objective.

  • model – A BoTorch Model.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models)

  • X_observed – Observed points that are feasible and appear in the objective or the constraints. None if there are no such points.

  • risk_measure – An optional risk measure for robust optimization.

Returns:

A two-tuple containing (optionally) an MCAcquisitionObjective and (optionally) a PosteriorTransform.

ax.models.torch.utils.normalize_indices(indices: list[int], d: int) list[int][source]

Normalize a list of indices to ensure that they are positive.

Parameters:
  • indices – A list of indices (may contain negative indices for indexing “from the back”).

  • d – The dimension of the tensor to index.

Returns:

A normalized list of indices such that each index is between 0 and d-1.
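
A quick worked example of the documented behavior:

    from ax.models.torch.utils import normalize_indices

    # Negative indices count from the back, so -1 maps to d - 1 = 3.
    normalize_indices(indices=[0, -1], d=4)  # expected: [0, 3]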

ax.models.torch.utils.pick_best_out_of_sample_point_acqf_class(outcome_constraints: tuple[Tensor, Tensor] | None = None, mc_samples: int = 512, qmc: bool = True, seed_inner: int | None = None, risk_measure: RiskMeasureMCObjective | None = None) tuple[type[AcquisitionFunction], dict[str, Any]][source]
ax.models.torch.utils.predict_from_model(model: Model, X: Tensor, use_posterior_predictive: bool = False) tuple[Tensor, Tensor][source]

Predicts outcomes given a model and input tensor.

For a GaussianMixturePosterior we currently use a Gaussian approximation where we compute the mean and variance of the Gaussian mixture. This should ideally be changed to compute quantiles instead when Ax supports non-Gaussian distributions.

Parameters:
  • model – A botorch Model.

  • X – An n x d tensor of input parameters.

  • use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).

Returns:

A two-element tuple containing

  • The predicted posterior mean as an n x o-dim tensor.

  • The predicted posterior covariance as an n x o x o-dim tensor.

Return type:

tuple[Tensor, Tensor]
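
A short sketch, assuming model is a fitted BoTorch Model with o outputs over d = 3 features, built elsewhere:

    import torch
    from ax.models.torch.utils import predict_from_model

    # `model` is a fitted botorch Model (assumption: built elsewhere).
    X = torch.rand(5, 3, dtype=torch.float64)
    mean, cov = predict_from_model(model=model, X=X)
    # mean has shape 5 x o; cov has shape 5 x o x o.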

ax.models.torch.utils.randomize_objective_weights(objective_weights: Tensor, random_scalarization_distribution: str = 'simplex') Tensor[source]

Generate a random weighting based on acquisition function settings.

Parameters:
  • objective_weights – Base weights to multiply by random values.

  • random_scalarization_distribution – “simplex” or “hypersphere”.

Returns:

The base objective weights multiplied element-wise by a random weighting drawn from the specified distribution.
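
A short sketch of drawing one random scalarization; the base weights are illustrative.

    import torch
    from ax.models.torch.utils import randomize_objective_weights

    base_weights = torch.tensor([1.0, 1.0])
    weights = randomize_objective_weights(
        objective_weights=base_weights,
        random_scalarization_distribution="simplex",
    )
    # Per the parameter description above, `weights` is `base_weights`
    # multiplied element-wise by random values drawn from the simplex.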

ax.models.torch.utils.subset_model(model: Model, objective_weights: Tensor, outcome_constraints: tuple[Tensor, Tensor] | None = None, objective_thresholds: Tensor | None = None) SubsetModelData[source]

Subset a botorch model to the outputs used in the optimization.

Parameters:
  • model – A BoTorch Model. If the model does not implement the subset_outputs method, this function is a no-op and returns the input arguments.

  • objective_weights – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.

  • objective_thresholds – The m-dim tensor of objective thresholds. There is one for each modeled metric.

  • outcome_constraints – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models)

Returns:

A SubsetModelData dataclass containing the model, objective_weights, outcome_constraints, objective thresholds, all subset to only those outputs that appear in either the objective weights or the outcome constraints, along with the indices of the outputs.
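
A short sketch, assuming model is a fitted three-output BoTorch Model built elsewhere:

    import torch
    from ax.models.torch.utils import subset_model

    # Only outputs 0 and 2 carry non-zero weight, so output 1 can be
    # dropped before acquisition optimization.
    data = subset_model(
        model=model,  # assumption: fitted 3-output botorch Model
        objective_weights=torch.tensor([1.0, 0.0, -1.0]),
    )
    # data.model, data.objective_weights, etc. are subset to the used
    # outputs; data.indices records which original outputs were kept.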

ax.models.torch.utils.tensor_callable_to_array_callable(tensor_func: Callable[[Tensor], Tensor], device: device) Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[_ScalarType_co]]][source]

Convert a tensor callable to an array callable.