ax.models
Base Models
ax.models.base module

class ax.models.base.Model
Bases: object

Base class for an Ax model.

Note: the core methods each model has (fit, predict, gen, cross_validate, and best_point) are not present in this base class, because the signatures for those methods vary based on the type of the model. This class contains only the methods that all models have in common and for which they share the same signature.
ax.models.discrete_base module

class ax.models.discrete_base.DiscreteModel
Bases: ax.models.base.Model

This class specifies the interface for a model based on discrete parameters.
These methods should be implemented to have access to all of the features of Ax.
best_point(n, parameter_values, objective_weights, outcome_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None)
Obtains the point that has the best value according to the model predictions.
cross_validate(Xs_train, Ys_train, Yvars_train, X_test)
Do cross validation with the given training and test sets.
Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
- Xs_train (List[List[List[Union[str, bool, float, int, None]]]]) – A list of m lists X of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome.
- Ys_train (List[List[float]]) – The corresponding list of m lists Y, each of length k_i, for each outcome.
- Yvars_train (List[List[float]]) – The variances of each entry in Ys, same shape.
- X_test (List[List[Union[str, bool, float, int, None]]]) – List of the j parameterizations at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
fit(Xs, Ys, Yvars, parameter_values, outcome_names)
Fit model to m outcomes.

Parameters:
- Xs (List[List[List[Union[str, bool, float, int, None]]]]) – A list of m lists X of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome.
- Ys (List[List[float]]) – The corresponding list of m lists Y, each of length k_i, for each outcome.
- Yvars (List[List[float]]) – The variances of each entry in Ys, same shape.
- parameter_values (List[List[Union[str, bool, float, int, None]]]) – A list of possible values for each parameter.

Return type: None
gen(n, parameter_values, objective_weights, outcome_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- parameter_values (List[List[Union[str, bool, float, int, None]]]) – A list of possible values for each parameter.
- objective_weights (Optional[ndarray]) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- fixed_features (Optional[Dict[int, Union[str, bool, float, int, None]]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- pending_observations (Optional[List[List[List[Union[str, bool, float, int, None]]]]]) – A list of m lists of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome i.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.

Return type: Tuple[List[List[Union[str, bool, float, int, None]]], List[float], Dict[str, Any]]

Returns: 3-element tuple containing
- List of n generated points, where each point is represented by a list of parameter values.
- List of weights for each of the n points.
- Dictionary of model-specific generation metadata.
predict(X)
Predict

Parameters:
- X (List[List[Union[str, bool, float, int, None]]]) – List of the j parameterizations at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
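Example: a minimal sketch of implementing the DiscreteModel interface. The class name and the "most frequent parameterization" logic below are purely illustrative, not part of Ax.

    from typing import Any, Dict, List, Tuple

    from ax.models.discrete_base import DiscreteModel


    class ModeGenerator(DiscreteModel):
        """Suggests the parameterizations seen most often in the training data."""

        def fit(self, Xs, Ys, Yvars, parameter_values, outcome_names) -> None:
            # Store the training parameterizations of the first outcome.
            self._observed = [tuple(x) for x in Xs[0]]

        def gen(self, n, parameter_values, objective_weights, outcome_constraints=None,
                fixed_features=None, pending_observations=None, model_gen_options=None,
                ) -> Tuple[List[List[Any]], List[float], Dict[str, Any]]:
            # Return the n most frequently observed points, each with weight 1.0.
            counts: Dict[tuple, int] = {}
            for x in self._observed:
                counts[x] = counts.get(x, 0) + 1
            top = sorted(counts, key=counts.get, reverse=True)[:n]
            return [list(x) for x in top], [1.0] * len(top), {}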
ax.models.model_utils module

ax.models.model_utils.add_fixed_features(tunable_points, d, fixed_features, tunable_feature_indices)
Add fixed features to points in tunable space.
Parameters:
- tunable_points (ndarray) – Points in tunable space.
- d (int) – Dimension of parameter space.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- tunable_feature_indices (ndarray) – Parameter indices (in d) which are tunable.

Returns: Points in the full d-dimensional space, defined by bounds.

Return type: points
ax.models.model_utils.as_array(x)
Convert every item in a tuple of tensors/arrays into an array.
ax.models.model_utils.best_in_sample_point(Xs, model, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, options=None)
Select the best point that has been observed.

Implements two approaches to selecting the best point.

For both approaches, only points that satisfy parameter space constraints (bounds, linear_constraints, fixed_features) will be returned. Points must also be observed for all objective and constraint outcomes. Returned points may violate outcome constraints, depending on the method below.

1: Select the point that maximizes the expected utility (objective_weights^T posterior_objective_means - baseline) * Prob(feasible). Here baseline should be selected so that at least one point has positive utility. It can be specified in the options dict; otherwise min(objective_weights^T posterior_objective_means) will be used, where the min is over observed points.

2: Select the best-objective point that is feasible with at least probability p.

The following quantities may be specified in the options dict:
- best_point_method: 'max_utility' (default) or 'feasible_threshold' to select between the two approaches described above.
- utility_baseline: Value for the baseline used in the max_utility approach. If not provided, defaults to the min objective value.
- probability_threshold: Threshold for the feasible_threshold approach. Defaults to p=0.95.
- feasibility_mc_samples: Number of MC samples used for estimating the probability of feasibility (defaults to 10k).
Parameters:
- Xs (Union[List[Tensor], List[ndarray]]) – Training data for the points, among which to select the best.
- model (Union[NumpyModel, TorchModel]) – Numpy or Torch model.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each feature.
- objective_weights (Union[Tensor, ndarray, None]) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary with settings described above.

Returns: A two-element tuple, or None if no feasible point exists. The tuple contains
- the d-array of the best point,
- the utility at the best point.
ax.models.model_utils.best_observed_point(model, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, options=None)
Select the best point that has been observed.

Implements two approaches to selecting the best point.

For both approaches, only points that satisfy parameter space constraints (bounds, linear_constraints, fixed_features) will be returned. Points must also be observed for all objective and constraint outcomes. Returned points may violate outcome constraints, depending on the method below.

1: Select the point that maximizes the expected utility (objective_weights^T posterior_objective_means - baseline) * Prob(feasible). Here baseline should be selected so that at least one point has positive utility. It can be specified in the options dict; otherwise min(objective_weights^T posterior_objective_means) will be used, where the min is over observed points.

2: Select the best-objective point that is feasible with at least probability p.

The following quantities may be specified in the options dict:
- best_point_method: 'max_utility' (default) or 'feasible_threshold' to select between the two approaches described above.
- utility_baseline: Value for the baseline used in the max_utility approach. If not provided, defaults to the min objective value.
- probability_threshold: Threshold for the feasible_threshold approach. Defaults to p=0.95.
- feasibility_mc_samples: Number of MC samples used for estimating the probability of feasibility (defaults to 10k).
Parameters:
- model (Union[NumpyModel, TorchModel]) – Numpy or Torch model.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each feature.
- objective_weights (Union[Tensor, ndarray, None]) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary with settings described above.

Return type: Union[Tensor, ndarray, None]

Returns: A d-array of the best point, or None if no feasible point exists.
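Example: a minimal sketch of the feasible_threshold strategy. Here model is assumed to be an already-fitted NumpyModel with two outcomes; the arrays are illustrative.

    import numpy as np
    from ax.models.model_utils import best_observed_point

    best = best_observed_point(
        model=model,                      # assumed: a fitted NumpyModel or TorchModel
        bounds=[(0.0, 1.0), (0.0, 1.0)],  # one (lower, upper) tuple per feature
        objective_weights=np.array([1.0, 0.0]),  # maximize the first outcome
        outcome_constraints=(np.array([[0.0, 1.0]]), np.array([[1.0]])),  # f_2(x) <= 1
        options={
            "best_point_method": "feasible_threshold",
            "probability_threshold": 0.9,
            "feasibility_mc_samples": 10000,
        },
    )
    if best is None:
        print("no feasible observed point")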
ax.models.model_utils.check_duplicate(point, points)
Check if a point exists in another array.

Parameters:
- point (ndarray) – Newly generated point to check.
- points (ndarray) – Points previously generated.

Return type: bool

Returns: True if the point is contained in points, else False.
ax.models.model_utils.check_param_constraints(linear_constraints, point)
Check if a point satisfies parameter constraints.

Parameters:
- linear_constraints (Tuple[ndarray, ndarray]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- point (ndarray) – A candidate point in d-dimensional space, as a (1 x d) matrix.

Returns: 2-element tuple containing
- Flag that is True if all constraints are satisfied by the point.
- Indices of constraints which are violated by the point.
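Example: a minimal sketch of checking one candidate against linear constraints A x <= b (values are illustrative).

    import numpy as np
    from ax.models.model_utils import check_param_constraints

    A = np.array([[1.0, 1.0], [-1.0, 0.0]])  # k=2 constraints on d=2 parameters
    b = np.array([[1.5], [0.0]])             # x1 + x2 <= 1.5 and -x1 <= 0
    point = np.array([[0.7, 0.6]])           # (1 x d) candidate

    ok, violated = check_param_constraints(linear_constraints=(A, b), point=point)
    # ok is True when every constraint holds; violated lists the offending indices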
ax.models.model_utils.filter_constraints_and_fixed_features(X, bounds, linear_constraints=None, fixed_features=None)
Filter points to those that satisfy bounds, linear_constraints, and fixed_features.

Parameters:
- X (Union[Tensor, ndarray]) – A tensor or array of points.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each feature.
- linear_constraints (Optional[Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.

Return type: Union[Tensor, ndarray]

Returns: Feasible points.
ax.models.model_utils.get_observed(Xs, objective_weights, outcome_constraints=None)
Filter points to those that are observed for objective outcomes and outcomes that show up in outcome_constraints (if there are any).

Parameters:
- Xs (Union[List[Tensor], List[ndarray]]) – A list of m (k_i x d) feature matrices X. Number of rows k_i can vary from i=1,...,m.
- objective_weights (Union[Tensor, ndarray]) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.

Return type: Union[Tensor, ndarray]

Returns: Points observed for all objective outcomes and outcome constraints.
ax.models.model_utils.rejection_sample(gen_unconstrained, n, d, tunable_feature_indices, linear_constraints=None, deduplicate=False, max_draws=None, fixed_features=None, rounding_func=None, existing_points=None)
Rejection sample in parameter space.
Models must implement a gen_unconstrained method in order to support rejection sampling via this utility.
ax.models.model_utils.tunable_feature_indices(bounds, fixed_features=None)
Get the feature indices of tunable features.

Parameters:
- bounds – A list of (lower, upper) tuples for each feature.
- fixed_features – A map {feature_index: value} for features that should be fixed to a particular value during generation.

Return type: ndarray

Returns: The indices of tunable features.
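Example: with three features and feature 1 held fixed, the tunable indices are 0 and 2.

    from ax.models.model_utils import tunable_feature_indices

    idx = tunable_feature_indices(
        bounds=[(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)],
        fixed_features={1: 0.5},
    )
    # idx == array([0, 2])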
ax.models.numpy_base module

class ax.models.numpy_base.NumpyModel
Bases: ax.models.base.Model

This class specifies the interface for a numpy-based model.
These methods should be implemented to have access to all of the features of Ax.
best_point(bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, model_gen_options=None)
Identify the current best point, satisfying the constraints in the same format as to gen.
Return None if no such point can be identified.

Parameters:
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (ndarray) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.

Return type: Optional[ndarray]

Returns: A d-array of the best point.
cross_validate(Xs_train, Ys_train, Yvars_train, X_test)
Do cross validation with the given training and test sets.
Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
- Xs_train (List[ndarray]) – A list of m (k_i x d) feature matrices X. Number of rows k_i can vary from i=1,...,m.
- Ys_train (List[ndarray]) – The corresponding list of m (k_i x 1) outcome arrays Y, for each outcome.
- Yvars_train (List[ndarray]) – The variances of each entry in Ys, same shape.
- X_test (ndarray) – (j x d) array of the j points at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
fit(Xs, Ys, Yvars, bounds, task_features, feature_names, metric_names, fidelity_features, candidate_metadata=None)
Fit model to m outcomes.

Parameters:
- Xs (List[ndarray]) – A list of m (k_i x d) feature matrices X. Number of rows k_i can vary from i=1,...,m.
- Ys (List[ndarray]) – The corresponding list of m (k_i x 1) outcome arrays Y, for each outcome.
- Yvars (List[ndarray]) – The variances of each entry in Ys, same shape.
- bounds (List[Tuple[float, float]]) – A list of d (lower, upper) tuples for each column of X.
- task_features (List[int]) – Columns of X that take integer values and should be treated as task parameters.
- fidelity_features (List[int]) – Columns of X that should be treated as fidelity parameters.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
gen(n, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None, rounding_func=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (ndarray) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- pending_observations (Optional[List[ndarray]]) – A list of m (k_i x d) feature arrays X for m outcomes and k_i pending observations for outcome i.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- rounding_func (Optional[Callable[[ndarray], ndarray]]) – A function that rounds an optimization result (xbest) appropriately (i.e., according to round-trip transformations).

Return type: Tuple[ndarray, ndarray, Dict[str, Any], Optional[List[Optional[Dict[str, Any]]]]]

Returns: 4-element tuple containing
- (n x d) array of generated points.
- n-array of weights for each point.
- Generation metadata.
- Dictionary of model-specific metadata for the given generation candidates.
predict(X)
Predict

Parameters:
- X (ndarray) – (j x d) array of the j points at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
update(Xs, Ys, Yvars, candidate_metadata=None)
Update the model.
Updating the model requires both existing and additional data. The data passed into this method will become the new training data.

Parameters:
- Xs (List[ndarray]) – Existing + additional data for the model, in the same format as for fit.
- Ys (List[ndarray]) – Existing + additional data for the model, in the same format as for fit.
- Yvars (List[ndarray]) – Existing + additional data for the model, in the same format as for fit.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
ax.models.torch_base module

class ax.models.torch_base.TorchModel
Bases: ax.models.base.Model

This class specifies the interface for a torch-based model.
These methods should be implemented to have access to all of the features of Ax.
best_point(bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, model_gen_options=None, target_fidelities=None)
Identify the current best point, satisfying the constraints in the same format as to gen.
Return None if no such point can be identified.

Parameters:
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- target_fidelities (Optional[Dict[int, float]]) – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Return type: Optional[Tensor]

Returns: d-tensor of the best point.
cross_validate(Xs_train, Ys_train, Yvars_train, X_test)
Do cross validation with the given training and test sets.
Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
- Xs_train (List[Tensor]) – A list of m (k_i x d) feature tensors X. Number of rows k_i can vary from i=1,...,m.
- Ys_train (List[Tensor]) – The corresponding list of m (k_i x 1) outcome tensors Y, for each outcome.
- Yvars_train (List[Tensor]) – The variances of each entry in Ys, same shape.
- X_test (Tensor) – (j x d) tensor of the j points at which to make predictions.

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- (j x m) tensor of outcome predictions at X.
- (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
device: Optional[torch.device] = None

dtype: Optional[torch.dtype] = None
fit(Xs, Ys, Yvars, bounds, task_features, feature_names, metric_names, fidelity_features, candidate_metadata=None)
Fit model to m outcomes.

Parameters:
- Xs (List[Tensor]) – A list of m (k_i x d) feature tensors X. Number of rows k_i can vary from i=1,...,m.
- Ys (List[Tensor]) – The corresponding list of m (k_i x 1) outcome tensors Y, for each outcome.
- Yvars (List[Tensor]) – The variances of each entry in Ys, same shape.
- bounds (List[Tuple[float, float]]) – A list of d (lower, upper) tuples for each column of X.
- task_features (List[int]) – Columns of X that take integer values and should be treated as task parameters.
- fidelity_features (List[int]) – Columns of X that should be treated as fidelity parameters.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
gen(n, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None, rounding_func=None, target_fidelities=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- pending_observations (Optional[List[Tensor]]) – A list of m (k_i x d) feature tensors X for m outcomes and k_i pending observations for outcome i.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- rounding_func (Optional[Callable[[Tensor], Tensor]]) – A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).
- target_fidelities (Optional[Dict[int, float]]) – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Return type: Tuple[Tensor, Tensor, Dict[str, Any], Optional[List[Optional[Dict[str, Any]]]]]

Returns: 4-element tuple containing
- (n x d) tensor of generated points.
- n-tensor of weights for each point.
- Generation metadata.
- Dictionary of model-specific metadata for the given generation candidates.
predict(X)
Predict

Parameters:
- X (Tensor) – (j x d) tensor of the j points at which to make predictions.

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- (j x m) tensor of outcome predictions at X.
- (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
update(Xs, Ys, Yvars, candidate_metadata=None)
Update the model.
Updating the model requires both existing and additional data. The data passed into this method will become the new training data.

Parameters:
- Xs (List[Tensor]) – Existing + additional data for the model, in the same format as for fit.
- Ys (List[Tensor]) – Existing + additional data for the model, in the same format as for fit.
- Yvars (List[Tensor]) – Existing + additional data for the model, in the same format as for fit.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
Discrete Models
ax.models.discrete.eb_thompson module

class ax.models.discrete.eb_thompson.EmpiricalBayesThompsonSampler(num_samples=10000, min_weight=None, uniform_weights=False)
Bases: ax.models.discrete.thompson.ThompsonSampler

Generator for Thompson sampling using Empirical Bayes estimates.
The generator applies the positive-part James-Stein estimator to the data passed in via fit and then performs Thompson sampling.
ax.models.discrete.full_factorial module

class ax.models.discrete.full_factorial.FullFactorialGenerator(max_cardinality=100, check_cardinality=True)
Bases: ax.models.discrete_base.DiscreteModel

Generator for full factorial designs.
Generates arms for all possible combinations of parameter values, each with weight 1.
The value of n supplied to gen will be ignored, as the number of arms generated is determined by the list of parameter values. To suppress this warning, use n = -1.
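Example: a minimal sketch of a full factorial design over two discrete parameters (values are illustrative).

    from ax.models.discrete.full_factorial import FullFactorialGenerator

    gen = FullFactorialGenerator()
    points, weights, _ = gen.gen(
        n=-1,  # n is ignored; -1 suppresses the warning
        parameter_values=[["red", "blue"], [1, 2, 3]],  # one list of values per parameter
        objective_weights=None,
    )
    # points holds all 2 * 3 = 6 combinations, each with weight 1.0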
gen(n, parameter_values, objective_weights, outcome_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- parameter_values (List[List[Union[str, bool, float, int, None]]]) – A list of possible values for each parameter.
- objective_weights (Optional[ndarray]) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- fixed_features (Optional[Dict[int, Union[str, bool, float, int, None]]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- pending_observations (Optional[List[List[List[Union[str, bool, float, int, None]]]]]) – A list of m lists of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome i.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.

Return type: Tuple[List[List[Union[str, bool, float, int, None]]], List[float], Dict[str, Any]]

Returns: 3-element tuple containing
- List of n generated points, where each point is represented by a list of parameter values.
- List of weights for each of the n points.
- Dictionary of model-specific generation metadata.
ax.models.discrete.thompson module

class ax.models.discrete.thompson.ThompsonSampler(num_samples=10000, min_weight=None, uniform_weights=False)
Bases: ax.models.discrete_base.DiscreteModel

Generator for Thompson sampling.
The generator performs Thompson sampling on the data passed in via fit. Arms are given weight proportional to the probability that they are winners, according to Monte Carlo simulations.
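Example: a minimal sketch of Thompson sampling over three arms of a single discrete parameter with one outcome; the data are illustrative.

    import numpy as np
    from ax.models.discrete.thompson import ThompsonSampler

    ts = ThompsonSampler(num_samples=10000)
    ts.fit(
        Xs=[[["arm_a"], ["arm_b"], ["arm_c"]]],  # one list of parameterizations per outcome
        Ys=[[0.10, 0.15, 0.12]],                 # observed means for the single outcome
        Yvars=[[0.01, 0.01, 0.01]],              # observed variances, same shape as Ys
        parameter_values=[["arm_a", "arm_b", "arm_c"]],
        outcome_names=["conversion"],
    )
    points, weights, _ = ts.gen(
        n=3,
        parameter_values=[["arm_a", "arm_b", "arm_c"]],
        objective_weights=np.array([1.0]),
    )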
fit(Xs, Ys, Yvars, parameter_values, outcome_names)
Fit model to m outcomes.

Parameters:
- Xs (List[List[List[Union[str, bool, float, int, None]]]]) – A list of m lists X of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome.
- Ys (List[List[float]]) – The corresponding list of m lists Y, each of length k_i, for each outcome.
- Yvars (List[List[float]]) – The variances of each entry in Ys, same shape.
- parameter_values (List[List[Union[str, bool, float, int, None]]]) – A list of possible values for each parameter.

Return type: None
gen(n, parameter_values, objective_weights, outcome_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- parameter_values (List[List[Union[str, bool, float, int, None]]]) – A list of possible values for each parameter.
- objective_weights (Optional[ndarray]) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- fixed_features (Optional[Dict[int, Union[str, bool, float, int, None]]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- pending_observations (Optional[List[List[List[Union[str, bool, float, int, None]]]]]) – A list of m lists of parameterizations (each parameterization is a list of parameter values of length d), each of length k_i, for each outcome i.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.

Return type: Tuple[List[List[Union[str, bool, float, int, None]]], List[float], Dict[str, Any]]

Returns: 3-element tuple containing
- List of n generated points, where each point is represented by a list of parameter values.
- List of weights for each of the n points.
- Dictionary of model-specific generation metadata.
predict(X)
Predict

Parameters:
- X (List[List[Union[str, bool, float, int, None]]]) – List of the j parameterizations at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
NumPy Models
ax.models.numpy.randomforest module

class ax.models.numpy.randomforest.RandomForest(max_features='sqrt', num_trees=500)
Bases: ax.models.numpy_base.NumpyModel

A Random Forest model.
Uses a parametric bootstrap to handle uncertainty in Y.
Can be used to fit data, make predictions, and do cross validation; however, gen is not implemented, so this model cannot generate new points.

Parameters:
- max_features – The maximum number of features to consider at each split. Defaults to 'sqrt'.
- num_trees – Number of trees in the forest. Defaults to 500.
cross_validate(Xs_train, Ys_train, Yvars_train, X_test)
Do cross validation with the given training and test sets.
Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
- Xs_train (List[ndarray]) – A list of m (k_i x d) feature matrices X. Number of rows k_i can vary from i=1,...,m.
- Ys_train (List[ndarray]) – The corresponding list of m (k_i x 1) outcome arrays Y, for each outcome.
- Yvars_train (List[ndarray]) – The variances of each entry in Ys, same shape.
- X_test (ndarray) – (j x d) array of the j points at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
fit(Xs, Ys, Yvars, bounds, task_features, feature_names, metric_names, fidelity_features, candidate_metadata=None)
Fit model to m outcomes.

Parameters:
- Xs (List[ndarray]) – A list of m (k_i x d) feature matrices X. Number of rows k_i can vary from i=1,...,m.
- Ys (List[ndarray]) – The corresponding list of m (k_i x 1) outcome arrays Y, for each outcome.
- Yvars (List[ndarray]) – The variances of each entry in Ys, same shape.
- bounds (List[Tuple[float, float]]) – A list of d (lower, upper) tuples for each column of X.
- task_features (List[int]) – Columns of X that take integer values and should be treated as task parameters.
- fidelity_features (List[int]) – Columns of X that should be treated as fidelity parameters.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
predict(X)
Predict

Parameters:
- X (ndarray) – (j x d) array of the j points at which to make predictions.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (j x m) array of outcome predictions at X.
- (j x m x m) array of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
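Example: a minimal sketch of fitting a RandomForest to one synthetic outcome and predicting at new points; shapes follow the fit and predict signatures above.

    import numpy as np
    from ax.models.numpy.randomforest import RandomForest

    X = np.random.rand(20, 2)            # 20 observations, d=2 features
    Y = X.sum(axis=1, keepdims=True)     # (k x 1) outcome array
    Yvar = np.full_like(Y, 0.01)         # observation variances

    rf = RandomForest(num_trees=100)
    rf.fit(
        Xs=[X], Ys=[Y], Yvars=[Yvar],
        bounds=[(0.0, 1.0), (0.0, 1.0)],
        task_features=[], feature_names=["x1", "x2"],
        metric_names=["f"], fidelity_features=[],
    )
    mean, cov = rf.predict(np.array([[0.5, 0.5], [0.2, 0.8]]))  # (2 x 1), (2 x 1 x 1)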
Random Models
ax.models.random.base module

class ax.models.random.base.RandomModel(deduplicate=False, seed=None, generated_points=None)
Bases: ax.models.base.Model

This class specifies the basic skeleton for a random model.
As random generators do not make use of models, they do not implement the fit or predict methods.
These models do not need data or optimization configs.
To satisfy search space parameter constraints, these models can use rejection sampling. To enable rejection sampling for a subclass, only _gen_samples needs to be implemented; alternatively, _gen_unconstrained/gen can be implemented directly.
deduplicate
If specified, a single instantiation of the model will not return the same point twice. This flag is used in rejection sampling.

scramble
If True, permutes the parameter values among the elements of the Sobol sequence. Default is True.

seed
An optional seed value for scrambling.
gen(n, bounds, linear_constraints=None, fixed_features=None, model_gen_options=None, rounding_func=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X. Defined on [0, 1]^d.
- linear_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that is passed along to the model.
- rounding_func (Optional[Callable[[ndarray], ndarray]]) – A function that rounds an optimization result appropriately (e.g., according to round-trip transformations).

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (n x d) array of generated points.
- Uniform weights, an n-array of ones for each point.
ax.models.random.sobol module

class ax.models.random.sobol.SobolGenerator(seed=None, deduplicate=False, init_position=0, scramble=True, generated_points=None)
Bases: ax.models.random.base.RandomModel

This class specifies the generation algorithm for a Sobol generator.
As Sobol does not make use of a model, it does not implement the fit or predict methods.
deduplicate
If true, a single instantiation of the generator will not return the same point twice.

init_position
The initial state of the Sobol generator. Starts at 0 by default.

scramble
If True, permutes the parameter values among the elements of the Sobol sequence. Default is True.

seed
An optional seed value for scrambling.
gen(n, bounds, linear_constraints=None, fixed_features=None, model_gen_options=None, rounding_func=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- linear_constraints (Optional[Tuple[ndarray, ndarray]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- rounding_func (Optional[Callable[[ndarray], ndarray]]) – A function that rounds an optimization result appropriately (e.g., according to round-trip transformations); unused here.

Return type: Tuple[ndarray, ndarray]

Returns: 2-element tuple containing
- (n x d) array of generated points.
- Uniform weights, an n-array of ones for each point.
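Example: a minimal sketch of drawing a scrambled Sobol sample in the unit square with the second feature held fixed.

    from ax.models.random.sobol import SobolGenerator

    sobol = SobolGenerator(seed=0, scramble=True)
    points, weights = sobol.gen(
        n=8,
        bounds=[(0.0, 1.0), (0.0, 1.0)],
        fixed_features={1: 0.5},  # hold the second feature at 0.5
    )
    # points is an (8 x 2) array; weights is an array of ones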
ax.models.random.uniform module

class ax.models.random.uniform.UniformGenerator(deduplicate=False, seed=None)
Bases: ax.models.random.base.RandomModel

This class specifies a uniform random generation algorithm.
As a uniform generator does not make use of a model, it does not implement the fit or predict methods.

seed
An optional seed value for the underlying PRNG.
Torch Models
ax.models.torch.botorch module

class ax.models.torch.botorch.BotorchModel(model_constructor=<function get_and_fit_model>, model_predictor=<function predict_from_model>, acqf_constructor=<function get_NEI>, acqf_optimizer=<function scipy_optimizer>, best_point_recommender=<function recommend_best_observed_point>, refit_on_cv=False, refit_on_update=True, warm_start_refitting=True, **kwargs)
Bases: ax.models.torch_base.TorchModel

Customizable BoTorch model.
By default, this uses a noisy Expected Improvement acquisition function on top of a model made up of separate GPs, one for each outcome. This behavior can be modified by providing custom implementations of the following components:
- a model_constructor that instantiates and fits a model on data
- a model_predictor that predicts outcomes using the fitted model
- an acqf_constructor that creates an acquisition function from a fitted model
- an acqf_optimizer that optimizes the acquisition function
- a best_point_recommender that recommends a current "best" point (i.e., what the model recommends if the learning process ended now)
Parameters:
- model_constructor (Callable[[List[Tensor], List[Tensor], List[Tensor], List[int], List[int], List[str], Optional[Dict[str, Tensor]], Any], Model]) – A callable that instantiates and fits a model on data, with signature as described below.
- model_predictor (Callable[[Model, Tensor], Tuple[Tensor, Tensor]]) – A callable that predicts using the fitted model, with signature as described below.
- acqf_constructor (Callable[[Model, Tensor, Optional[Tuple[Tensor, Tensor]], Optional[Tensor], Optional[Tensor], Any], AcquisitionFunction]) – A callable that creates an acquisition function from a fitted model, with signature as described below.
- acqf_optimizer (Union[Callable[[AcquisitionFunction, Tensor, int, Optional[List[Tuple[Tensor, Tensor, float]]], Optional[Dict[int, float]], Optional[Callable[[Tensor], Tensor]], Any], Tuple[Tensor, Tensor]], Callable[[List[AcquisitionFunction], Tensor, Optional[List[Tuple[Tensor, Tensor, float]]], Optional[Dict[int, float]], Optional[Callable[[Tensor], Tensor]], Any], Tuple[Tensor, Tensor]]]) – A callable that optimizes the acquisition function, with signature as described below.
- best_point_recommender (Callable[[TorchModel, List[Tuple[float, float]], Tensor, Optional[Tuple[Tensor, Tensor]], Optional[Tuple[Tensor, Tensor]], Optional[Dict[int, float]], Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]], Optional[Dict[int, float]]], Optional[Tensor]]) – A callable that recommends the best point, with signature as described below.
- refit_on_cv (bool) – If True, refit the model for each fold when performing cross-validation.
- refit_on_update (bool) – If True, refit the model after updating the training data using the update method.
- warm_start_refitting (bool) – If True, start model refitting from previous model parameters in order to speed up the fitting process.
Call signatures:

model_constructor(Xs, Ys, Yvars, task_features, fidelity_features, metric_names, state_dict, **kwargs) -> model
Here Xs, Ys, Yvars are lists of tensors (one element per outcome), task_features identifies columns of Xs that should be modeled as a task, fidelity_features is a list of ints that specify the positions of fidelity parameters in Xs, metric_names provides the names of each Y in Ys, state_dict is a PyTorch module state dict, and model is a BoTorch Model. Optional kwargs are passed through from the BotorchModel constructor. This callable is assumed to return a fitted BoTorch model that has the same dtype and lives on the same device as the input tensors.

model_predictor(model, X) -> [mean, cov]
Here model is a fitted BoTorch model, X is a tensor of candidate points, and mean and cov are the posterior mean and covariance, respectively.

acqf_constructor(model, objective_weights, outcome_constraints, X_observed, X_pending, **kwargs) -> acq_function
Here model is a BoTorch Model, objective_weights is a tensor of weights for the model outputs, outcome_constraints is a tuple of tensors describing the (linear) outcome constraints, X_observed are previously observed points, and X_pending are points whose evaluation is pending. acq_function is a BoTorch acquisition function crafted from these inputs. For additional details on the arguments, see get_NEI.

acqf_optimizer(acq_function, bounds, n, inequality_constraints, fixed_features, rounding_func, **kwargs) -> candidates
Here acq_function is a BoTorch AcquisitionFunction, bounds is a tensor containing bounds on the parameters, n is the number of candidates to be generated, inequality_constraints are inequality constraints on parameter values, fixed_features specifies features that should be fixed during generation, and rounding_func is a callback that rounds an optimization result appropriately. candidates is a tensor of generated candidates. For additional details on the arguments, see scipy_optimizer.

best_point_recommender(model, bounds, objective_weights, outcome_constraints, linear_constraints, fixed_features, model_gen_options, target_fidelities) -> candidates
Here model is a TorchModel, bounds is a list of tuples containing bounds on the parameters, objective_weights is a tensor of weights for the model outputs, outcome_constraints is a tuple of tensors describing the (linear) outcome constraints, linear_constraints is a tuple of tensors describing constraints on the design, fixed_features specifies features that should be fixed during generation, model_gen_options is a config dictionary that can contain model-specific options, and target_fidelities is a map from fidelity feature column indices to their respective target fidelities, used for multi-fidelity optimization problems.
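Example: a minimal sketch of where custom components plug in. The wrapper below simply delegates to the documented default constructor get_and_fit_model; a real customization would build a different BoTorch model there.

    from ax.models.torch.botorch import BotorchModel
    from ax.models.torch.botorch_defaults import get_and_fit_model, scipy_optimizer


    def my_model_constructor(Xs, Ys, Yvars, task_features, fidelity_features,
                             metric_names, state_dict=None, **kwargs):
        # Delegate to the default; swap in a custom BoTorch model here if needed.
        return get_and_fit_model(
            Xs=Xs, Ys=Ys, Yvars=Yvars, task_features=task_features,
            fidelity_features=fidelity_features, metric_names=metric_names,
            state_dict=state_dict, **kwargs,
        )


    model = BotorchModel(
        model_constructor=my_model_constructor,
        acqf_optimizer=scipy_optimizer,  # the documented default optimizer
        refit_on_cv=True,                # refit the model in each CV fold
        warm_start_refitting=True,       # reuse previous parameters when refitting
    )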
Xs: List[Tensor] = None

Ys: List[Tensor] = None

Yvars: List[Tensor] = None
best_point(bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, model_gen_options=None, target_fidelities=None)
Identify the current best point, satisfying the constraints in the same format as to gen.
Return None if no such point can be identified.

Parameters:
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- target_fidelities (Optional[Dict[int, float]]) – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Return type: Optional[Tensor]

Returns: d-tensor of the best point.
cross_validate(Xs_train, Ys_train, Yvars_train, X_test)
Do cross validation with the given training and test sets.
Training set is given in the same format as to fit. Test set is given in the same format as to predict.

Parameters:
- Xs_train (List[Tensor]) – A list of m (k_i x d) feature tensors X. Number of rows k_i can vary from i=1,...,m.
- Ys_train (List[Tensor]) – The corresponding list of m (k_i x 1) outcome tensors Y, for each outcome.
- Yvars_train (List[Tensor]) – The variances of each entry in Ys, same shape.
- X_test (Tensor) – (j x d) tensor of the j points at which to make predictions.

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- (j x m) tensor of outcome predictions at X.
- (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
device: Optional[torch.device] = None

dtype: Optional[torch.dtype] = None
fit(Xs, Ys, Yvars, bounds, task_features, feature_names, metric_names, fidelity_features, candidate_metadata=None)
Fit model to m outcomes.

Parameters:
- Xs (List[Tensor]) – A list of m (k_i x d) feature tensors X. Number of rows k_i can vary from i=1,...,m.
- Ys (List[Tensor]) – The corresponding list of m (k_i x 1) outcome tensors Y, for each outcome.
- Yvars (List[Tensor]) – The variances of each entry in Ys, same shape.
- bounds (List[Tuple[float, float]]) – A list of d (lower, upper) tuples for each column of X.
- task_features (List[int]) – Columns of X that take integer values and should be treated as task parameters.
- fidelity_features (List[int]) – Columns of X that should be treated as fidelity parameters.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
gen(n, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, pending_observations=None, model_gen_options=None, rounding_func=None, target_fidelities=None)
Generate new candidates.

Parameters:
- n (int) – Number of candidates to generate.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- pending_observations (Optional[List[Tensor]]) – A list of m (k_i x d) feature tensors X for m outcomes and k_i pending observations for outcome i.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- rounding_func (Optional[Callable[[Tensor], Tensor]]) – A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).
- target_fidelities (Optional[Dict[int, float]]) – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Return type: Tuple[Tensor, Tensor, Dict[str, Any], Optional[List[Optional[Dict[str, Any]]]]]

Returns: 4-element tuple containing
- (n x d) tensor of generated points.
- n-tensor of weights for each point.
- Generation metadata.
- Dictionary of model-specific metadata for the given generation candidates.
predict(X)
Predict

Parameters:
- X (Tensor) – (j x d) tensor of the j points at which to make predictions.

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- (j x m) tensor of outcome predictions at X.
- (j x m x m) tensor of predictive covariances at X. cov[j, m1, m2] is Cov[m1@j, m2@j].
update(Xs, Ys, Yvars, candidate_metadata=None)
Update the model.
Updating the model requires both existing and additional data. The data passed into this method will become the new training data.

Parameters:
- Xs (List[Tensor]) – Existing + additional data for the model, in the same format as for fit.
- Ys (List[Tensor]) – Existing + additional data for the model, in the same format as for fit.
- Yvars (List[Tensor]) – Existing + additional data for the model, in the same format as for fit.
- candidate_metadata (Optional[List[List[Optional[Dict[str, Any]]]]]) – Model-produced metadata for candidates, in the order corresponding to the Xs.

Return type: None
ax.models.torch.botorch_defaults module

ax.models.torch.botorch_defaults.get_NEI(model, objective_weights, outcome_constraints=None, X_observed=None, X_pending=None, **kwargs)
Instantiates a qNoisyExpectedImprovement acquisition function.

Parameters:
- model – A fitted model.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models.)
- X_observed (Optional[Tensor]) – A tensor containing points observed for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).
- X_pending (Optional[Tensor]) – A tensor containing points whose evaluation is pending (i.e. that have been submitted for evaluation), present for all objective outcomes and outcomes that appear in the outcome constraints (if there are any).
- mc_samples – The number of MC samples to use (default: 512).
- qmc – If True, use qMC instead of MC (default: True).
- prune_baseline – If True, prune the baseline points for NEI (default: True).

Returns: The instantiated acquisition function.

Return type: qNoisyExpectedImprovement
ax.models.torch.botorch_defaults.get_and_fit_model(Xs, Ys, Yvars, task_features, fidelity_features, metric_names, state_dict=None, refit_model=True, **kwargs)
Instantiates and fits a BoTorch ModelListGP using the given data.

Parameters:
- Xs (List[Tensor]) – List of X data, one tensor per outcome.
- Ys (List[Tensor]) – List of Y data, one tensor per outcome.
- Yvars (List[Tensor]) – List of observed variances of Ys.
- task_features (List[int]) – List of columns of X that are tasks.
- fidelity_features (List[int]) – List of columns of X that are fidelity parameters.
- metric_names (List[str]) – Names of each outcome Y in Ys.
- state_dict (Optional[Dict[str, Tensor]]) – If provided, will set model parameters to this state dictionary. Otherwise, will fit the model.
- refit_model (bool) – Flag for refitting the model.

Return type: GPyTorchModel

Returns: A fitted GPyTorchModel.
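Example: a minimal sketch of the default pipeline, fitting a ModelListGP to one outcome and then building a qNoisyExpectedImprovement acquisition function from it; the data are illustrative.

    import torch
    from ax.models.torch.botorch_defaults import get_and_fit_model, get_NEI

    X = torch.rand(10, 2, dtype=torch.double)   # 10 observed points, d=2
    Y = X.sum(dim=-1, keepdim=True)             # (10 x 1) outcome tensor
    Yvar = torch.full_like(Y, 0.01)             # observation noise variances

    gp = get_and_fit_model(
        Xs=[X], Ys=[Y], Yvars=[Yvar],
        task_features=[], fidelity_features=[], metric_names=["f"],
    )
    acqf = get_NEI(
        model=gp,
        objective_weights=torch.tensor([1.0], dtype=torch.double),
        X_observed=X,
    )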
ax.models.torch.botorch_defaults.recommend_best_observed_point(model, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, model_gen_options=None, target_fidelities=None)
A wrapper around ax.models.model_utils.best_observed_point for TorchModel that recommends a best point from previously observed points, using either a "max_utility" or "feasible_threshold" strategy.

Parameters:
- model (TorchModel) – A TorchModel.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- target_fidelities (Optional[Dict[int, float]]) – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Return type: Optional[Tensor]

Returns: A d-array of the best point, or None if no feasible point was observed.
ax.models.torch.botorch_defaults.recommend_best_out_of_sample_point(model, bounds, objective_weights, outcome_constraints=None, linear_constraints=None, fixed_features=None, model_gen_options=None, target_fidelities=None)
Identify the current best point by optimizing the posterior mean of the model. This is "out-of-sample" because it considers un-observed designs as well.
Return None if no such point can be identified.

Parameters:
- model (TorchModel) – A TorchModel.
- bounds (List[Tuple[float, float]]) – A list of (lower, upper) tuples for each column of X.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b.
- linear_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k linear constraints on d-dimensional x, A is (k x d) and b is (k x 1) such that A x <= b.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value in the best point.
- model_gen_options (Optional[Dict[str, Union[int, float, str, AcquisitionFunction, Dict[str, Any]]]]) – A config dictionary that can contain model-specific options.
- target_fidelities (Optional[Dict[int, float]]) – A map {feature_index: value} of fidelity feature column indices to their respective target fidelities. Used for multi-fidelity optimization.

Return type: Optional[Tensor]

Returns: A d-array of the best point, or None if no feasible point exists.
ax.models.torch.botorch_defaults.scipy_optimizer(acq_function, bounds, n, inequality_constraints=None, fixed_features=None, rounding_func=None, **kwargs)
Optimizer using scipy's minimize module on a numpy-adaptor.

Parameters:
- acq_function (AcquisitionFunction) – A BoTorch AcquisitionFunction.
- bounds (Tensor) – A 2 x d-dim tensor, where bounds[0] (bounds[1]) are the lower (upper) bounds of the feasible hyperrectangle.
- n (int) – The number of candidates to generate.
- inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- rounding_func (Optional[Callable[[Tensor], Tensor]]) – A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- An n x d-dim tensor of generated candidates.
- In the case of joint optimization, a scalar tensor containing the joint acquisition value of the n points. In the case of sequential optimization, an n-dim tensor of conditional acquisition values, where the i-th element is the expected acquisition value conditional on having observed candidates 0, 1, ..., i-1.
ax.models.torch.botorch_defaults.scipy_optimizer_list(acq_function_list, bounds, inequality_constraints=None, fixed_features=None, rounding_func=None, **kwargs)
Sequential optimizer using scipy's minimize module on a numpy-adaptor.
The i-th acquisition in the sequence uses the i-th given acquisition function.

Parameters:
- acq_function_list (List[AcquisitionFunction]) – A list of BoTorch AcquisitionFunctions, optimized sequentially.
- bounds (Tensor) – A 2 x d-dim tensor, where bounds[0] (bounds[1]) are the lower (upper) bounds of the feasible hyperrectangle.
- n – The number of candidates to generate.
- inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
- fixed_features (Optional[Dict[int, float]]) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
- rounding_func (Optional[Callable[[Tensor], Tensor]]) – A function that rounds an optimization result appropriately (i.e., according to round-trip transformations).

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- An n x d-dim tensor of generated candidates.
- An n-dim tensor of conditional acquisition values, where the i-th element is the expected acquisition value conditional on having observed candidates 0, 1, ..., i-1.
ax.models.torch.utils module

ax.models.torch.utils.get_botorch_objective(model, objective_weights, use_scalarized_objective=True, outcome_constraints=None, X_observed=None)
Constructs a BoTorch AcquisitionObjective object.

Parameters:
- model (Model) – A BoTorch Model.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- use_scalarized_objective (bool) – A boolean parameter that defaults to True, specifying whether ScalarizedObjective should be used. NOTE: when using outcome_constraints, use_scalarized_objective will be ignored.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models.)
- X_observed (Optional[Tensor]) – Observed points that are feasible and appear in the objective or the constraints. None if there are no such points.

Returns: A BoTorch AcquisitionObjective object; one of ScalarizedObjective, LinearMCObjective, or ConstrainedMCObjective.
ax.models.torch.utils.get_out_of_sample_best_point_acqf(model, Xs, X_observed, objective_weights, mc_samples=512, fixed_features=None, fidelity_features=None, target_fidelities=None, outcome_constraints=None, seed_inner=None, qmc=True, **kwargs)
Picks an appropriate acquisition function to find the best out-of-sample point (i.e., the best point as predicted by the given surrogate model) and instantiates it.
NOTE: Typically the appropriate function is the posterior mean, but it can differ to account for fidelities etc.
ax.models.torch.utils.is_noiseless(model)
Check if a given (single-task) BoTorch model is noiseless.

Return type: bool
ax.models.torch.utils.normalize_indices(indices, d)
Normalize a list of indices to ensure that they are positive.
ax.models.torch.utils.pick_best_out_of_sample_point_acqf_class(Xs, outcome_constraints=None, mc_samples=512, qmc=True, seed_inner=None)
ax.models.torch.utils.predict_from_model(model, X)
Predicts outcomes given a model and input tensor.

Parameters:
- model (Model) – A BoTorch Model.
- X (Tensor) – A n x d tensor of input parameters.

Return type: Tuple[Tensor, Tensor]

Returns: 2-element tuple containing
- The predicted posterior mean as an n x o-dim tensor.
- The predicted posterior covariance as an n x o x o-dim tensor.
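Example: a minimal sketch, assuming gp is an already-fitted BoTorch model (e.g. the output of get_and_fit_model above).

    import torch
    from ax.models.torch.utils import predict_from_model

    X_test = torch.rand(5, 2, dtype=torch.double)
    mean, cov = predict_from_model(model=gp, X=X_test)
    # mean: (5 x o) posterior means; cov: (5 x o x o) posterior covariances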
ax.models.torch.utils.randomize_objective_weights(objective_weights, **acquisition_function_kwargs)
Generate a random weighting based on acquisition function settings.

Parameters:
- objective_weights (Tensor) – Base weights to multiply by random values.
- **acquisition_function_kwargs – Kwargs containing weight generation algorithm options.

Return type: Tensor

Returns: A tensor of objective weights, randomized according to the given acquisition function settings.
ax.models.torch.utils.subset_model(model, objective_weights, outcome_constraints=None)
Subset a BoTorch model to the outputs used in the optimization.

Parameters:
- model (Model) – A BoTorch Model. If the model does not implement the subset_outputs method, this function is a null-op and returns the input arguments.
- objective_weights (Tensor) – The objective is to maximize a weighted sum of the columns of f(x). These are the weights.
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is (k x m) and b is (k x 1) such that A f(x) <= b. (Not used by single task models.)

Returns: A three-tuple of model, objective_weights, and outcome_constraints, all subset to only those outputs that appear in either the objective weights or the outcome constraints.