ax.modelbridge¶
Generation Strategy, Registry, and Factory¶
Generation Strategy¶
- class ax.modelbridge.generation_strategy.GenerationStrategy(steps: list[GenerationStep] | None = None, name: str | None = None, nodes: list[GenerationNode] | None = None)[source]¶
Bases:
GenerationStrategyInterface
GenerationStrategy describes which model should be used to generate new points for which trials, enabling and automating the use of different models throughout the optimization process. For instance, it allows using one model for the initialization trials and another for all subsequent trials. In the general case, this makes it possible to automate the use of an arbitrary number of models to generate an arbitrary number of trials, as described in the trials_per_model argument.
- Parameters:
nodes – A list of GenerationNode. Each GenerationNode in the list represents a single node in a GenerationStrategy which, when composed of GenerationNodes, can be conceptualized as a graph rather than a linear list. The TransitionCriterion defined on each GenerationNode represent the edges of the GenerationStrategy graph. GenerationNodes are more flexible than GenerationSteps, and new GenerationStrategies should use nodes. Notably, exactly one of nodes and steps must be provided.
steps – A list of GenerationStep describing steps of this strategy.
name – An optional name for this generation strategy. If not specified, the strategy’s name will be the names of its nodes’ models joined with ‘+’.
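For orientation, a minimal construction sketch using the steps API; the trial counts here are illustrative, not defaults:

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization for the first 5 trials...
        GenerationStep(model=Models.SOBOL, num_trials=5),
        # ...then model-based candidates for all subsequent trials.
        GenerationStep(model=Models.BOTORCH_MODULAR, num_trials=-1),
    ],
)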
- clone_reset() GenerationStrategy [source]¶
Copy this generation strategy without its state.
- current_generator_run_limit() tuple[int, bool] [source]¶
First checks whether the generation strategy can move to the next node; this is safe, as the next call to gen will simply pick up from there. Then determines how many generator runs this generation strategy can produce right now, assuming each of them becomes its own trial, and whether the optimization is complete.
- Returns: A two-item tuple of:
the number of generator runs that can currently be produced, with -1 meaning unlimited generator runs;
whether optimization is complete, in which case the generation strategy cannot generate any more generator runs at all.
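A small usage sketch of interpreting the returned tuple, assuming a strategy gs as constructed above:

num_runs, optimization_complete = gs.current_generator_run_limit()
if optimization_complete:
    print("The strategy cannot produce any more generator runs.")
elif num_runs == -1:
    print("Unlimited generator runs are currently available.")
else:
    print(f"Up to {num_runs} generator runs can be produced right now.")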
- property current_node: GenerationNode¶
Current generation node.
- property current_step: GenerationStep¶
Current generation step.
- property current_step_index: int¶
Returns the index of the current generation step. This attribute is replaced by node_name in newer GenerationStrategies but surfaced here for backward compatibility.
- property experiment: Experiment¶
Experiment, currently set on this generation strategy.
- gen(experiment: Experiment, data: Data | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, n: int = 1, fixed_features: ObservationFeatures | None = None) GeneratorRun [source]¶
Produce the next points in the experiment. Additional kwargs passed to this method are propagated directly to the underlying model’s gen, along with the model_gen_kwargs set on the current generation node.
NOTE: Each generator run returned from this function must become a single trial on the experiment, to comply with assumptions made in the generation strategy; see the sketch after the parameter list below. Do not split one generator run produced by the generation strategy into multiple trials (it is allowed to never turn a generator run into a trial).
- Parameters:
experiment – Experiment, for which the generation strategy is producing a new generator run in the course of gen, and to which that generator run will be added as trial(s). Information stored on the experiment (e.g., trial statuses) is used to determine which model will be used to produce the generator run returned from this method.
data – Optional data to be passed to the underlying model’s gen, which is called within this method and actually produces the resulting generator run. By default, data is all data on the experiment.
n – Integer representing how many arms should be in the generator run produced by this method. NOTE: Some underlying models may ignore n and produce a model-determined number of arms. In that case this method may also output a generator run with a number of arms that differs from n.
pending_observations – A map from metric name to pending observations for that metric, used by some models to avoid resuggesting points that are currently being evaluated.
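A hedged sketch of the one-generator-run-per-trial contract described in the NOTE above; it assumes an experiment with a configured runner:

# Each generator run from gs.gen(...) becomes exactly one trial.
for _ in range(3):
    generator_run = gs.gen(experiment=experiment, n=1)
    trial = experiment.new_trial(generator_run=generator_run)
    trial.run()  # assumes a runner is configured on the experiment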
- gen_for_multiple_trials_with_multiple_models(experiment: Experiment, data: Data | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, n: int | None = None, fixed_features: ObservationFeatures | None = None, num_trials: int = 1, arms_per_node: dict[str, int] | None = None) list[list[GeneratorRun]] [source]¶
Produce GeneratorRuns for multiple trials at once with the possibility of using multiple models per trial, getting multiple GeneratorRuns per trial.
- Parameters:
experiment – Experiment, for which the generation strategy is producing a new generator run in the course of gen, and to which that generator run will be added as trial(s). Information stored on the experiment (e.g., trial statuses) is used to determine which model will be used to produce the generator run returned from this method.
data – Optional data to be passed to the underlying model’s gen, which is called within this method and actually produces the resulting generator run. By default, data is all data on the experiment.
pending_observations – A map from metric name to pending observations for that metric, used by some models to avoid resuggesting points that are currently being evaluated.
n – Integer representing how many total arms should be in the generator runs produced by this method. NOTE: Some underlying models may ignore n and produce a model-determined number of arms. In that case this method may also output generator runs with a number of arms that differs from n.
fixed_features – An optional set of ObservationFeatures that will be passed down to the underlying models. NOTE: If provided, this will override any algorithmically determined fixed features, so it is important to specify all necessary fixed features.
num_trials – Number of trials to generate generator runs for in this call. If not provided, defaults to 1.
arms_per_node – An optional map from node name to the number of arms to generate from that node. If not provided, defaults to the number of arms specified in the node’s InputConstructors, or to n if no InputConstructors are defined on the node. Either n or arms_per_node may be provided, but not both; this is an advanced argument that should only be used by advanced users.
- Returns:
A list of lists of generator runs. Each outer list represents a trial being suggested, and each inner list contains the generator runs for that trial.
- property is_node_based: bool¶
Whether this strategy consists of GenerationNodes only. This is useful for determining initialization properties and other logic.
- property last_generator_run: GeneratorRun | None¶
Latest generator run produced by this generation strategy. Returns None if no generator runs have been produced yet.
- property model: ModelBridge | None¶
Current model in this strategy. Returns None if no model has been set yet (i.e., if no generator runs have been produced from this GS).
- property model_transitions: list[int]¶
[DEPRECATED] List of trial indices at which a transition from one model to another occurred.
- property name: str¶
Name of this generation strategy. Defaults to a combination of the model names provided in the generation steps, set at the time of GenerationStrategy creation.
- property nodes_dict: dict[str, GenerationNode]¶
Returns a dictionary mapping node names to nodes.
- property optimization_complete: bool¶
Checks whether all nodes are completed in the generation strategy.
- property trials_as_df: DataFrame | None¶
Puts information on individual trials into a data frame for easy viewing.
For example, for a GenerationStrategy composed of GenerationSteps:
Gen. Step | Models | Trial Index | Trial Status | Arm Parameterizations
[0] | [Sobol] | 0 | RUNNING | {“0_0”: {“x”: 9.17…}}
Generation Node¶
- class ax.modelbridge.generation_node.GenerationNode(node_name: str, model_specs: list[ModelSpec], best_model_selector: BestModelSelector | None = None, should_deduplicate: bool = False, transition_criteria: Sequence[TransitionCriterion] | None = None, input_constructors: None | dict[InputConstructorPurpose, NodeInputConstructors] = None, previous_node_name: str | None = None, trial_type: str | None = None, should_skip: bool = False)[source]¶
Bases:
SerializationMixin, SortableBase
Base class for GenerationNode, capable of fitting one or more model specs under the hood and generating candidates from them.
- Parameters:
node_name – A unique name for the GenerationNode. Used for storage purposes.
model_specs – A list of ModelSpecs to be selected from for generation in this GenerationNode.
best_model_selector – A BestModelSelector used to select the ModelSpec to generate from in a GenerationNode with multiple ModelSpecs.
should_deduplicate – Whether to deduplicate the parameters of proposed arms against those of previous arms via rejection sampling. If this is True, the GenerationStrategy will discard generator runs produced by a GenerationNode with should_deduplicate=True if they contain arms already present on the experiment, and replace them with new generator runs. If no generator run with entirely unique arms can be produced in 5 attempts, a GenerationStrategyRepeatedPoints error will be raised, as we assume that the optimization has converged when the model can no longer suggest unique arms.
transition_criteria – List of TransitionCriterion, each of which describes a condition that must be met before completing a GenerationNode. All is_met must evaluate to True for the GenerationStrategy to move on to the next GenerationNode.
input_constructors – A dictionary mapping an input constructor purpose enum to the input constructor enum. Each input constructor maps to a method which encodes the logic for determining dynamic inputs to the GenerationNode.
trial_type – Specifies the type of trial to generate; currently limited to Keys.SHORT_RUN or Keys.LONG_RUN. If not specified, defaults to None and is not used during generation.
previous_node_name – The previous GenerationNode name in the GenerationStrategy, if any. Initialized to None for all nodes, and set during the transition from one GenerationNode to the next. Can be overwritten if multiple transitions occur between nodes, and will always store the most recent previous GenerationNode name.
should_skip – Whether to skip this node during generation time. Defaults to False, and can currently only be set to True via NodeInputConstructors.
Note for developers: by “model” here we really mean an Ax ModelBridge object, which contains an Ax Model under the hood. We call it “model” here to simplify and focus on explaining the logic of GenerationStep and GenerationStrategy.
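As a rough illustration of the node-based API (a sketch, not a definitive recipe: the node names and the MaxTrials criterion from ax.modelbridge.transition_criterion are illustrative choices):

from ax.modelbridge.generation_node import GenerationNode
from ax.modelbridge.model_spec import ModelSpec
from ax.modelbridge.registry import Models
from ax.modelbridge.transition_criterion import MaxTrials

sobol_node = GenerationNode(
    node_name="sobol_node",
    model_specs=[ModelSpec(model_enum=Models.SOBOL)],
    transition_criteria=[
        # Edge of the strategy graph: move to the (hypothetical)
        # "botorch_node" once this node has generated 5 trials.
        MaxTrials(threshold=5, transition_to="botorch_node"),
    ],
)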
- property experiment: Experiment¶
Returns the experiment associated with this GenerationStrategy
- fit(experiment: Experiment, data: Data, search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, **kwargs: Any) None [source]¶
Fits the specified models to the given experiment + data using the model kwargs set on each corresponding model spec and the kwargs passed to this method.
- Parameters:
experiment – The experiment to fit the model to.
data – The experiment data used to fit the model.
search_space – An optional overwrite for the experiment search space.
optimization_config – An optional overwrite for the experiment optimization config.
kwargs – Additional keyword arguments to pass to the model’s fit method. NOTE: Local kwargs take precedence over the ones stored in ModelSpec.model_kwargs.
- gen(n: int | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, max_gen_draws_for_deduplication: int = 5, arms_by_signature_for_deduplication: dict[str, Arm] | None = None, **model_gen_kwargs: Any) GeneratorRun [source]¶
This method generates candidates using self._gen and handles deduplication of generated candidates if self.should_deduplicate=True.
NOTE: Models must have been fit prior to calling gen. NOTE: Some underlying models may ignore the n argument and produce a model-determined number of arms. In that case this method will also output a generator run with a number of arms that may differ from n.
- Parameters:
n – Optional integer representing how many arms should be in the generator run produced by this method. When this is None, n will be determined by the ModelSpec that we are generating from.
pending_observations – A map from metric name to pending observations for that metric, used by some models to avoid resuggesting points that are currently being evaluated.
max_gen_draws_for_deduplication – Maximum number of attempts for generating new candidates without duplicates. If non-duplicate candidates are not generated within these attempts, a GenerationStrategyRepeatedPoints exception will be raised.
arms_by_signature_for_deduplication – A dictionary mapping arm signatures to arms, to be used for deduplicating newly generated arms.
model_gen_kwargs – Keyword arguments, passed through to ModelSpec.gen; these override any pre-specified in ModelSpec.model_gen_kwargs.
- Returns:
A GeneratorRun containing the newly generated candidates.
- property generation_strategy: GenerationStrategy¶
Returns a backpointer to the GenerationStrategy, useful for obtaining the experiment associated with this GenerationStrategy
- generator_run_limit(raise_generation_errors: bool = False) int [source]¶
How many generator runs can this generation strategy generate right now, assuming each one of them becomes its own trial. Only considers transition_criteria that are TrialBasedCriterion.
- Returns:
The number of generator runs that can currently be produced, with -1 meaning unlimited generator runs.
- property input_constructors: dict[InputConstructorPurpose, NodeInputConstructors]¶
Returns the input constructors that will be used to determine any dynamic inputs to this GenerationNode.
- property is_completed: bool¶
Returns True if this GenerationNode is complete and should transition to the next node.
- property model_spec_to_gen_from: ModelSpec¶
Returns the cached _model_spec_to_gen_from or gets it from _pick_fitted_model_to_gen_from and then caches and returns it
- property model_to_gen_from_name: str | None¶
Returns the name of the model that will be used for gen, if there is one. Otherwise, returns None.
- property node_that_generated_last_gr: str | None¶
Returns the name of the node that generated the last generator run.
- Returns:
The name of the node that generated the last generator run.
- Return type:
str | None
- property previous_node: GenerationNode | None¶
Returns the previous GenerationNode, if any.
- should_transition_to_next_node(raise_data_required_error: bool = True) tuple[bool, str] [source]¶
Checks whether we should transition to the next node based on this node’s TransitionCriterion.
Important: This method relies on the transition_criterion of this node being listed in order of importance. For example, a fallback transition should come after the primary transition in the transition criterion list.
- Parameters:
raise_data_required_error – Whether to raise DataRequiredError in the case detailed above. Not raising the error is useful if just looking to check how many generator runs (to be made into trials) can be produced, but not actually producing them yet.
- Returns:
Whether we should transition to the next node, and the name of the node to generate from (either the current or the next node).
- Return type:
tuple[bool, str]
- property transition_criteria: Sequence[TransitionCriterion]¶
Returns the sequence of TransitionCriteria that will be used to determine if this GenerationNode is complete and should transition to the next node.
- property transition_edges: dict[str, list[TransitionCriterion]]¶
Returns a dictionary mapping the next GenerationNode to the TransitionCriteria that define the transition to that node. Ex: if the transition from the current node to node x is defined by the IsSingleObjective and MinTrials criteria, then the return would be {‘x’: [IsSingleObjective, MinTrials]}.
- Returns:
A dictionary mapping the next GenerationNode to the TransitionCriterion that are associated with it.
- Return type:
Dict[str, List[TransitionCriterion]]
- class ax.modelbridge.generation_node.GenerationStep(model: ModelRegistryBase | Callable[[...], ModelBridge], num_trials: int, model_kwargs: dict[str, Any] | None = None, model_gen_kwargs: dict[str, Any] | None = None, completion_criteria: Sequence[TransitionCriterion] | None = None, min_trials_observed: int = 0, max_parallelism: int | None = None, enforce_num_trials: bool = True, should_deduplicate: bool = False, model_name: str | None = None, use_update: bool = False, index: int = -1)[source]¶
Bases:
GenerationNode, SortableBase
One step in the generation strategy, corresponds to a single model. Describes the model, how many trials will be generated with this model, what minimum number of observations is required to proceed to the next model, etc.
NOTE: The model can be specified either from the model registry (ax.modelbridge.registry.Models) or using a callable model constructor. Only models from the registry can be saved, and thus optimization can only be resumed if interrupted when using models from the registry.
- Parameters:
model – A member of Models enum or a callable returning an instance of ModelBridge with an instantiated underlying Model. Refer to ax/modelbridge/factory.py for examples of such callables.
num_trials – How many trials to generate with the model from this step. If set to -1, trials will continue to be generated from this model as long as generation_strategy.gen is called (available only for the last of the generation steps).
min_trials_observed – How many trials must be completed before the generation strategy can proceed to the next step. Defaults to 0. If num_trials of a given step have been generated but min_trials_observed have not been completed, a call to generation_strategy.gen will fail with a DataRequiredError.
max_parallelism – How many trials generated in the course of this step are allowed to be run (i.e. have trial.status of RUNNING) simultaneously. If max_parallelism trials from this step are already running, a call to generation_strategy.gen will fail with a MaxParallelismReachedException, indicating that more trials need to be completed before generating and running the next trials.
use_update – DEPRECATED.
enforce_num_trials – Whether to enforce that only num_trials are generated from the given step. If False and num_trials have been generated but min_trials_observed have not been completed, generation_strategy.gen will continue generating trials from the current step, exceeding num_trials for it. Allows avoiding a DataRequiredError, but delays proceeding to the next generation step.
model_kwargs – Dictionary of kwargs to pass into the model constructor on instantiation. E.g. if model is Models.SOBOL, kwargs will be applied as Models.SOBOL(**model_kwargs); if model is get_sobol, get_sobol( **model_kwargs). NOTE: if generation strategy is interrupted and resumed from a stored snapshot and its last used model has state saved on its generator runs, model_kwargs is updated with the state dict of the model, retrieved from the last generator run of this generation strategy.
model_gen_kwargs – Each call to generation_strategy.gen performs a call to the step’s model’s gen under the hood; model_gen_kwargs will be passed to the model’s gen like so: model.gen(**model_gen_kwargs).
completion_criteria – List of TransitionCriterion. All is_met must evaluate to True for the GenerationStrategy to move on to the next step.
index – Index of this generation step, for use internally in Generation Strategy. Do not assign as it will be reassigned when instantiating GenerationStrategy with a list of its steps.
should_deduplicate – Whether to deduplicate the parameters of proposed arms against those of previous arms via rejection sampling. If this is True, the generation strategy will discard generator runs produced from the generation step that has should_deduplicate=True if they contain arms already present on the experiment and replace them with new generator runs. If no generator run with entirely unique arms could be produced in 5 attempts, a GenerationStrategyRepeatedPoints error will be raised, as we assume that the optimization converged when the model can no longer suggest unique arms.
model_name – Optional name of the model. If not specified, defaults to the model key of the model spec.
Note for developers: by “model” here we really mean an Ax ModelBridge object, which contains an Ax Model under the hood. We call it “model” here to simplify and focus on explaining the logic of GenerationStep and GenerationStrategy.
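A sketch of how the knobs above interact (the counts are illustrative): the Sobol step must observe 3 of its 5 trials before the strategy advances, and the BoTorch step caps parallelism at 3:

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

gs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.SOBOL,
            num_trials=5,
            min_trials_observed=3,  # require data from 3 trials before advancing
            enforce_num_trials=True,
        ),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,      # generate from this step indefinitely
            max_parallelism=3,  # at most 3 RUNNING trials from this step
        ),
    ],
)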
- gen(n: int | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, max_gen_draws_for_deduplication: int = 5, arms_by_signature_for_deduplication: dict[str, Arm] | None = None, **model_gen_kwargs: Any) GeneratorRun [source]¶
This method generates candidates using self._gen and handles deduplication of generated candidates if self.should_deduplicate=True.
NOTE: Models must have been fit prior to calling gen. NOTE: Some underlying models may ignore the n argument and produce a model-determined number of arms. In that case this method will also output a generator run with a number of arms that may differ from n.
- Parameters:
n – Optional integer representing how many arms should be in the generator run produced by this method. When this is None, n will be determined by the ModelSpec that we are generating from.
pending_observations – A map from metric name to pending observations for that metric, used by some models to avoid resuggesting points that are currently being evaluated.
max_gen_draws_for_deduplication – Maximum number of attempts for generating new candidates without duplicates. If non-duplicate candidates are not generated within these attempts, a GenerationStrategyRepeatedPoints exception will be raised.
arms_by_signature_for_deduplication – A dictionary mapping arm signatures to arms, to be used for deduplicating newly generated arms.
model_gen_kwargs – Keyword arguments, passed through to ModelSpec.gen; these override any pre-specified in ModelSpec.model_gen_kwargs.
- Returns:
A GeneratorRun containing the newly generated candidates.
External Generation Node¶
- class ax.modelbridge.external_generation_node.ExternalGenerationNode(node_name: str, should_deduplicate: bool = True, transition_criteria: Sequence[TransitionCriterion] | None = None)[source]¶
Bases:
GenerationNode, ABC
A generation node intended to be used with non-Ax methods for candidate generation.
To leverage external methods for candidate generation, the user must create a subclass that implements the update_generator_state and get_next_candidate methods. This can then be provided as a node to a GenerationStrategy, either standalone or as part of a larger generation strategy with other generation nodes, e.g., with a Sobol node for initialization.
Example:
>>> class MyExternalGenerationNode(ExternalGenerationNode):
>>>     ...
>>> generation_strategy = GenerationStrategy(
>>>     nodes=[MyExternalGenerationNode(...)]
>>> )
>>> ax_client = AxClient(generation_strategy=generation_strategy)
>>> ax_client.create_experiment(...)
>>> ax_client.get_next_trial()  # Generates trials using the new generation node.
- fit(experiment: Experiment, data: Data, search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, **kwargs: Any) None [source]¶
A method used to initialize or update the experiment state / data on any surrogate models or predictors used during candidate generation.
This method records the time spent during the update and defers to update_generator_state for the actual work.
- Parameters:
experiment – The experiment to fit the surrogate model / predictor to.
data – The experiment data used to fit the model.
search_space – UNSUPPORTED. An optional override for the experiment search space.
optimization_config – UNSUPPORTED. An optional override for the experiment optimization config.
kwargs – UNSUPPORTED. Additional keyword arguments for model fitting.
- abstract get_next_candidate(pending_parameters: list[dict[str, None | str | bool | float | int]]) dict[str, None | str | bool | float | int] [source]¶
Get the parameters for the next candidate configuration to evaluate.
- Parameters:
pending_parameters – A list of parameters of the candidates pending evaluation. This is often used to avoid generating duplicate candidates.
- Returns:
A dictionary mapping parameter names to parameter values for the next candidate suggested by the method.
- property model_spec_to_gen_from: None¶
Returns the cached _model_spec_to_gen_from or gets it from _pick_fitted_model_to_gen_from and then caches and returns it
- abstract update_generator_state(experiment: Experiment, data: Data) None [source]¶
A method used to update the state of the generator. This includes any models, predictors, or other custom state used by the generation node. This method will be called with the up-to-date experiment and data before get_next_candidate is called to generate the next trial(s). Note that get_next_candidate may be called multiple times (to generate multiple candidates) after a call to update_generator_state.
- Parameters:
experiment – The Experiment object representing the current state of the experiment. The key properties include trials, search_space, and optimization_config. The data is provided as a separate arg.
data – The data / metrics collected on the experiment so far.
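To make this contract concrete, a minimal hypothetical subclass implementing random search; it assumes, for brevity, a search space composed only of RangeParameters:

import random

from ax.core.data import Data
from ax.core.experiment import Experiment
from ax.core.parameter import RangeParameter
from ax.modelbridge.external_generation_node import ExternalGenerationNode


class RandomSearchNode(ExternalGenerationNode):
    """Hypothetical node that suggests uniformly random candidates."""

    def __init__(self) -> None:
        super().__init__(node_name="random_search")
        self._parameters = None

    def update_generator_state(self, experiment: Experiment, data: Data) -> None:
        # A real generator would refit its surrogate / predictor here;
        # this sketch only records the current search space parameters.
        self._parameters = experiment.search_space.parameters

    def get_next_candidate(self, pending_parameters):
        # Sample each parameter uniformly; ignores pending_parameters,
        # which a smarter generator could use to avoid duplicates.
        candidate = {}
        for name, parameter in self._parameters.items():
            assert isinstance(parameter, RangeParameter)  # sketch-only assumption
            candidate[name] = random.uniform(parameter.lower, parameter.upper)
        return candidate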
Transition Criterion¶
See ax.modelbridge.transition_criterion.
Generation Node Input Constructors¶
See ax.modelbridge.generation_node_input_constructors.
Registry¶
Module containing a registry of standard models (and generators, samplers etc.) such as Sobol generator, GP+EI, Thompson sampler, etc.
Use of the Models enum allows for serialization and reinstantiation of models and generation strategies from the generator runs they produced. To reinstantiate a model from a generator run, use the get_model_from_generator_run utility from this module.
- class ax.modelbridge.registry.ModelRegistryBase(value)[source]¶
Bases:
Enum
Base enum that provides instrumentation of __call__ on enum values, for enums that link their values to ModelSetups, like Models.
- property model_bridge_class: type[ModelBridge]¶
Type of ModelBridge used for the given model+bridge setup.
- view_defaults() tuple[dict[str, Any], dict[str, Any]] [source]¶
Obtains the default keyword arguments for the model and the modelbridge specified through the Models enum, for ease of use in notebook environment, since models and bridges cannot be inspected directly through the enum.
- Returns:
A tuple of default keyword arguments for the model and the model bridge.
- class ax.modelbridge.registry.ModelSetup(bridge_class: type[ModelBridge], model_class: type[Model], transforms: list[type[Transform]], default_model_kwargs: dict[str, Any] | None = None, standard_bridge_kwargs: dict[str, Any] | None = None, not_saved_model_kwargs: list[str] | None = None)[source]¶
Bases:
NamedTuple
A model setup defines a coupled combination of a model, a model bridge, standard set of transforms, and standard model bridge keyword arguments. This coupled combination yields a given standard modeling strategy in Ax, such as BoTorch GP+EI, a Thompson sampler, or a Sobol quasirandom generator.
- bridge_class: type[ModelBridge]¶
Alias for field number 0
- class ax.modelbridge.registry.Models(value)[source]¶
Bases:
ModelRegistryBase
Registry of available models.
Uses MODEL_KEY_TO_MODEL_SETUP to retrieve settings for model and model bridge, by the key stored in the enum value.
To instantiate a model in this enum, simply call an enum member like so: Models.SOBOL(search_space=search_space) or Models.BOTORCH(experiment=experiment, data=data). Keyword arguments specified to the call will be passed into the model or the model bridge constructors according to their keyword.
For instance, Models.SOBOL(search_space=search_space, scramble=False) will instantiate a RandomModelBridge(search_space=search_space) with a SobolGenerator(scramble=False) underlying model.
NOTE: If you deprecate a model, please add its replacement to ax.storage.json_store.decoder._DEPRECATED_MODEL_TO_REPLACEMENT to ensure backwards compatibility of the storage layer.
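A brief sketch of instantiating enum members as described above; experiment and data are assumed to already exist:

from ax.modelbridge.registry import Models

# Random generators need only a search space.
sobol = Models.SOBOL(search_space=experiment.search_space)
sobol_run = sobol.gen(n=5)

# Model-based generators additionally take the experiment and its data.
botorch = Models.BOTORCH_MODULAR(experiment=experiment, data=data)
botorch_run = botorch.gen(n=1)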
- BOTORCH_MODULAR = 'BoTorch'¶
- BO_MIXED = 'BO_MIXED'¶
- CONTEXT_SACBO = 'Contextual_SACBO'¶
- EMPIRICAL_BAYES_THOMPSON = 'EB'¶
- FACTORIAL = 'Factorial'¶
- LEGACY_BOTORCH = 'Legacy_GPEI'¶
- SAASBO = 'SAASBO'¶
- SAAS_MTGP = 'SAAS_MTGP'¶
- SOBOL = 'Sobol'¶
- ST_MTGP = 'ST_MTGP'¶
- THOMPSON = 'Thompson'¶
- UNIFORM = 'Uniform'¶
- ax.modelbridge.registry.get_model_from_generator_run(generator_run: GeneratorRun, experiment: Experiment, data: Data, models_enum: type[ModelRegistryBase], after_gen: bool = True) ModelBridge [source]¶
Reinstantiate a model from model key and kwargs stored on a given generator run, with the given experiment and the data to initialize the model with.
Note: requires that the model that was used to produce the generator run is part of the Models registry enum.
- Parameters:
generator_run – A GeneratorRun created by the model we are looking to reinstantiate.
experiment – The experiment for which the model is reinstantiated.
data – Data, with which to reinstantiate the model.
models_enum – Subclass of Models registry, from which to obtain the settings of the model. Useful only if the generator run was created via a model that could not be included into the main registry, but can still be represented as a ModelSetup and was added to a registry that extends Models.
after_gen – Whether to reinstantiate the model in the state it was in after it created this generator run, as opposed to before. Defaults to True, which is useful when reinstantiating the model to resume optimization rather than to recreate its state at the time of generation. To recreate the state at the time of generation, set to False.
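A hedged usage sketch, assuming the generator run was produced by a model from the Models registry and that trial 0 exists on the experiment:

from ax.modelbridge.registry import Models, get_model_from_generator_run

model = get_model_from_generator_run(
    generator_run=experiment.trials[0].generator_run,  # hypothetical trial 0
    experiment=experiment,
    data=experiment.fetch_data(),
    models_enum=Models,  # the registry the original model belongs to
)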
Factory¶
- ax.modelbridge.factory.DEFAULT_EHVI_BATCH_LIMIT = 5¶
Module containing functions that generate standard models, such as Sobol, GP+EI, etc.
Note: a special case here is a composite generator, which requires an additional GenerationStrategy and is able to delegate work to multiple models (for instance, to a random model to generate the first trial, and to an optimization model for subsequent trials).
- ax.modelbridge.factory.get_botorch(experiment: ~ax.core.experiment.Experiment, data: ~ax.core.data.Data, search_space: ~ax.core.search_space.SearchSpace | None = None, dtype: ~torch.dtype = torch.float64, device: ~torch.device = device(type='cpu'), transforms: list[type[~ax.modelbridge.transforms.base.Transform]] = [<class 'ax.modelbridge.transforms.fill_missing_parameters.FillMissingParameters'>, <class 'ax.modelbridge.transforms.remove_fixed.RemoveFixed'>, <class 'ax.modelbridge.transforms.choice_encode.OrderedChoiceToIntegerRange'>, <class 'ax.modelbridge.transforms.one_hot.OneHot'>, <class 'ax.modelbridge.transforms.int_to_float.IntToFloat'>, <class 'ax.modelbridge.transforms.log.Log'>, <class 'ax.modelbridge.transforms.logit.Logit'>, <class 'ax.modelbridge.transforms.unit_x.UnitX'>, <class 'ax.modelbridge.transforms.ivw.IVW'>, <class 'ax.modelbridge.transforms.derelativize.Derelativize'>, <class 'ax.modelbridge.transforms.standardize_y.StandardizeY'>], transform_configs: dict[str, dict[str, int | float | str | ~botorch.acquisition.acquisition.AcquisitionFunction | list[str] | dict[int, ~typing.Any] | dict[str, ~typing.Any] | ~ax.core.optimization_config.OptimizationConfig | ~ax.models.winsorization_config.WinsorizationConfig | None]] | None = None, model_constructor: ~collections.abc.Callable[[list[~torch.Tensor], list[~torch.Tensor], list[~torch.Tensor], list[int], list[int], list[str], dict[str, ~torch.Tensor] | None, ~typing.Any], ~botorch.models.model.Model] = <function get_and_fit_model>, model_predictor: ~collections.abc.Callable[[~botorch.models.model.Model, ~torch.Tensor, bool], tuple[~torch.Tensor, ~torch.Tensor]] = <function predict_from_model>, acqf_constructor: ~ax.models.torch.botorch_defaults.TAcqfConstructor = <function get_qLogNEI>, acqf_optimizer: ~collections.abc.Callable[[~botorch.acquisition.acquisition.AcquisitionFunction, ~torch.Tensor, int, list[tuple[~torch.Tensor, ~torch.Tensor, float]] | None, list[tuple[~torch.Tensor, ~torch.Tensor, float]] | None, dict[int, float] | None, ~collections.abc.Callable[[~torch.Tensor], ~torch.Tensor] | None, ~typing.Any], tuple[~torch.Tensor, ~torch.Tensor]] = <function scipy_optimizer>, refit_on_cv: bool = False, optimization_config: ~ax.core.optimization_config.OptimizationConfig | None = None) TorchModelBridge [source]¶
Instantiates a BotorchModel.
- ax.modelbridge.factory.get_empirical_bayes_thompson(experiment: Experiment, data: Data, search_space: SearchSpace | None = None, num_samples: int = 10000, min_weight: float | None = None, uniform_weights: bool = False) DiscreteModelBridge [source]¶
Instantiates an empirical Bayes / Thompson sampling model.
- ax.modelbridge.factory.get_factorial(search_space: SearchSpace) DiscreteModelBridge [source]¶
Instantiates a factorial generator.
- ax.modelbridge.factory.get_sobol(search_space: SearchSpace, seed: int | None = None, deduplicate: bool = False, init_position: int = 0, scramble: bool = True, fallback_to_sample_polytope: bool = False) RandomModelBridge [source]¶
Instantiates a Sobol sequence quasi-random generator.
- Parameters:
search_space – Sobol generator search space.
kwargs – Custom args for the Sobol generator.
- Returns:
RandomModelBridge, with SobolGenerator as model.
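For example (a minimal sketch; the seed value is illustrative):

from ax.modelbridge.factory import get_sobol

sobol = get_sobol(search_space=experiment.search_space, seed=0)
generator_run = sobol.gen(n=8)  # 8 quasi-random arms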
- ax.modelbridge.factory.get_thompson(experiment: Experiment, data: Data, search_space: SearchSpace | None = None, num_samples: int = 10000, min_weight: float | None = None, uniform_weights: bool = False) DiscreteModelBridge [source]¶
Instantiates a Thompson sampling model.
- ax.modelbridge.factory.get_uniform(search_space: SearchSpace, deduplicate: bool = False, seed: int | None = None) RandomModelBridge [source]¶
Instantiates a uniform generator.
- Parameters:
search_space – Uniform generator search space.
kwargs – Custom args for uniform generator.
- Returns:
RandomModelBridge, with UniformGenerator as model.
ModelSpec¶
- class ax.modelbridge.model_spec.FactoryFunctionModelSpec(model_enum: 'ModelRegistryBase | None' = None, model_kwargs: 'dict[str, Any]' = <factory>, model_gen_kwargs: 'dict[str, Any]' = <factory>, model_cv_kwargs: 'dict[str, Any]' = <factory>, model_key_override: 'str | None' = None, _fitted_model: 'ModelBridge | None' = None, _cv_results: 'list[CVResult] | None' = None, _diagnostics: 'CVDiagnostics | None' = None, _last_cv_kwargs: 'dict[str, Any] | None' = None, _last_fit_arg_ids: 'dict[str, int] | None' = None, factory_function: 'TModelFactory | None' = None)[source]¶
Bases:
ModelSpec
- factory_function: Callable[[...], ModelBridge] | None = None¶
- fit(experiment: Experiment, data: Data, search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, **model_kwargs: Any) None [source]¶
Fits the specified model on the given experiment + data using the model kwargs set on the model spec, alongside any passed down as kwargs to this function (local kwargs take precedence).
- model_enum: ModelRegistryBase | None = None¶
- class ax.modelbridge.model_spec.ModelSpec(model_enum: 'ModelRegistryBase', model_kwargs: 'dict[str, Any]' = <factory>, model_gen_kwargs: 'dict[str, Any]' = <factory>, model_cv_kwargs: 'dict[str, Any]' = <factory>, model_key_override: 'str | None' = None, _fitted_model: 'ModelBridge | None' = None, _cv_results: 'list[CVResult] | None' = None, _diagnostics: 'CVDiagnostics | None' = None, _last_cv_kwargs: 'dict[str, Any] | None' = None, _last_fit_arg_ids: 'dict[str, int] | None' = None)[source]¶
Bases:
SortableBase
,SerializationMixin
- copy() ModelSpec [source]¶
ModelSpec is both a spec and an object that performs actions. Copying is useful to avoid changes to a singleton model spec.
- cross_validate(model_cv_kwargs: dict[str, Any] | None = None) tuple[list[CVResult] | None, dict[str, dict[str, float]] | None] [source]¶
Call cross_validate, compute_diagnostics and cache the results. If the model cannot be cross validated, warn and return None.
NOTE: If there are cached results, and the cache was computed using the same kwargs, this will return the cached results.
- Parameters:
model_cv_kwargs – Optional kwargs to pass into cross_validate call. These are combined with self.model_cv_kwargs, with the model_cv_kwargs taking precedence over self.model_cv_kwargs.
- Returns:
A tuple of CV results (observed vs predicted values) and the corresponding diagnostics.
- property cv_results: list[CVResult] | None¶
Cached CV results from self.cross_validate() if it has been successfully called
- property diagnostics: dict[str, dict[str, float]] | None¶
Cached CV diagnostics from self.cross_validate() if it has been successfully called
- fit(experiment: Experiment, data: Data, **model_kwargs: Any) None [source]¶
Fits the specified model on the given experiment + data using the model kwargs set on the model spec, alongside any passed down as kwargs to this function (local kwargs take precedence).
- property fitted_model: ModelBridge¶
Returns the fitted Ax model, asserting fit() was called
- property fixed_features: ObservationFeatures | None¶
Fixed generation features to pass into the Model’s .gen function.
- gen(**model_gen_kwargs: Any) GeneratorRun [source]¶
Generates candidates from the fitted model, using the model gen kwargs set on the model spec, alongside any passed as kwargs to this function (local kwargs take precedence).
NOTE: Model must have been fit prior to calling gen()
- Parameters:
n – Integer representing how many arms should be in the generator run produced by this method. NOTE: Some underlying models may ignore n and produce a model-determined number of arms. In that case this method may also output a generator run with a number of arms that differs from n.
pending_observations – A map from metric name to pending observations for that metric, used by some models to avoid resuggesting points that are currently being evaluated.
- model_enum: ModelRegistryBase¶
- class ax.modelbridge.model_spec.ModelSpecJSONEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]¶
Bases:
JSONEncoder
Generic encoder to avoid JSON errors in ModelSpec.__repr__
- default(o: Any) str [source]¶
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).
For example, to support arbitrary iterators, you could implement default like this:
def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
Model Bridges¶
Base Model Bridge¶
- class ax.modelbridge.base.BaseGenArgs(search_space: ax.core.search_space.SearchSpace, optimization_config: ax.core.optimization_config.OptimizationConfig | None, pending_observations: dict[str, list[ax.core.observation.ObservationFeatures]], fixed_features: ax.core.observation.ObservationFeatures | None)[source]¶
Bases:
object
- fixed_features: ObservationFeatures | None¶
- optimization_config: OptimizationConfig | None¶
- pending_observations: dict[str, list[ObservationFeatures]]¶
- search_space: SearchSpace¶
- class ax.modelbridge.base.GenResults(observation_features: list[ax.core.observation.ObservationFeatures], weights: list[float], best_observation_features: ax.core.observation.ObservationFeatures | None = None, gen_metadata: dict[str, typing.Any] = <factory>)[source]¶
Bases:
object
- best_observation_features: ObservationFeatures | None = None¶
- observation_features: list[ObservationFeatures]¶
- class ax.modelbridge.base.ModelBridge(search_space: SearchSpace, model: Any, transforms: list[type[Transform]] | None = None, experiment: Experiment | None = None, data: Data | None = None, transform_configs: dict[str, dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]] | None = None, status_quo_name: str | None = None, status_quo_features: ObservationFeatures | None = None, optimization_config: OptimizationConfig | None = None, expand_model_space: bool = True, fit_out_of_design: bool = False, fit_abandoned: bool = False, fit_tracking_metrics: bool = True, fit_on_init: bool = True)[source]¶
Bases:
ABC
The main object for using models in Ax.
ModelBridge specifies 3 methods for using models:
predict: Make model predictions. This method is not optimized for speed and so should be used primarily for plotting or similar tasks and not inside an optimization loop.
gen: Use the model to generate new candidates.
cross_validate: Do cross validation to assess model predictions.
ModelBridge converts Ax types like Data and Arm to types that are meant to be consumed by the models. The data sent to the model will depend on the implementation of the subclass, which will specify the actual API for the external model.
This class also applies a sequence of transforms to the input data and problem specification which can be used to ensure that the external model receives appropriate inputs.
Subclasses will implement what is here referred to as the “terminal transform,” which is a transform that changes types of the data and problem specification.
- cross_validate(cv_training_data: list[Observation], cv_test_points: list[ObservationFeatures], use_posterior_predictive: bool = False) list[ObservationData] [source]¶
Make a set of cross-validation predictions.
- Parameters:
cv_training_data – The training data to use for cross validation.
cv_test_points – The test points at which predictions will be made.
use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).
- Returns:
A list of predictions at the test points.
- feature_importances(metric_name: str) dict[str, float] [source]¶
Computes feature importances for a single metric.
Depending on the type of the model, this method will approach sensitivity analysis (calculating the sensitivity of the metric to changes in the search space’s parameters, a.k.a. features) differently.
For Bayesian optimization models (BoTorch models), this method uses parameter inverse lengthscales to compute normalized feature importances.
NOTE: Currently, this is only implemented for GP models.
- Parameters:
metric_name – Name of metric to compute feature importances for.
- Returns:
A dictionary mapping parameter names to their corresponding feature importances.
- gen(n: int, search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, fixed_features: ObservationFeatures | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) GeneratorRun [source]¶
Generate new points from the underlying model according to search_space, optimization_config and other parameters.
- Parameters:
n – Number of points to generate
search_space – Search space
optimization_config – Optimization config
pending_observations – A map from metric name to pending observations for that metric.
fixed_features – An ObservationFeatures object containing any features that should be fixed at specified values during generation.
model_gen_options – A config dictionary that is passed along to the model. See TorchOptConfig for details.
- Returns:
A GeneratorRun object that contains the generated points and other metadata.
- get_training_data() list[Observation] [source]¶
A copy of the (untransformed) data with which the model was fit.
- property model_space: SearchSpace¶
SearchSpace used to fit model.
- predict(observation_features: list[ObservationFeatures]) tuple[dict[str, list[float]], dict[str, dict[str, list[float]]]] [source]¶
Make model predictions (mean and covariance) for the given observation features.
Predictions are made for all outcomes. If an out-of-design observation can successfully be transformed, the predicted value will be returned. Otherwise, we will attempt to find that observation in the training data and return the raw value.
- Parameters:
observation_features – observation features
- Returns:
2-element tuple containing
Dictionary from metric name to list of mean estimates, in same order as observation_features.
Nested dictionary with cov[‘metric1’][‘metric2’] a list of cov(metric1@x, metric2@x) for x in observation_features.
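A sketch of consuming the return value, assuming a fitted bridge model_bridge and metrics named "m1" and "m2":

means, cov = model_bridge.predict(observation_features)

mu_m1 = means["m1"]          # mean estimates, one per input feature
var_m1 = cov["m1"]["m1"]     # variances of m1 at each input point
cov_m1_m2 = cov["m1"]["m2"]  # cov(m1@x, m2@x) for each input x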
- property status_quo: Observation | None¶
Observation corresponding to status quo, if any.
- property status_quo_data_by_trial: dict[int, ObservationData] | None¶
A map of trial index to the status quo observation data of each trial
- property statuses_to_fit: set[TrialStatus]¶
Statuses to fit the model on.
- property statuses_to_fit_map_metric: set[TrialStatus]¶
Statuses to fit the model on.
- property training_in_design: list[bool]¶
For each observation in the training data, a bool indicating if it is in-design for the model.
- transform_observation_features(observation_features: list[ObservationFeatures]) Any [source]¶
Applies transforms to given observation features and returns them in the model space.
- Parameters:
observation_features – ObservationFeatures to be transformed.
- Returns:
Transformed values. This could be e.g. a torch Tensor, depending on the ModelBridge subclass.
- transform_observations(observations: list[Observation]) Any [source]¶
Applies transforms to the given observations and returns them in the model space.
- Parameters:
observations – Observations to be transformed.
- Returns:
Transformed values. This could be e.g. a torch Tensor, depending on the ModelBridge subclass.
- update(new_data: Data, experiment: Experiment) None [source]¶
Update the model bridge and the underlying model with new data. This method should be used instead of fit, in cases where the underlying model does not need to be re-fit from scratch, but rather updated.
Note: update expects only new data (obtained since the model initialization or last update) to be passed in, not all data in the experiment.
- Parameters:
new_data – Data from the experiment obtained since the last call to update.
experiment – Experiment, in which this data was obtained.
- ax.modelbridge.base.clamp_observation_features(observation_features: list[ObservationFeatures], search_space: SearchSpace) list[ObservationFeatures] [source]¶
- ax.modelbridge.base.gen_arms(observation_features: list[ObservationFeatures], arms_by_signature: dict[str, Arm] | None = None) tuple[list[Arm], dict[str, dict[str, Any] | None] | None] [source]¶
Converts observation features to a tuple of arms list and candidate metadata dict, where arm signatures are mapped to their respective candidate metadata.
- ax.modelbridge.base.unwrap_observation_data(observation_data: list[ObservationData]) tuple[dict[str, list[float]], dict[str, dict[str, list[float]]]] [source]¶
Converts observation data to the format for model prediction outputs. That format assumes each observation data has the same set of metrics.
Discrete Model Bridge¶
- class ax.modelbridge.discrete.DiscreteModelBridge(search_space: SearchSpace, model: Any, transforms: list[type[Transform]] | None = None, experiment: Experiment | None = None, data: Data | None = None, transform_configs: dict[str, dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]] | None = None, status_quo_name: str | None = None, status_quo_features: ObservationFeatures | None = None, optimization_config: OptimizationConfig | None = None, expand_model_space: bool = True, fit_out_of_design: bool = False, fit_abandoned: bool = False, fit_tracking_metrics: bool = True, fit_on_init: bool = True)[source]¶
Bases:
ModelBridge
A model bridge for using models based on discrete parameters.
Requires that all parameters have been transformed to ChoiceParameters.
- model: DiscreteModel¶
Random Model Bridge¶
- class ax.modelbridge.random.RandomModelBridge(search_space: SearchSpace, model: Any, transforms: list[type[Transform]] | None = None, experiment: Experiment | None = None, data: Data | None = None, transform_configs: dict[str, dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]] | None = None, status_quo_name: str | None = None, status_quo_features: ObservationFeatures | None = None, optimization_config: OptimizationConfig | None = None, fit_out_of_design: bool = False, fit_abandoned: bool = False, fit_tracking_metrics: bool = True, fit_on_init: bool = True)[source]
Bases:
ModelBridge
A model bridge for using purely random ‘models’. Data and optimization configs are not required.
This model bridge interfaces with RandomModel.
- model
A RandomModel used to generate candidates (note: this is an awkward use of the word ‘model’).
- Parameters:
experiment – Is used to get arm parameters. Is not mutated.
search_space – Search space for fitting the model. Constraints need not be the same ones used in gen. RangeParameter bounds are considered soft and will be expanded to match the range of the data sent in for fitting, if expand_model_space is True.
data – Ax Data.
model – Interface will be specified in subclass. If model requires initialization, that should be done prior to its use here.
transforms – List of uninitialized transform classes. Forward transforms will be applied in this order, and untransforms in the reverse order.
transform_configs – A dictionary from transform name to the transform config dictionary.
status_quo_name – Name of the status quo arm. Can only be used if Data has a single set of ObservationFeatures corresponding to that arm.
status_quo_features – ObservationFeatures to use as status quo. Either this or status_quo_name should be specified, not both.
optimization_config – Optimization config defining how to optimize the model.
fit_out_of_design – If specified, all training data are used. Otherwise, only in design points are used.
fit_abandoned – Whether data for abandoned arms or trials should be included in model training data. If False, only non-abandoned points are returned.
fit_tracking_metrics – Whether to fit a model for tracking metrics. Setting this to False will improve runtime at the expense of models not being available for predicting tracking metrics. NOTE: This can only be set to False when the optimization config is provided.
fit_on_init – Whether to fit the model on initialization. This can be used to skip model fitting when a fitted model is not needed. To fit the model afterwards, use _process_and_transform_data to get the transformed inputs and call _fit_if_implemented with the transformed inputs.
- model: RandomModel
Torch Model Bridge¶
- class ax.modelbridge.torch.TorchModelBridge(experiment: Experiment, search_space: SearchSpace, data: Data, model: TorchModel, transforms: list[type[Transform]], transform_configs: dict[str, dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]] | None = None, torch_dtype: dtype | None = None, torch_device: device | None = None, status_quo_name: str | None = None, status_quo_features: ObservationFeatures | None = None, optimization_config: OptimizationConfig | None = None, expand_model_space: bool = True, fit_out_of_design: bool = False, fit_abandoned: bool = False, fit_tracking_metrics: bool = True, fit_on_init: bool = True, default_model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
ModelBridge
A model bridge for using torch-based models.
Specifies an interface that is implemented by TorchModel. In particular, model should have methods fit, predict, and gen. See TorchModel for the API for each of these methods.
Requires that all parameters have been transformed to RangeParameters or FixedParameters with float type and no log scale.
This class converts Ax parameter types to torch tensors before passing them to the model.
- evaluate_acquisition_function(observation_features: list[ObservationFeatures] | list[list[ObservationFeatures]], search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, fixed_features: ObservationFeatures | None = None, acq_options: dict[str, Any] | None = None) list[float] [source]¶
Evaluate the acquisition function for given set of observation features.
- Parameters:
observation_features – Either a list or a list of lists of observation features, representing parameterizations, for which to evaluate the acquisition function. If a single list is passed, the acquisition function is evaluated for each observation feature. If a list of lists is passed each element (itself a list of observation features) represents a batch of points for which to evaluate the joint acquisition value.
search_space – Search space for fitting the model.
optimization_config – Optimization config defining how to optimize the model.
pending_observations – A map from metric name to pending observations for that metric.
fixed_features – An ObservationFeatures object containing any features that should be fixed at specified values during generation.
acq_options – Keyword arguments used to construct the acquisition function.
- Returns:
A list of acquisition function values, in the same order as the input observation features.
- feature_importances(metric_name: str) dict[str, float] [source]¶
Computes feature importances for a single metric.
Depending on the type of the model, this method will approach sensitivity analysis (calculating the sensitivity of the metric to changes in the search space’s parameters, a.k.a. features) differently.
For Bayesian optimization models (BoTorch models), this method uses parameter inverse lengthscales to compute normalized feature importances.
NOTE: Currently, this is only implemented for GP models.
- Parameters:
metric_name – Name of metric to compute feature importances for.
- Returns:
A dictionary mapping parameter names to their corresponding feature importances.
- infer_objective_thresholds(search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, fixed_features: ObservationFeatures | None = None) list[ObjectiveThreshold] [source]¶
Infer objective thresholds.
This method is only applicable for Multi-Objective optimization problems.
This method uses the model-estimated Pareto frontier over the in-sample points to infer absolute (not relativized) objective thresholds.
This uses a heuristic that sets the objective threshold to be a scaled nadir point, where the nadir point is scaled back based on the range of each objective across the current in-sample Pareto frontier.
- model: TorchModel | None = None¶
- model_best_point(search_space: SearchSpace | None = None, optimization_config: OptimizationConfig | None = None, pending_observations: dict[str, list[ObservationFeatures]] | None = None, fixed_features: ObservationFeatures | None = None, model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None) tuple[Arm, tuple[dict[str, float], dict[str, dict[str, float]] | None] | None] | None [source]¶
- ax.modelbridge.torch.validate_optimization_config(optimization_config: OptimizationConfig, outcomes: list[str]) None [source]¶
Validate optimization config against model fitted outcomes.
- Parameters:
optimization_config – Config to validate.
outcomes – List of metric names w/ valid model fits.
- Raises:
ValueError – If (1) relative constraints are found, or (2) optimization metrics are not present in the model’s fitted outcomes.
Pairwise Model Bridge¶
- class ax.modelbridge.pairwise.PairwiseModelBridge(experiment: Experiment, search_space: SearchSpace, data: Data, model: TorchModel, transforms: list[type[Transform]], transform_configs: dict[str, dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]] | None = None, torch_dtype: dtype | None = None, torch_device: device | None = None, status_quo_name: str | None = None, status_quo_features: ObservationFeatures | None = None, optimization_config: OptimizationConfig | None = None, expand_model_space: bool = True, fit_out_of_design: bool = False, fit_abandoned: bool = False, fit_tracking_metrics: bool = True, fit_on_init: bool = True, default_model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
TorchModelBridge
Map Torch Model Bridge¶
- class ax.modelbridge.map_torch.MapTorchModelBridge(experiment: Experiment, search_space: SearchSpace, data: Data, model: TorchModel, transforms: list[type[Transform]], transform_configs: dict[str, dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None]] | None = None, torch_dtype: dtype | None = None, torch_device: device | None = None, status_quo_name: str | None = None, status_quo_features: ObservationFeatures | None = None, optimization_config: OptimizationConfig | None = None, fit_out_of_design: bool = False, fit_on_init: bool = True, fit_abandoned: bool = False, default_model_gen_options: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None, map_data_limit_rows_per_metric: int | None = None, map_data_limit_rows_per_group: int | None = None)[source]¶
Bases:
TorchModelBridge
A model bridge for using torch-based models that fit on MapData. Most of the TorchModelBridge functionality is retained, except that this class should be used when the model makes use of map_key values, for example when fitting a joint surrogate model on (parameters, map_key) while generating candidates over the parameters only.
- property parameters_with_map_keys: list[str]¶
The parameters used for fitting the model, including map_keys.
- property statuses_to_fit_map_metric: set[TrialStatus]¶
Statuses to fit the model on.
Utilities¶
General Utilities¶
- ax.modelbridge.modelbridge_utils.array_to_observation_data(f: ndarray[Any, dtype[_ScalarType_co]], cov: ndarray[Any, dtype[_ScalarType_co]], outcomes: list[str]) list[ObservationData] [source]¶
Convert arrays of model predictions to a list of ObservationData.
- Parameters:
f – An (n x m) array
cov – An (n x m x m) array
outcomes – A list of m outcome names
Returns: A list of n ObservationData
- ax.modelbridge.modelbridge_utils.check_has_multi_objective_and_data(experiment: Experiment, data: Data, optimization_config: OptimizationConfig | None = None) None [source]¶
Raise an error if not using a MultiObjective or if the data is empty.
- ax.modelbridge.modelbridge_utils.extract_objective_thresholds(objective_thresholds: list[ObjectiveThreshold], objective: Objective, outcomes: list[str]) ndarray[Any, dtype[_ScalarType_co]] | None [source]¶
Extracts objective thresholds’ values, in the order of outcomes.
Will return None if no objective thresholds, otherwise the extracted tensor will be the same length as outcomes.
Outcomes that are not part of an objective and the objectives that do not have a corresponding objective threshold will be given a threshold of NaN. We will later infer appropriate threshold values for the objectives that are given a threshold of NaN.
- Parameters:
objective_thresholds – Objective thresholds to extract values from.
objective – The corresponding Objective, for validation purposes.
outcomes – n-length list of names of metrics.
- Returns:
(n,) array of thresholds
- ax.modelbridge.modelbridge_utils.extract_objective_weights(objective: Objective, outcomes: list[str]) ndarray[Any, dtype[_ScalarType_co]] [source]¶
Extract weights for objectives.
Weights are for a maximization problem.
Give an objective weight to each modeled outcome. Outcomes that are modeled but not part of the objective get weight 0.
In the single metric case, the objective is given either +/- 1, depending on the minimize flag.
In the multiple metric case, each objective is given its input weight, negated if that objective is to be minimized.
- Parameters:
objective – Objective to extract weights from.
outcomes – n-length list of names of metrics.
- Returns:
n-length array of weights.
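A short sketch of the weighting convention described above; the metric names are hypothetical:
```
from ax.core.metric import Metric
from ax.core.objective import Objective
from ax.modelbridge.modelbridge_utils import extract_objective_weights

# A single minimized objective over two modeled outcomes.
objective = Objective(metric=Metric(name="loss"), minimize=True)
weights = extract_objective_weights(
    objective=objective, outcomes=["loss", "latency"]
)
# Expected: array([-1., 0.]) -- the minimized objective gets -1 (weights
# are for a maximization problem), the non-objective outcome gets 0.
```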
- ax.modelbridge.modelbridge_utils.extract_outcome_constraints(outcome_constraints: list[OutcomeConstraint], outcomes: list[str]) tuple[ndarray, ndarray] | None [source]¶
- ax.modelbridge.modelbridge_utils.extract_parameter_constraints(parameter_constraints: list[ParameterConstraint], param_names: list[str]) tuple[ndarray, ndarray] | None [source]¶
Convert Ax parameter constraints into a tuple of NumPy arrays representing the system of linear inequality constraints.
- Parameters:
parameter_constraints – A list of parameter constraint objects.
param_names – A list of parameter names.
- Returns:
An optional tuple of NumPy arrays (A, b) representing the system of linear inequality constraints A x <= b.
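A minimal sketch of the (A, b) encoding, under the assumption that the constraint x1 + 2*x2 <= 1 is expressed over parameters named x1 and x2 (both hypothetical):
```
from ax.core.parameter_constraint import ParameterConstraint
from ax.modelbridge.modelbridge_utils import extract_parameter_constraints

# Encodes x1 + 2*x2 <= 1 as one row of the linear system A x <= b.
pc = ParameterConstraint(constraint_dict={"x1": 1.0, "x2": 2.0}, bound=1.0)
result = extract_parameter_constraints([pc], param_names=["x1", "x2"])
# Expected, roughly: A == [[1.0, 2.0]] and b == [[1.0]].
```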
- ax.modelbridge.modelbridge_utils.extract_risk_measure(risk_measure: RiskMeasure) RiskMeasureMCObjective [source]¶
Extracts the BoTorch risk measure objective from an Ax RiskMeasure.
- Parameters:
risk_measure – The RiskMeasure object.
- Returns:
The corresponding RiskMeasureMCObjective object.
- ax.modelbridge.modelbridge_utils.extract_robust_digest(search_space: SearchSpace, param_names: list[str]) RobustSearchSpaceDigest | None [source]¶
Extracts the RobustSearchSpaceDigest.
- Parameters:
search_space – A SearchSpace to digest.
param_names – A list of names of the parameters that are used in optimization. If environmental variables are present, these should be the last entries in param_names.
- Returns:
If the search_space is not a RobustSearchSpace, this returns None. Otherwise, it returns a RobustSearchSpaceDigest with entries populated from the properties of the search_space. In particular, this constructs two optional callables, sample_param_perturbations and sample_environmental, that require no inputs and return a num_samples x d-dim array of samples from the corresponding parameter distributions, where d is the number of environmental variables for sample_environmental and the number of non-environmental parameters in param_names for sample_param_perturbations.
- ax.modelbridge.modelbridge_utils.extract_search_space_digest(search_space: SearchSpace, param_names: list[str]) SearchSpaceDigest [source]¶
Extract basic parameter properties from a search space.
This is typically called with the transformed search space and makes certain assumptions regarding the parameters being transformed.
For ChoiceParameters:
The choices are assumed to be numerical. ChoiceToNumericChoice and OrderedChoiceToIntegerRange transforms handle this.
If is_task, its index is added to task_features.
If ordered, its index is added to ordinal_features.
Otherwise, its index is added to categorical_features.
In all cases, the choices are added to discrete_choices.
The minimum and maximum value are added to the bounds.
The target_value is added to target_values.
For RangeParameters:
They’re assumed not to be in the log_scale. The Log transform handles this.
If integer, its index is added to ordinal_features and the choices are added to discrete_choices.
The minimum and maximum value are added to the bounds.
If a parameter is_fidelity:
Its target_value is assumed to be numerical.
The target_value is added to target_values.
Its index is added to fidelity_features.
- ax.modelbridge.modelbridge_utils.feasible_hypervolume(optimization_config: MultiObjectiveOptimizationConfig, values: dict[str, ndarray[Any, dtype[_ScalarType_co]]]) ndarray[Any, dtype[_ScalarType_co]] [source]¶
Compute the feasible hypervolume each iteration.
- Parameters:
optimization_config – Optimization config.
values – Dictionary from metric name to array of value at each iteration (each array is n-dim). If optimization config contains outcome constraints, values for them must be present in values.
Returns: Array of feasible hypervolumes.
- ax.modelbridge.modelbridge_utils.get_fixed_features(fixed_features: ObservationFeatures | None, param_names: list[str]) dict[int, float] | None [source]¶
Reformat a set of fixed_features.
- ax.modelbridge.modelbridge_utils.get_fixed_features_from_experiment(experiment: Experiment) ObservationFeatures [source]¶
- ax.modelbridge.modelbridge_utils.get_pareto_frontier_and_configs(modelbridge: modelbridge_module.torch.TorchModelBridge, observation_features: list[ObservationFeatures], observation_data: list[ObservationData] | None = None, objective_thresholds: TRefPoint | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None, arm_names: list[str | None] | None = None, use_model_predictions: bool = True) tuple[list[Observation], Tensor, Tensor, Tensor | None] [source]¶
Helper that applies transforms and calls frontier_evaluator. Returns the frontier_evaluator configs in addition to the Pareto observations.
- Parameters:
modelbridge – Modelbridge used to predict metrics outcomes.
observation_features – Observation features to consider for the Pareto frontier.
observation_data – Data for computing the Pareto front, unless observation_features are provided and use_model_predictions is True.
objective_thresholds – Metric values bounding the region of interest in the objective outcome space; used to override objective thresholds specified in optimization_config, if necessary.
optimization_config – Multi-objective optimization config.
arm_names – Arm names for each observation in observation_features.
use_model_predictions – If True, will use model predictions at observation_features to compute the Pareto front. If False, will use observation_data directly to compute the Pareto front, ignoring observation_features.
- Returns: Four-item tuple of:
frontier_observations: Observations of points on the Pareto frontier,
f: n x m tensor representation of the Pareto frontier values where n is the length of frontier_observations and m is the number of metrics,
obj_w: m tensor of objective weights,
obj_t: m tensor of objective thresholds corresponding to Y, or None if no objective thresholds used.
- ax.modelbridge.modelbridge_utils.hypervolume(modelbridge: modelbridge_module.torch.TorchModelBridge, observation_features: list[ObservationFeatures], objective_thresholds: TRefPoint | None = None, observation_data: list[ObservationData] | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None, selected_metrics: list[str] | None = None, use_model_predictions: bool = True) float [source]¶
Helper function that computes (feasible) hypervolume.
- Parameters:
modelbridge – The modelbridge.
observation_features – The observation features for the in-sample arms.
objective_thresholds – The objective thresholds to be used for computing the hypervolume. If None, these are extracted from the optimization config.
observation_data – The observed outcomes for the in-sample arms.
optimization_config – The optimization config specifying the objectives, objectives thresholds, and outcome constraints.
selected_metrics – A list of objective metric names specifying which objectives to use in hypervolume computation. By default, all objectives are used.
use_model_predictions – A boolean indicating whether to use model predictions for determining the in-sample Pareto frontier instead of the raw observed values.
- Returns:
The (feasible) hypervolume.
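A hedged usage sketch, assuming mb is a TorchModelBridge fitted on a multi-objective experiment whose optimization config carries objective thresholds:
```
from ax.modelbridge.modelbridge_utils import hypervolume

# Use the bridge's own training data as the in-sample arms.
training_obs = mb.get_training_data()
hv = hypervolume(
    modelbridge=mb,
    observation_features=[obs.features for obs in training_obs],
)
```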
- ax.modelbridge.modelbridge_utils.observation_data_to_array(outcomes: list[str], observation_data: list[ObservationData]) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] [source]¶
Convert a list of Observation data to arrays.
Any missing mean or covariance values will be returned as NaNs.
- Parameters:
outcomes – A list of m outcomes to extract observation data for.
observation_data – A list of n ObservationData objects.
- Returns:
A two-item tuple of:
means: An (n x m) array of mean observations.
cov: An (n x m x m) array of covariance observations.
- ax.modelbridge.modelbridge_utils.observation_features_to_array(parameters: list[str], obsf: list[ObservationFeatures]) ndarray[Any, dtype[_ScalarType_co]] [source]¶
Convert a list of Observation features to arrays.
- ax.modelbridge.modelbridge_utils.observed_hypervolume(modelbridge: modelbridge_module.torch.TorchModelBridge, objective_thresholds: TRefPoint | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None, selected_metrics: list[str] | None = None) float [source]¶
Calculate hypervolume of a Pareto frontier based on observed data.
Given observed data, return the hypervolume of the Pareto frontier formed from those outcomes.
- Parameters:
modelbridge – Modelbridge that holds previous training data.
objective_thresholds – Point defining the origin of hyperrectangles that can contribute to hypervolume. Note that if this is None, objective_thresholds must be present on the modelbridge.optimization_config.
optimization_config – Optimization config
selected_metrics – If specified, hypervolume will only be evaluated on the specified subset of metrics. Otherwise, all metrics will be used.
- Returns:
(float) calculated hypervolume.
- ax.modelbridge.modelbridge_utils.observed_pareto_frontier(modelbridge: modelbridge_module.torch.TorchModelBridge, objective_thresholds: TRefPoint | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None) list[Observation] [source]¶
Generate a Pareto frontier based on observed data. Given observed data (sourced from model training data), return points on the Pareto frontier as Observation-s.
- Parameters:
modelbridge – Modelbridge that holds previous training data.
objective_thresholds – Metric values bounding the region of interest in the objective outcome space; used to override objective thresholds in the optimization config, if needed.
optimization_config – Multi-objective optimization config.
- Returns:
Data representing points on the Pareto frontier.
- ax.modelbridge.modelbridge_utils.pareto_frontier(modelbridge: modelbridge_module.torch.TorchModelBridge, observation_features: list[ObservationFeatures], observation_data: list[ObservationData] | None = None, objective_thresholds: TRefPoint | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None, arm_names: list[str | None] | None = None, use_model_predictions: bool = True) list[Observation] [source]¶
Compute the list of points on the Pareto frontier as Observation-s in the untransformed search space.
- Parameters:
modelbridge – Modelbridge used to predict metrics outcomes.
observation_features – Observation features to consider for the Pareto frontier.
observation_data – Data for computing the Pareto front, unless observation_features are provided and use_model_predictions is True.
objective_thresholds – Metric values bounding the region of interest in the objective outcome space; used to override objective thresholds specified in optimization_config, if necessary.
optimization_config – Multi-objective optimization config.
arm_names – Arm names for each observation in observation_features.
use_model_predictions – If True, will use model predictions at observation_features to compute the Pareto front. If False, will use observation_data directly to compute the Pareto front, ignoring observation_features.
- Returns: Points on the Pareto frontier as Observation-s, in order of descending individual hypervolume if possible.
- ax.modelbridge.modelbridge_utils.parse_observation_features(X: ndarray[Any, dtype[_ScalarType_co]], param_names: list[str], candidate_metadata: list[dict[str, Any] | None] | None = None) list[ObservationFeatures] [source]¶
Re-format raw model-generated candidates into ObservationFeatures.
- Parameters:
param_names – List of param names.
X – Raw np.ndarray of candidate values.
candidate_metadata – Model’s metadata for candidates it produced.
- Returns:
List of candidates, represented as ObservationFeatures.
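A minimal sketch with made-up parameter names:
```
import numpy as np
from ax.modelbridge.modelbridge_utils import parse_observation_features

X = np.array([[0.1, 0.9], [0.4, 0.2]])  # two candidates, two parameters
obs_feats = parse_observation_features(X=X, param_names=["x1", "x2"])
# obs_feats[0].parameters == {"x1": 0.1, "x2": 0.9}
```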
- ax.modelbridge.modelbridge_utils.pending_observations_as_array_list(pending_observations: dict[str, list[ObservationFeatures]], outcome_names: list[str], param_names: list[str]) list[ndarray[Any, dtype[_ScalarType_co]]] | None [source]¶
Re-format pending observations.
- Parameters:
pending_observations – Mapping from metric name to a list of pending ObservationFeatures.
outcome_names – List of outcome names.
param_names – List of fitted param names.
- Returns:
Filtered pending observations data, by outcome and param names.
- ax.modelbridge.modelbridge_utils.predicted_hypervolume(modelbridge: modelbridge_module.torch.TorchModelBridge, objective_thresholds: TRefPoint | None = None, observation_features: list[ObservationFeatures] | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None, selected_metrics: list[str] | None = None) float [source]¶
Calculate hypervolume of a Pareto frontier based on the posterior means of given observation features.
Given a model and features to evaluate, calculate the hypervolume of the Pareto frontier formed from their predicted outcomes.
- Parameters:
modelbridge – Modelbridge used to predict metrics outcomes.
objective_thresholds – Point defining the origin of hyperrectangles that can contribute to hypervolume.
observation_features – observation features to predict. Model’s training data used by default if unspecified.
optimization_config – Optimization config
selected_metrics – If specified, hypervolume will only be evaluated on the specified subset of metrics. Otherwise, all metrics will be used.
- Returns:
calculated hypervolume.
- ax.modelbridge.modelbridge_utils.predicted_pareto_frontier(modelbridge: modelbridge_module.torch.TorchModelBridge, objective_thresholds: TRefPoint | None = None, observation_features: list[ObservationFeatures] | None = None, optimization_config: MultiObjectiveOptimizationConfig | None = None) list[Observation] [source]¶
Generate a Pareto frontier based on the posterior means of given observation features. Given a model and optionally features to evaluate (will use model training data if not specified), use the model to predict which points lie on the Pareto frontier.
- Parameters:
modelbridge – Modelbridge used to predict metrics outcomes.
observation_features – Observation features to predict; the model’s training data is used if not specified.
objective_thresholds – Metric values bounding the region of interest in the objective outcome space; used to override objective thresholds specified in optimization_config, if necessary.
optimization_config – Multi-objective optimization config.
- Returns:
Observations representing points on the Pareto frontier.
- ax.modelbridge.modelbridge_utils.process_contextual_datasets(datasets: list[SupervisedDataset], outcomes: list[str], parameter_decomposition: dict[str, list[str]], metric_decomposition: dict[str, list[str]] | None = None) list[ContextualDataset] [source]¶
Construct a list of ContextualDataset.
- Parameters:
datasets – A list of SupervisedDataset objects.
outcomes – The names of the outcomes to extract observations for.
parameter_decomposition – Keys are context names. Values are the lists of parameter names belonging to the context, e.g. {‘context1’: [‘p1_c1’, ‘p2_c1’],’context2’: [‘p1_c2’, ‘p2_c2’]}.
metric_decomposition – Context breakdown metrics. Keys are context names. Values are the lists of metric names belonging to the context, e.g. {‘context1’: [‘m1_c1’, ‘m2_c1’, ‘m3_c1’], ‘context2’: [‘m1_c2’, ‘m2_c2’, ‘m3_c2’]}.
- Returns: A list of ContextualDataset objects. Order generally will not be that of outcomes.
- ax.modelbridge.modelbridge_utils.transform_callback(param_names: list[str], transforms: MutableMapping[str, Transform]) Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[_ScalarType_co]]] [source]¶
A closure for performing the round trip transformations.
The function rounds points by de-transforming points back into the original space (done by applying transforms in reverse), and then re-transforming them. This function is specifically for points which are formatted as numpy arrays. This function is passed to _model_gen.
- Parameters:
param_names – Names of parameters to transform.
transforms – Ordered set of transforms which were applied to the points.
- Returns:
A function for performing the roundtrip transform.
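A hedged usage sketch, assuming mb is a fitted ModelBridge whose ordered transforms are exposed on its transforms attribute:
```
import numpy as np
from ax.modelbridge.modelbridge_utils import transform_callback

# Build the round-trip (untransform, then re-transform) rounding function.
rounding_fn = transform_callback(
    param_names=["x1", "x2"], transforms=mb.transforms
)
rounded = rounding_fn(np.array([0.31, 0.77]))
```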
- ax.modelbridge.modelbridge_utils.transform_search_space(search_space: SearchSpace, transforms: Iterable[type[Transform]], transform_configs: Mapping[str, Any]) SearchSpace [source]¶
Apply all given transforms to a copy of the SearchSpace iteratively.
- ax.modelbridge.modelbridge_utils.validate_and_apply_final_transform(objective_weights: ndarray[Any, dtype[_ScalarType_co]], outcome_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None, linear_constraints: tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] | None, pending_observations: list[ndarray[Any, dtype[_ScalarType_co]]] | None, objective_thresholds: ndarray[Any, dtype[_ScalarType_co]] | None = None, final_transform: Callable[[ndarray[Any, dtype[_ScalarType_co]]], Tensor] = torch.tensor) tuple[Tensor, tuple[Tensor, Tensor] | None, tuple[Tensor, Tensor] | None, list[Tensor] | None, Tensor | None] [source]¶
Prediction Utilities¶
- ax.modelbridge.prediction_utils.predict_at_point(model: ModelBridge, obsf: ObservationFeatures, metric_names: set[str], scalarized_metric_config: list[dict[str, Any]] | None = None) tuple[dict[str, float], dict[str, float]] [source]¶
Make a prediction at a point.
Returns mean and standard deviation in format expected by plotting.
- Parameters:
model – ModelBridge
obsf – ObservationFeatures for which to predict
metric_names – Limit predictions to these metrics.
scalarized_metric_config – An optional list of dicts specifying how to aggregate multiple metrics into a single scalarized metric. For each dict, the key is the name of the new scalarized metric, and the value is a dictionary mapping each metric to its weight. e.g. {“name”: “metric1:agg”, “weight”: {“metric1_c1”: 0.5, “metric1_c2”: 0.5}}.
- Returns:
A tuple containing
Map from metric name to prediction.
Map from metric name to standard error.
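A short usage sketch; mb is assumed to be a fitted ModelBridge, and the parameter and metric names are hypothetical:
```
from ax.core.observation import ObservationFeatures
from ax.modelbridge.prediction_utils import predict_at_point

means, sems = predict_at_point(
    model=mb,
    obsf=ObservationFeatures(parameters={"x1": 0.5, "x2": 1.0}),
    metric_names={"objective"},
)
# means["objective"] is the predicted mean; sems["objective"] its SEM.
```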
- ax.modelbridge.prediction_utils.predict_by_features(model: ModelBridge, label_to_feature_dict: dict[int, ObservationFeatures], metric_names: set[str]) dict[int, dict[str, tuple[float, float]]] [source]¶
Predict for given data points and model.
- Parameters:
model – Model to be used for the prediction
metric_names – Names of the metrics, for which to retrieve predictions.
label_to_feature_dict – Mapping from an int label to a Parameterization. These data points are predicted.
- Returns:
A mapping from an int label to a mapping of metric names to tuples of predicted metric mean and SEM, of form: { trial_index -> { metric_name: ( mean, SEM ) } }.
Cross Validation¶
- class ax.modelbridge.cross_validation.AssessModelFitResult(good_fit_metrics_to_fisher_score: dict[str, float], bad_fit_metrics_to_fisher_score: dict[str, float])[source]¶
Bases:
NamedTuple
Container for model fit assessment results
- class ax.modelbridge.cross_validation.CVResult(observed: Observation, predicted: ObservationData)[source]¶
Bases:
NamedTuple
Container for cross validation results.
- observed: Observation¶
Alias for field number 0
- predicted: ObservationData¶
Alias for field number 1
- ax.modelbridge.cross_validation.assess_model_fit(diagnostics: dict[str, dict[str, float]], significance_level: float = 0.1) AssessModelFitResult [source]¶
Assess model fit for given diagnostics results.
It determines if a model fit is good or bad based on the Fisher exact test p-value.
- Parameters:
diagnostics – Output of compute_diagnostics
- Returns:
Two dictionaries, one for good metrics, one for bad metrics, each mapping metric name to p-value
- ax.modelbridge.cross_validation.compute_diagnostics(result: list[CVResult]) dict[str, dict[str, float]] [source]¶
Computes diagnostics for given cross validation results.
It provides a dictionary with values for the following diagnostics, for each metric:
‘Mean prediction CI’: the average width of the CIs at each of the CV predictions, relative to the observed mean.
‘MAPE’: mean absolute percentage error of the estimated mean relative to the observed mean.
‘wMAPE’: Weighted mean absolute percentage error.
‘Total raw effect’: the multiple change from the smallest observed mean to the largest observed mean, i.e. (max - min) / min.
‘Correlation coefficient’: the Pearson correlation of the estimated and observed means.
‘Rank correlation’: the Spearman correlation of the estimated and observed means.
‘Fisher exact test p’: we test if the model is able to distinguish the bottom half of the observations from the top half, using Fisher’s exact test and the observed/estimated means. A low p value indicates that the model has some ability to identify good arms. A high p value indicates that the model cannot identify arms better than chance, or that the observations are too noisy to be able to tell.
Each of these is returned as a dictionary from metric name to value for that metric.
- Parameters:
result – Output of cross_validate
- Returns:
A dictionary keyed by diagnostic name with results as described above.
- ax.modelbridge.cross_validation.compute_model_fit_metrics_from_modelbridge(model_bridge: ModelBridge, fit_metrics_dict: dict[str, ModelFitMetricProtocol] | None = None, generalization: bool = False, untransform: bool = False) dict[str, dict[str, float]] [source]¶
Computes the model fit metrics for a given ModelBridge.
- Parameters:
model_bridge – The ModelBridge for which to compute the model fit metrics.
fit_metrics_dict – An optional dictionary with model fit metric functions, i.e. a ModelFitMetricProtocol, as values and their names as keys.
generalization – Boolean indicating whether to compute the generalization metrics on cross-validation data or on the training data. The latter helps diagnose problems with model training, rather than generalization.
untransform – Boolean indicating whether to untransform model predictions before calculating the model fit metrics. False by default as models are trained in transformed space and model fit should be evaluated in transformed space.
- Returns:
A nested dictionary mapping from the model fit metric names and the experimental metric names to the values of the model fit metrics.
Example for an imaginary AutoML experiment that seeks to minimize the test error after training an expensive model, with respect to hyper-parameters:
```
model_fit_dict = compute_model_fit_metrics_from_modelbridge(model_bridge)
# model_fit_dict["coefficient_of_determination"]["test error"] is the
# coefficient of determination of the test error predictions.
```
- ax.modelbridge.cross_validation.cross_validate(model: ModelBridge, folds: int = -1, test_selector: Callable | None = None, untransform: bool = True, use_posterior_predictive: bool = False) list[CVResult] [source]¶
Cross validation for model predictions.
Splits the model’s training data into train/test folds and makes out-of-sample predictions on the test folds.
Train/test splits are made based on arm names, so that repeated observations of an arm will always be in the train or test set together.
The test set can be limited to a specific set of observations by passing in a test_selector callable. This function should take in an Observation and return a boolean indicating if it should be used in the test set or not. For example, we can limit the test set to arms from trial 0 with test_selector = lambda obs: obs.features.trial_index == 0. If not provided, all observations will be available for the test set.
- Parameters:
model – Fitted model (ModelBridge) to cross validate.
folds – Number of folds. Use -1 for leave-one-out, otherwise will be k-fold.
test_selector – Function for selecting observations for the test set.
untransform – Whether to untransform the model predictions before cross validating. Models are trained on transformed data, and candidate generation is performed in the transformed space. Computing the model quality metric based on the cross-validation results in the untransformed space may not be representative of the model that is actually used for candidate generation in case of non-invertible transforms, e.g., Winsorize or LogY. While the model in the transformed space may not be representative of the original data in regions where outliers have been removed, we have found it to better reflect how good the model used for candidate generation actually is.
use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise). Note: we should reconsider how we compute cross-validation and model fit metrics where there is non-Gaussian noise.
- Returns:
A CVResult for each observation in the training data.
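cross_validate composes with compute_diagnostics and assess_model_fit (documented above) into the usual model-fit check; a hedged sketch, assuming mb is a fitted ModelBridge:
```
from ax.modelbridge.cross_validation import (
    assess_model_fit,
    compute_diagnostics,
    cross_validate,
)

cv_results = cross_validate(model=mb, folds=-1)  # leave-one-out CV
diagnostics = compute_diagnostics(cv_results)
fit_result = assess_model_fit(diagnostics)
print(fit_result.bad_fit_metrics_to_fisher_score)  # poorly fit metrics
```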
- ax.modelbridge.cross_validation.cross_validate_by_trial(model: ModelBridge, trial: int = -1, use_posterior_predictive: bool = False) list[CVResult] [source]¶
Cross validation for model predictions on a particular trial.
Uses all of the data up until the specified trial to predict each of the arms that was launched in that trial. Defaults to the last trial.
- Parameters:
model – Fitted model (ModelBridge) to cross validate.
trial – Trial for which predictions are evaluated.
use_posterior_predictive – A boolean indicating if the predictions should be from the posterior predictive (i.e. including observation noise).
- Returns:
A CVResult for each observation in the training data.
- ax.modelbridge.cross_validation.get_fit_and_std_quality_and_generalization_dict(fitted_model_bridge: ModelBridge) dict[str, float | None] [source]¶
Get fit quality, standard-error quality, and generalization statistics from a fitted ModelBridge for analytics purposes.
- ax.modelbridge.cross_validation.has_good_opt_config_model_fit(optimization_config: OptimizationConfig, assess_model_fit_result: AssessModelFitResult) bool [source]¶
Assess model fit for given diagnostics results across the optimization config metrics.
Bad fit criteria: Any objective metrics are poorly fit based on the Fisher exact test p-value (see assess_model_fit()).
TODO[]: Incl. outcome constraints in assessment
- Parameters:
optimization_config – Objective/Outcome constraint metrics to assess
assess_model_fit_result – Output of assess_model_fit
- Returns:
True if all objective metrics are well fit, False otherwise.
Model Selection¶
- class ax.modelbridge.best_model_selector.BestModelSelector[source]¶
- abstract best_model(model_specs: list[ModelSpec]) ModelSpec [source]¶
Return the best ModelSpec based on some criteria.
NOTE: The returned ModelSpec may be a different object than what was provided in the original list. It may be possible to clone and modify the original ModelSpec to produce one that performs better.
- class ax.modelbridge.best_model_selector.ReductionCriterion(value)[source]¶
Bases:
Enum
An enum for callables that are used for aggregating diagnostics over metrics and selecting the best diagnostic in SingleDiagnosticBestModelSelector.
NOTE: This is used to ensure serializability of the callables.
- MAX: Callable[[ndarray | list[float] | list[ndarray]], ndarray[Any, dtype[_ScalarType_co]]] = functools.partial(<function max>)¶
- class ax.modelbridge.best_model_selector.SingleDiagnosticBestModelSelector(diagnostic: str, metric_aggregation: ReductionCriterion, criterion: ReductionCriterion, model_cv_kwargs: dict[str, Any] | None = None)[source]¶
Bases:
BestModelSelector
Choose the best model using a single cross-validation diagnostic.
The input is a list of ModelSpec, each corresponding to one model. The specified diagnostic is extracted from each of the models, its values (each of which corresponds to a separate metric) are aggregated with the aggregation function, the best one is determined with the criterion, and the index of the best diagnostic result is returned.
Example:
```
s = SingleDiagnosticBestModelSelector(
    diagnostic="Fisher exact test p",
    metric_aggregation=ReductionCriterion.MEAN,
    criterion=ReductionCriterion.MIN,
    model_cv_kwargs={"untransform": False},
)
best_model = s.best_model(model_specs=model_specs)
```
- Parameters:
diagnostic – The name of the diagnostic to use, which should be a key in CVDiagnostic.
metric_aggregation – ReductionCriterion applied to the values of the diagnostic for a single model to produce a single number.
criterion – ReductionCriterion used to determine which of the (aggregated) diagnostics is the best.
model_cv_kwargs – Optional dictionary of kwargs to pass in while computing the cross validation diagnostics.
Dispatch Utilities¶
- ax.modelbridge.dispatch_utils.calculate_num_initialization_trials(num_tunable_parameters: int, num_trials: int | None, use_batch_trials: bool) int [source]¶
- Applies rules from high to low priority (see the sketch below):
1 for batch trials.
At least 5.
At most 1/5th of num_trials.
Twice the number of tunable parameters.
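A hedged reimplementation sketch of those priority rules (not the library source; argument handling simplified):
```
def num_init_trials_sketch(
    num_tunable_parameters: int,
    num_trials: int | None,
    use_batch_trials: bool,
) -> int:
    if use_batch_trials:  # highest priority: one initialization batch
        return 1
    n = 2 * num_tunable_parameters  # twice the number of tunable parameters
    if num_trials is not None:
        n = min(n, num_trials // 5)  # at most 1/5th of num_trials
    return max(n, 5)  # but always at least 5
```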
- ax.modelbridge.dispatch_utils.choose_generation_strategy(search_space: SearchSpace, *, use_batch_trials: bool = False, enforce_sequential_optimization: bool = True, random_seed: int | None = None, torch_device: device | None = None, no_winsorization: bool = False, winsorization_config: None | WinsorizationConfig | dict[str, WinsorizationConfig] = None, derelativize_with_raw_status_quo: bool = False, no_bayesian_optimization: bool | None = None, force_random_search: bool = False, num_trials: int | None = None, num_initialization_trials: int | None = None, num_completed_initialization_trials: int = 0, max_initialization_trials: int | None = None, min_sobol_trials_observed: int | None = None, max_parallelism_cap: int | None = None, max_parallelism_override: int | None = None, optimization_config: OptimizationConfig | None = None, should_deduplicate: bool = False, use_saasbo: bool = False, verbose: bool | None = None, disable_progbar: bool | None = None, jit_compile: bool | None = None, experiment: Experiment | None = None, suggested_model_override: ModelRegistryBase | None = None, fit_out_of_design: bool = False) GenerationStrategy [source]¶
Select an appropriate generation strategy based on the properties of the search space and expected settings of the experiment, such as number of arms per trial, optimization algorithm settings, expected number of trials in the experiment, etc.
- Parameters:
search_space – SearchSpace, based on the properties of which to select the generation strategy.
use_batch_trials – Whether this generation strategy will be used to generate batched trials instead of 1-arm trials.
enforce_sequential_optimization – Whether to enforce that 1) the generation strategy needs to be updated with min_trials_observed observations for a given generation step before proceeding to the next one and 2) the maximum number of trials running at once (max_parallelism) is enforced for the BayesOpt step. NOTE: max_parallelism_override and max_parallelism_cap settings will still take their effect on max parallelism even if enforce_sequential_optimization=False, so if those settings are specified, max parallelism will be enforced.
random_seed – Fixed random seed for the Sobol generator.
torch_device – The device to use for generation steps implemented in PyTorch (e.g. via BoTorch). Some generation steps (in particular EHVI-based ones for multi-objective optimization) can be sped up by running candidate generation on the GPU. If not specified, uses the default torch device (usually the CPU).
no_winsorization – Whether to apply the winsorization transform prior to applying other transforms for fitting the BoTorch model.
winsorization_config – Explicit winsorization settings, if winsorizing. Usually only upper_quantile_margin is set when minimizing, and only lower_quantile_margin when maximizing.
derelativize_with_raw_status_quo – Whether to derelativize using the raw status quo values in any transforms. This argument is primarily to allow automatic Winsorization when relative constraints are present. Note: automatic Winsorization will fail if this is set to False (or unset) and there are relative constraints present.
no_bayesian_optimization – Deprecated. Use force_random_search.
force_random_search – If True, quasi-random generation strategy will be used rather than Bayesian optimization.
num_trials – Total number of trials in the optimization, if known in advance.
num_initialization_trials – Specific number of initialization trials, if wanted. Typically, initialization trials are generated quasi-randomly.
max_initialization_trials – If num_initialization_trials is unspecified, it will be determined automatically. This arg provides a cap on that automatically determined number.
num_completed_initialization_trials – The final calculated number of initialization trials is reduced by this number. This is useful when warm-starting an experiment, to specify what number of completed trials can be used to satisfy the initialization_trial requirement.
min_sobol_trials_observed – Minimum number of Sobol trials that must be observed before proceeding to the next generation step. Defaults to ceil(num_initialization_trials / 2).
max_parallelism_cap – Integer cap on parallelism in this generation strategy. If specified, the max_parallelism setting in each generation step will be set to the minimum of the default setting for that step and the value of this cap. max_parallelism_cap is meant to just be a hard limit on parallelism (e.g. to avoid overloading machine(s) that evaluate the experiment trials). Specify only if not specifying max_parallelism_override.
max_parallelism_override – Integer, with which to override the default max parallelism setting for all steps in the generation strategy returned from this function. Each generation step has a max_parallelism value, which restricts how many trials can run simultaneously during a given generation step. By default, the parallelism setting is chosen as appropriate for the model in a given generation step. If max_parallelism_override is -1, no max parallelism will be enforced for any step of the generation strategy. Be aware that parallelism is limited to improve performance of Bayesian optimization, so only disable its limiting if necessary.
optimization_config – Used to infer whether to use MOO and will be passed in to Winsorize via its transform_config in order to determine default winsorization behavior when necessary.
should_deduplicate – Whether to deduplicate the parameters of proposed arms against those of previous arms via rejection sampling. If this is True, the generation strategy will discard generator runs produced from the generation step that has should_deduplicate=True if they contain arms already present on the experiment and replace them with new generator runs. If no generator run with entirely unique arms could be produced in 5 attempts, a GenerationStrategyRepeatedPoints error will be raised, as we assume that the optimization converged when the model can no longer suggest unique arms.
use_saasbo – Whether to use SAAS prior for any GPEI generation steps.
verbose – Whether GP model should produce verbose logs. If not None, its value gets added to model_kwargs during generation_strategy construction. Defaults to True for SAASBO, else None. Verbose outputs are currently only available for SAASBO, so if verbose is not None for a different model type, it will be overridden to None with a warning.
disable_progbar – Whether GP model should produce a progress bar. If not None, its value gets added to model_kwargs during generation_strategy construction. Defaults to True for SAASBO, else None. Progress bars are currently only available for SAASBO, so if disable_progbar is not None for a different model type, it will be overridden to None with a warning.
jit_compile – Whether to use jit compilation in Pyro when SAASBO is used.
experiment – If specified, the _experiment attribute of the generation strategy will be set to this experiment (useful for associating a generation strategy with a given experiment before it’s first used to gen with that experiment). Can also provide optimization_config if it is not provided as an arg to this function.
suggested_model_override – If specified, this model will be used for the GP step and automatic selection will be skipped.
fit_out_of_design – Whether to include out-of-design points in the model.
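A minimal usage sketch; experiment is assumed to be an existing Ax Experiment, and the keyword values are illustrative:
```
from ax.modelbridge.dispatch_utils import choose_generation_strategy

gs = choose_generation_strategy(
    search_space=experiment.search_space,
    num_trials=30,           # total optimization budget, if known
    max_parallelism_cap=3,   # hard limit on concurrently running trials
)
generator_run = gs.gen(experiment=experiment)
```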
Transforms¶
ax.modelbridge.transforms.deprecated_transform_mixin¶
ax.modelbridge.transforms.base¶
- class ax.modelbridge.transforms.base.Transform(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
object
Defines the API for a transform that is applied to search_space, observation_features, observation_data, and optimization_config.
Transforms are used to adapt the search space and data into the types and structures expected by the model. When Transforms are used (for instance, in ModelBridge), it is always assumed that they may potentially mutate the transformed object in-place.
Forward transforms are defined for all four of those quantities. Reverse transforms are defined for observation_data and observation.
The forward transform for observation features must accept a partial observation with not all features recorded.
Forward and reverse transforms for observation data accept a list of observation features as an input, but they will not be mutated.
The forward transform for optimization config accepts the modelbridge and fixed features as inputs, but they will not be mutated.
This class provides an identity transform.
- config: TConfig¶
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- transform_observations(observations: list[Observation]) list[Observation] [source]¶
Transform observations.
Typically done in place. By default, the effort is split into separate transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: transformed observations.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Transform optimization config.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
optimization_config – The optimization config
Returns: transformed optimization config.
- transform_search_space(search_space: SearchSpace) SearchSpace [source]¶
Transform search space.
The transforms are typically done in-place. This calls two private methods, _transform_search_space, which transforms the core search space attributes, and _transform_parameter_distributions, which transforms the distributions when using a RobustSearchSpace.
- Parameters:
search_space – The search space
Returns: transformed search space.
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
- untransform_observations(observations: list[Observation]) list[Observation] [source]¶
Untransform observations.
Typically done in place. By default, the effort is split into separate backwards transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: untransformed observations.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
ax.modelbridge.transforms.cast¶
- class ax.modelbridge.transforms.cast.Cast(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Cast each param value to the respective parameter’s type/format and to a flattened version of the hierarchical search space, if applicable.
This is a default transform that should run across all models.
NOTE: In the case where the search space is hierarchical and this transform is configured to flatten it:
All calls to Cast.transform_… transform Ax objects defined in terms of hierarchical search space, to their definitions in terms of flattened search space.
All calls to Cast.untransform_… cast Ax objects back to a hierarchical search space.
The hierarchical search space is seen as the “original” search space, and the flattened search space as the “transformed” one.
Transform is done in-place for casting types, but objects are copied during flattening of, and casting to, the hierarchical search space.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features by adding parameter values that were removed during casting of observation features to hierarchical search space.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features by casting parameter values to their expected types and removing parameter values that are not applicable given the values of other parameters and the hierarchical structure of the search space.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.choice_encode¶
- class ax.modelbridge.transforms.choice_encode.ChoiceEncode(*args: Any, **kwargs: Any)[source]¶
Bases:
DeprecatedTransformMixin, ChoiceToNumericChoice
Deprecated alias for ChoiceToNumericChoice.
- class ax.modelbridge.transforms.choice_encode.ChoiceToNumericChoice(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert general ChoiceParameters to integer or float ChoiceParameters.
If the parameter type is numeric (int, float) and the parameter is ordered, then the values are normalized to the unit interval while retaining relative spacing. If the parameter type is unordered (categorical) or ordered but non-numeric, this transform uses an integer encoding to 0, 1, …, n_choices - 1. The resulting choice parameter will be considered ordered iff the original parameter is.
In the inverse transform, parameters will be mapped back onto the original domain.
This transform does not transform task parameters (use TaskChoiceToIntTaskChoice for this).
Note that this behavior is different from that of OrderedChoiceToIntegerRange, which transforms (ordered) ChoiceParameters to integer RangeParameters (rather than ChoiceParameters).
Transform is done in-place.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
- class ax.modelbridge.transforms.choice_encode.OrderedChoiceEncode(*args: Any, **kwargs: Any)[source]¶
Bases:
DeprecatedTransformMixin, OrderedChoiceToIntegerRange
Deprecated alias for OrderedChoiceToIntegerRange.
- class ax.modelbridge.transforms.choice_encode.OrderedChoiceToIntegerRange(search_space: SearchSpace, observations: list[Observation], modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
ChoiceToNumericChoice
Convert ordered ChoiceParameters to integer RangeParameters.
Parameters will be transformed to integer RangeParameters, mapped from the original choice domain to a contiguous range 0, 1, …, n_choices - 1 of integers. Does not transform task parameters.
In the inverse transform, parameters will be mapped back onto the original domain.
In order to encode all ChoiceParameters (not just ordered ChoiceParameters), use ChoiceToNumericChoice instead.
Transform is done in-place.
- ax.modelbridge.transforms.choice_encode.transform_choice_values(p: ChoiceParameter) tuple[ndarray[Any, dtype[_ScalarType_co]], ParameterType] [source]¶
Transforms the choice values and returns the new parameter type.
If the choices were numeric (int or float) and ordered, then they’re cast to float and rescaled to [0, 1]. Otherwise, they’re cast to integers 0, 1, …, n_choices - 1.
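A sketch of the rescaling behavior for an ordered numeric parameter (the parameter name and values are illustrative):
```
from ax.core.parameter import ChoiceParameter, ParameterType
from ax.modelbridge.transforms.choice_encode import transform_choice_values

p = ChoiceParameter(
    name="batch_size",
    parameter_type=ParameterType.INT,
    values=[16, 32, 64],
    is_ordered=True,
)
values, ptype = transform_choice_values(p)
# Ordered numeric choices are rescaled to [0, 1], keeping relative
# spacing: approximately [0.0, 0.333, 1.0], with a float parameter type.
```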
ax.modelbridge.transforms.convert_metric_names¶
- class ax.modelbridge.transforms.convert_metric_names.ConvertMetricNames(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert all metric names to canonical name as specified on a multi_type_experiment.
For example, a multi-type experiment may have an offline simulator which attempts to approximate observations from some online system. We want to map the offline metric names to the corresponding online ones so the model can associate them.
This is done by replacing metric names in the data with the corresponding online metric names.
In the inverse transform, data will be mapped back onto the original metric names. By default, this transform is turned off. It can be enabled by passing the “perform_untransform” flag to the config.
- untransform_observations(observations: list[Observation]) list[Observation] [source]¶
Untransform observations.
Typically done in place. By default, the effort is split into separate backwards transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: untransformed observations.
- ax.modelbridge.transforms.convert_metric_names.convert_mt_observations(observations: list[Observation], experiment: MultiTypeExperiment) list[Observation] [source]¶
Apply ConvertMetricNames transform to observations for a MT experiment.
- ax.modelbridge.transforms.convert_metric_names.tconfig_from_mt_experiment(experiment: MultiTypeExperiment) dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] [source]¶
Generate the TConfig for this transform given a multi_type_experiment.
- Parameters:
experiment – The experiment from which to generate the config.
- Returns:
The transform config to pass into the ConvertMetricNames constructor.
ax.modelbridge.transforms.derelativize¶
- class ax.modelbridge.transforms.derelativize.Derelativize(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
Changes relative constraints to not-relative constraints using a plug-in estimate of the status quo value.
If status quo is in-design, uses model estimate at status quo. If not, uses raw observation at status quo.
Will raise an error if status quo is in-design and model fails to predict for it, unless the flag “use_raw_status_quo” is set to True in the transform config, in which case it will fall back to using the observed value in the training data.
Transform is done in-place.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Transform optimization config.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
optimization_config – The optimization config
Returns: transformed optimization config.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
- ax.modelbridge.transforms.derelativize.derelativize_bound(bound: float, sq_val: float) float [source]¶
Derelativize a bound. Note that a positive bound makes the derelativized bound larger than sq_val, i.e. derelativized bound > sq_val, regardless of the sign of sq_val.
- Parameters:
bound – The bound to derelativize in percentage terms, so a bound of 1 corresponds to a 1% increase compared to the status quo.
sq_val – The status quo value.
- Returns:
The derelativized bound.
Examples
>>> derelativize_bound(bound=1.0, sq_val=10.0)
10.1
>>> derelativize_bound(bound=-1.0, sq_val=10.0)
9.9
>>> derelativize_bound(bound=1.0, sq_val=-10.0)
-9.9
>>> derelativize_bound(bound=-1.0, sq_val=-10.0)
-10.1
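The doctest above implies the following arithmetic (a hedged sketch, not the library source): the status quo is shifted by bound percent of its magnitude.
```
def derelativize_bound_sketch(bound: float, sq_val: float) -> float:
    # Shift the status quo by `bound`% of its absolute value.
    return sq_val + abs(sq_val) * bound / 100.0

print(derelativize_bound_sketch(1.0, 10.0))   # 10.1
print(derelativize_bound_sketch(1.0, -10.0))  # -9.9
```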
ax.modelbridge.transforms.fill_missing_parameters¶
- class ax.modelbridge.transforms.fill_missing_parameters.FillMissingParameters(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
If a parameter is missing from an arm, fill it with the value from the dict given in the config.
- Config supports two options (see the sketch below):
fill_values: a dict of {parameter_name: value} to fill in for missing parameters. Required.
fill_None: a boolean indicating whether to fill in None values. Default is True. If False, parameters specified as None will remain None, and only parameters absent altogether will be filled.
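A hedged configuration sketch with a made-up parameter name:
```
from ax.modelbridge.transforms.fill_missing_parameters import (
    FillMissingParameters,
)

# Fill any arm missing "x2" (or with x2=None) with the value 0.0.
t = FillMissingParameters(config={"fill_values": {"x2": 0.0}})
```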
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place. This class implements the identity transform (does nothing).
- Parameters:
observation_features – Observation features
Returns: transformed observation features
ax.modelbridge.transforms.int_range_to_choice¶
- class ax.modelbridge.transforms.int_range_to_choice.IntRangeToChoice(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert a RangeParameter of type int to an ordered ChoiceParameter.
Transform is done in-place.
ax.modelbridge.transforms.int_to_float¶
- class ax.modelbridge.transforms.int_to_float.IntToFloat(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert a RangeParameter of type int to type float.
Uses either randomized rounding or default Python rounding, depending on the ‘rounding’ flag.
The min_choices config can be used to transform only the parameters with cardinality greater than or equal to min_choices, with the exception of log_scale parameters, which are always transformed.
Transform is done in-place.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
- class ax.modelbridge.transforms.int_to_float.LogIntToFloat(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
IntToFloat
Convert a log-scale RangeParameter of type int to type float.
The behavior of this transform mirrors
IntToFloat
with the key difference being that it only operates on log-scale parameters.
ax.modelbridge.transforms.ivw¶
- class ax.modelbridge.transforms.ivw.IVW(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
If an ObservationData object contains multiple observations of a metric, they are combined using inverse variance weighting.
- ax.modelbridge.transforms.ivw.ivw_metric_merge(obsd: ObservationData, conflicting_noiseless: str = 'warn') ObservationData [source]¶
Merge multiple observations of a metric with inverse variance weighting.
Correctly updates the covariance of the new merged estimates:
ybar1 = Sum_i w_i * y_i
ybar2 = Sum_j w_j * y_j
cov[ybar1, ybar2] = Sum_i Sum_j w_i * w_j * cov[y_i, y_j]
The weight w_i is infinite if the corresponding variance is 0. If exactly one variance is 0, the IVW estimate is the corresponding mean. If multiple measurements have 0 variance and their means all agree, the IVW estimate is that shared mean. If zero-variance measurements have differing means, behavior depends on the conflicting_noiseless argument: “ignore” and “warn” use the first of those measurements as the IVW estimate, “warn” additionally logs a warning, and “raise” raises an exception.
- Parameters:
obsd – An ObservationData object
conflicting_noiseless – “warn”, “ignore”, or “raise”
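A small worked example of the merge, with numbers chosen for illustration:

import numpy as np

from ax.core.observation import ObservationData
from ax.modelbridge.transforms.ivw import ivw_metric_merge

# Two measurements of metric "m" with variances 1.0 and 4.0. IVW weights are
# proportional to 1/variance, so the merged mean is
# (1.0 * 3.0 + 0.25 * 5.0) / (1.0 + 0.25) = 3.4 and the merged variance is
# 1 / (1.0 + 0.25) = 0.8.
obsd = ObservationData(
    metric_names=["m", "m"],
    means=np.array([3.0, 5.0]),
    covariance=np.diag([1.0, 4.0]),
)
merged = ivw_metric_merge(obsd)
# merged.means -> array([3.4]); merged.covariance -> array([[0.8]])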
ax.modelbridge.transforms.log¶
- class ax.modelbridge.transforms.log.Log(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Apply log base 10 to a float RangeParameter domain.
Transform is done in-place.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.log_y¶
- class ax.modelbridge.transforms.log_y.LogY(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: base_modelbridge.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
Apply (natural) log-transform to Y.
This essentially means that we model the observations as log-normally distributed. If the config specifies match_ci_width=True, a matching procedure based on the width of the CIs is used; otherwise (the default), the delta method is used.
Transform is applied only for the metrics specified in the transform config. Transform is done in-place.
NOTE: If the observation noise is not provided, we simply log-transform the mean as if the observation noise was zero. This can be inaccurate when the unknown observation noise is large.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: base_modelbridge.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Transform optimization config.
This is typically done in-place.
- Parameters:
optimization_config – The optimization config
Returns: transformed optimization config.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
- ax.modelbridge.transforms.log_y.lognorm_to_norm(mu_ln: ndarray[Any, dtype[_ScalarType_co]], Cov_ln: ndarray[Any, dtype[_ScalarType_co]]) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] [source]¶
Compute mean and covariance of a MVN from those of the associated log-MVN.
If Y is log-normal with mean mu_ln and covariance Cov_ln, then X ~ N(mu_n, Cov_n) with
mu_n_{i} = log(mu_ln_{i}) - 0.5 * log(1 + Cov_ln_{ii} / mu_ln_{i}**2)
Cov_n_{ij} = log(1 + Cov_ln_{ij} / (mu_ln_{i} * mu_ln_{j}))
NOTE: If the observation noise is not provided, we simply log-transform the mean as if the observation noise was zero. This can be inaccurate when the unknown observation noise is large.
- ax.modelbridge.transforms.log_y.match_ci_width(mean: ndarray[Any, dtype[_ScalarType_co]], variance: ndarray[Any, dtype[_ScalarType_co]], transform: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[_ScalarType_co]]], level: float = 0.95) ndarray[Any, dtype[_ScalarType_co]] [source]¶
- ax.modelbridge.transforms.log_y.norm_to_lognorm(mu_n: ndarray[Any, dtype[_ScalarType_co]], Cov_n: ndarray[Any, dtype[_ScalarType_co]]) tuple[ndarray[Any, dtype[_ScalarType_co]], ndarray[Any, dtype[_ScalarType_co]]] [source]¶
Compute mean and covariance of a log-MVN from its MVN sufficient statistics.
If X ~ N(mu_n, Cov_n) and Y = exp(X), then Y is log-normal with
mu_ln_{i} = exp(mu_n_{i} + 0.5 * Cov_n_{ii})
Cov_ln_{ij} = exp(mu_n_{i} + mu_n_{j} + 0.5 * (Cov_n_{ii} + Cov_n_{jj})) * (exp(Cov_n_{ij}) - 1)
NOTE: If the observation noise is not provided, we simply take the exponent of the mean as if the observation noise was zero. This can be inaccurate when the unknown observation noise is large.
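A round-trip sketch of the two moment conversions, with illustrative values:

import numpy as np

from ax.modelbridge.transforms.log_y import lognorm_to_norm, norm_to_lognorm

# Moments of a one-dimensional log-normal observation.
mu_ln = np.array([10.0])
cov_ln = np.array([[4.0]])

# Map to the moments of the underlying normal and back again.
mu_n, cov_n = lognorm_to_norm(mu_ln, cov_ln)
mu_rt, cov_rt = norm_to_lognorm(mu_n, cov_n)
assert np.allclose(mu_rt, mu_ln) and np.allclose(cov_rt, cov_ln)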
ax.modelbridge.transforms.logit¶
- class ax.modelbridge.transforms.logit.Logit(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Apply logit transform to a float RangeParameter domain.
Transform is done in-place.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.map_unit_x¶
- class ax.modelbridge.transforms.map_unit_x.MapUnitX(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
UnitX
A UnitX transform for map parameters in observation_features, identified as those that are not part of the search space. Since they are not part of the search space, the bounds are inferred from the set of observation features. Only observation features are transformed; all other objects undergo identity transform.
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform if the parameter exists in the observation feature. Note the extra existence check relative to UnitX.untransform_observation_features: when map-key features are used, they may not exist after generation or best-point computations.
ax.modelbridge.transforms.merge_repeated_measurements¶
- class ax.modelbridge.transforms.merge_repeated_measurements.MergeRepeatedMeasurements(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Merge repeated measurements to obtain one observation per arm.
Repeated measurements are merged via inverse variance weighting (e.g. over different trials). This intentionally ignores the trial index and assumes stationarity.
TODO: Support inverse variance weighting for correlated outcomes (full covariance).
Note: this is not reversible.
- transform_observations(observations: list[Observation]) list[Observation] [source]¶
Transform observations.
Typically done in place. By default, the effort is split into separate transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: transformed observations.
ax.modelbridge.transforms.metrics_as_task¶
- class ax.modelbridge.transforms.metrics_as_task.MetricsAsTask(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert metrics to a task parameter.
For each metric to be used as a task, the config must specify a list of the target metrics for that particular task metric. So,
config = {
    "metric_task_map": {
        "metric1": ["metric2", "metric3"],
        "metric2": ["metric3"],
    }
}
means that metric2 will be given additional task observations of metric1, and metric3 will be given additional task observations of both metric1 and metric2. Note here that metric2 and metric3 are the target tasks, and this map is from base tasks to target tasks.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
If transforming features without data, map them to the target.
- transform_observations(observations: list[Observation]) list[Observation] [source]¶
Transform observations.
Typically done in place. By default, the effort is split into separate transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: transformed observations.
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
- untransform_observations(observations: list[Observation]) list[Observation] [source]¶
Untransform observations.
Typically done in place. By default, the effort is split into separate backwards transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: untransformed observations.
ax.modelbridge.transforms.one_hot¶
- class ax.modelbridge.transforms.one_hot.OneHot(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert categorical parameters (unordered ChoiceParameters) to one-hot-encoded parameters.
Does not convert task parameters.
Parameters will be one-hot encoded, yielding a set of RangeParameters of type float on [0, 1]. If there are two values, a single RangeParameter will be yielded; otherwise there will be a new RangeParameter for each ChoiceParameter value.
In the reverse transform, floats can be converted to a one-hot encoded vector using one of two methods:
- Strict rounding: choose the maximum value. With levels [‘a’, ‘b’, ‘c’] and float values [0.2, 0.4, 0.3], the restored parameter is set to ‘b’. Ties are broken randomly, so values [0.2, 0.4, 0.4] are randomly set to ‘b’ or ‘c’.
- Randomized rounding: sample from the distribution. Float values [0.2, 0.4, 0.3] are transformed to ‘a’ w.p. 0.2/0.9, ‘b’ w.p. 0.4/0.9, or ‘c’ w.p. 0.3/0.9 (see the sketch below).
Type of rounding can be set using transform_config[‘rounding’] to either ‘strict’ or ‘randomized’. Defaults to strict.
Transform is done in-place.
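A minimal sketch of the randomized-rounding rule described above (illustrative only, not Ax’s internal implementation):

import numpy as np

# Normalize the one-hot floats into a categorical distribution and sample.
levels = np.array(["a", "b", "c"])
x = np.array([0.2, 0.4, 0.3])
restored = np.random.choice(levels, p=x / x.sum())
# restored is "a" w.p. 2/9, "b" w.p. 4/9, "c" w.p. 3/9.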
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.power_transform_y¶
- class ax.modelbridge.transforms.power_transform_y.PowerTransformY(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
Transform the values to look as normally distributed as possible.
This fits a power transform to the data with the goal of making the transformed values look as normally distributed as possible. We use Yeo-Johnson (https://www.stat.umn.edu/arc/yjpower.pdf), which can handle both positive and negative values.
While the transform seems to be quite robust, it usually makes sense to apply a bit of winsorization and to standardize the inputs before applying the power transform. The power transform automatically standardizes the data, so the transformed values remain standardized.
The transform can’t be inverted for all values, so we apply clipping to move values into the image of the transform. This behavior can be controlled via the clip_mean setting.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Transform optimization config.
This is typically done in-place.
- Parameters:
optimization_config – The optimization config
Returns: transformed optimization config.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
ax.modelbridge.transforms.remove_fixed¶
- class ax.modelbridge.transforms.remove_fixed.RemoveFixed(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Remove fixed parameters.
Fixed parameters should not be included in the SearchSpace. This transform removes these parameters, leaving only tunable parameters.
Transform is done in-place for observation features.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.metadata_to_float¶
- class ax.modelbridge.transforms.metadata_to_float.MetadataToFloat(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: 'modelbridge_module.base.ModelBridge' | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
This transform converts metadata from observation features into range (float) parameters for a search space.
It allows the user to specify a config with “parameters” as the key, where each entry maps a metadata key to a dictionary of keyword arguments for the corresponding RangeParameter constructor.
Transform is done in-place.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.map_key_to_float¶
- class ax.modelbridge.transforms.map_key_to_float.MapKeyToFloat(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
MetadataToFloat
This transform extracts the entry from the metadata field of the observation features corresponding to the default map key (MapMetric.map_key_info.key) and inserts it into the parameter field.
Inheriting from the MetadataToFloat transform, this transform also adds a range (float) parameter to the search space. Similarly, users can override the default behavior by specifying a config with “parameters” as the key, where each entry maps a metadata key to a dictionary of keyword arguments for the corresponding RangeParameter constructor.
Transform is done in-place.
ax.modelbridge.transforms.rounding¶
- ax.modelbridge.transforms.rounding.contains_constrained_integer(search_space: SearchSpace, transform_parameters: set[str]) bool [source]¶
Check if any integer parameters are present in parameter_constraints.
Order constraints are ignored since strict rounding preserves ordering.
- ax.modelbridge.transforms.rounding.randomized_onehot_round(x: ndarray[Any, dtype[_ScalarType_co]]) ndarray[Any, dtype[_ScalarType_co]] [source]¶
Randomized rounding of x to a one-hot vector. x should satisfy 0 <= x <= 1; if x includes negative values, they will be rounded to zero.
ax.modelbridge.transforms.search_space_to_choice¶
- class ax.modelbridge.transforms.search_space_to_choice.SearchSpaceToChoice(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Replaces the search space with a single choice parameter, whose values are the signatures of the arms observed in the data.
This transform is meant to be used with ThompsonSampler.
Choice parameter will be unordered unless config[“use_ordered”] specifies otherwise.
Transform is done in-place.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.standardize_y¶
- class ax.modelbridge.transforms.standardize_y.StandardizeY(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: base_modelbridge.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Standardize Y, separately for each metric.
Transform is done in-place.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: base_modelbridge.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Transform optimization config.
This is typically done in-place.
- Parameters:
optimization_config – The optimization config
Returns: transformed optimization config.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
ax.modelbridge.transforms.stratified_standardize_y¶
- class ax.modelbridge.transforms.stratified_standardize_y.StratifiedStandardizeY(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Standardize Y, separately for each metric and for each value of a ChoiceParameter.
The name of the parameter by which to stratify the standardization can be specified in config[“parameter_name”]. If not specified, a task parameter will be used if the search space contains exactly one task parameter; otherwise an exception is raised.
The stratification parameter must be fixed during generation if there are outcome constraints, in order to apply the standardization to the constraints.
Transform is done in-place.
- transform_observations(observations: list[Observation]) list[Observation] [source]¶
Transform observations.
Typically done in place. By default, the effort is split into separate transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: transformed observations.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Transform optimization config.
This is typically done in-place.
- Parameters:
optimization_config – The optimization config
Returns: transformed optimization config.
- untransform_observations(observations: list[Observation]) list[Observation] [source]¶
Untransform observations.
Typically done in place. By default, the effort is split into separate backwards transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: untransformed observations.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
ax.modelbridge.transforms.task_encode¶
- class ax.modelbridge.transforms.task_encode.TaskChoiceToIntTaskChoice(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
OrderedChoiceToIntegerRange
Convert task ChoiceParameters to integer-valued ChoiceParameters.
Parameters will be transformed to an integer ChoiceParameter with property is_task=True, mapping values from the original choice domain to a contiguous range of integers 0, 1, …, n_choices - 1.
In the inverse transform, parameters will be mapped back onto the original domain.
Transform is done in-place.
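Conceptually, the encoding and its inverse amount to a pair of lookup tables (choice values here are hypothetical):

# Forward: map each task choice to its integer index.
choices = ["backend", "frontend", "mobile"]
encode = {c: i for i, c in enumerate(choices)}  # {"backend": 0, "frontend": 1, "mobile": 2}
# Inverse: map the integers back onto the original domain.
decode = {i: c for c, i in encode.items()}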
- class ax.modelbridge.transforms.task_encode.TaskEncode(*args: Any, **kwargs: Any)[source]¶
Bases:
DeprecatedTransformMixin, TaskChoiceToIntTaskChoice
Deprecated alias for TaskChoiceToIntTaskChoice.
ax.modelbridge.transforms.time_as_feature¶
- class ax.modelbridge.transforms.time_as_feature.TimeAsFeature(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: 'modelbridge_module.base.ModelBridge' | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
Convert start time and duration into features that can be used for modeling.
If no end_time is available, the current time is used.
Duration is normalized to the unit cube.
Transform is done in-place.
TODO: revise this when better support for non-tunable features is added.
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.transform_to_new_sq¶
- class ax.modelbridge.transforms.transform_to_new_sq.TransformToNewSQ(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
BaseRelativize
Map relative values of one batch to SQ of another.
Will compute the relative metrics for each arm in each batch, and will then turn those back into raw metrics but using the status quo values set on the Modelbridge.
This is useful if batches are comparable on a relative scale but have an offset in their status quo. This is often approximately true for online experiments run in separate batches.
Note that relativization is done using the delta method, so it will not simply be the ratio of the means.
- property control_as_constant: bool¶
Whether or not the control is treated as a constant in the model.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Change the relative flag of the given relative optimization configuration to False. This is needed for the new opt config to pass ModelBridge validation, which requires a non-relativized opt config.
- Parameters:
optimization_config – Optimization configuration relative to status quo.
- Returns:
Optimization configuration relative to status quo with the relative flag set to False.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
ax.modelbridge.transforms.trial_as_task¶
- class ax.modelbridge.transforms.trial_as_task.TrialAsTask(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Convert trial to one or more task parameters.
How a trial is mapped to a parameter is specified with a map like {parameter_name: {trial_index: level_name}}. For example, {“trial_param1”: {0: “level1”, 1: “level1”, 2: “level2”}} will create a choice parameter “trial_param1” with is_task=True. Observations with trial 0 or 1 will have “trial_param1” set to “level1”, and those with trial 2 will have “trial_param1” set to “level2”. Multiple parameter names and mappings can be specified in this dict.
The trial level mapping can be specified in config[“trial_level_map”]. If not specified, defaults to a parameter with a level for every trial index.
For the reverse transform, if there are multiple mappings in the transform, the trial will not be set.
The created parameter will be given a target value that will default to the lowest trial index in the mapping, or can be provided in config[“target_trial”].
Will raise if trial not specified for every point in the training data.
Transform is done in-place.
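A hypothetical transform config combining the documented keys:

# Trials 0 and 1 share "level1" and trial 2 gets "level2"; trial 0 is the
# target value (here it is also the default, being the lowest trial index).
config = {
    "trial_level_map": {
        "trial_param1": {0: "level1", 1: "level1", 2: "level2"},
    },
    "target_trial": 0,
}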
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.unit_x¶
- class ax.modelbridge.transforms.unit_x.UnitX(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Map X to [0, 1]^d for RangeParameter of type float and not log scale.
Uses bounds l <= x <= u and sets x_tilde_i = (x_i - l_i) / (u_i - l_i). Constraints w^T x <= b are converted to g^T x_tilde <= h, where g_i = w_i * (u_i - l_i) and h = b - w^T l.
Transform is done in-place.
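A worked numeric instance of the constraint conversion above (values chosen for illustration):

# One parameter with bounds [2, 6] and the constraint 3 * x <= 12.
l, u, w, b = 2.0, 6.0, 3.0, 12.0
g = w * (u - l)                # 3 * 4 = 12
h = b - w * l                  # 12 - 6 = 6
x = 4.0                        # boundary point: 3 * 4 == 12
x_tilde = (x - l) / (u - l)    # 0.5
assert abs(g * x_tilde - h) < 1e-12  # the boundary maps to the boundary in [0, 1]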
- transform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Transform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features
Returns: transformed observation features
- untransform_observation_features(observation_features: list[ObservationFeatures]) list[ObservationFeatures] [source]¶
Untransform observation features.
This is typically done in-place.
- Parameters:
observation_features – Observation features in the transformed space
Returns: observation features in the original space
ax.modelbridge.transforms.utils¶
- class ax.modelbridge.transforms.utils.ClosestLookupDict(*args: Any, **kwargs: Any)[source]¶
Bases:
dict
A dictionary with numeric keys that looks up the closest key.
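A usage sketch of the closest-key lookup (values illustrative):

from ax.modelbridge.transforms.utils import ClosestLookupDict

d = ClosestLookupDict({1.0: "a", 2.0: "b"})
d[1.2]  # -> "a" (closest key is 1.0)
d[5.0]  # -> "b" (closest key is 2.0)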
- ax.modelbridge.transforms.utils.construct_new_search_space(search_space: SearchSpace, parameters: list[Parameter], parameter_constraints: list[ParameterConstraint] | None = None) SearchSpace [source]¶
Construct a search space with the transformed arguments.
If the search_space is a RobustSearchSpace, this will use its environmental variables and distributions, and remove the environmental variables from parameters before constructing.
- Parameters:
parameters – List of transformed parameter objects.
parameter_constraints – List of parameter constraints.
- Returns:
The new search space instance.
- ax.modelbridge.transforms.utils.derelativize_optimization_config_with_raw_status_quo(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge, observations: list[Observation] | None) OptimizationConfig [source]¶
Derelativize optimization_config using raw status-quo values.
- ax.modelbridge.transforms.utils.get_data(observation_data: list[ObservationData], metric_names: list[str] | None = None, raise_on_non_finite_data: bool = True) dict[str, list[float]] [source]¶
Extract all metrics if metric_names is None.
Raises a ValueError if any data is non-finite and raise_on_non_finite_data is True.
- Parameters:
observation_data – List of observation data.
metric_names – List of metric names.
raise_on_non_finite_data – If true, raises an exception on nan/inf.
- Returns:
A dictionary mapping metric names to lists of metric values.
- ax.modelbridge.transforms.utils.match_ci_width_truncated(mean: float, variance: float, transform: Callable[[float], float], level: float = 0.95, margin: float = 0.001, lower_bound: float = 0.0, upper_bound: float = 1.0, clip_mean: bool = False) tuple[float, float] [source]¶
Estimate a transformed variance using the match ci width method.
See log_y transform for the original. Here, bounds are forced to lie within a [lower_bound, upper_bound] interval after transformation.
ax.modelbridge.transforms.winsorize¶
- class ax.modelbridge.transforms.winsorize.Winsorize(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: dict[str, int | float | str | AcquisitionFunction | list[str] | dict[int, Any] | dict[str, Any] | OptimizationConfig | WinsorizationConfig | None] | None = None)[source]¶
Bases:
Transform
Clip the mean values for each metric to lie within the limits provided in the config. The config can contain either or both of two keys:
- "winsorization_config": either a single WinsorizationConfig, which, if provided, will be used for all metrics; or a mapping Dict[str, WinsorizationConfig] between each metric name and its WinsorizationConfig.
- "derelativize_with_raw_status_quo": whether to use the raw status-quo value for any derelativization. Note this defaults to False, which is unsupported and simply fails if derelativization is necessary; the user must specify derelativize_with_raw_status_quo = True in order for derelativization to succeed. This must match the use_raw_status_quo value in the Derelativize config if that transform is used.
For example,
{"winsorization_config": WinsorizationConfig(lower_quantile_margin=0.3)}
will specify the same 30% winsorization from below for all metrics, whereas
{
    "winsorization_config": {
        "metric_1": WinsorizationConfig(lower_quantile_margin=0.2),
        "metric_2": WinsorizationConfig(upper_quantile_margin=0.1),
    }
}
will winsorize 20% from below for metric_1 and 10% from above for metric_2. Additional metrics won’t be winsorized.
You can also determine the winsorization cutoffs automatically, without having an OptimizationConfig, by passing in AUTO_WINS_QUANTILE for the quantile you want to winsorize. For example, to automatically winsorize large values: "m1": WinsorizationConfig(upper_quantile_margin=AUTO_WINS_QUANTILE). This may be useful when fitting models in a notebook where there is no corresponding OptimizationConfig.
Additionally, you can pass in winsorization boundaries lower_boundary and upper_boundary that specify a maximum allowable amount of winsorization. This is discouraged and will eventually be deprecated, as we strongly encourage users to allow Winsorize to automatically infer these boundaries from the optimization config.
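A combined config sketch based on the options above (metric names are hypothetical; the WinsorizationConfig import path is assumed):

from ax.models.winsorization_config import WinsorizationConfig

# Per-metric winsorization plus raw status-quo derelativization.
config = {
    "winsorization_config": {
        "metric_1": WinsorizationConfig(lower_quantile_margin=0.2),
        "metric_2": WinsorizationConfig(upper_quantile_margin=0.1),
    },
    "derelativize_with_raw_status_quo": True,
}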
ax.modelbridge.transforms.relativize¶
- class ax.modelbridge.transforms.relativize.BaseRelativize(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
Transform
Change the relative flag of the given relative optimization configuration to False. This is needed for the new opt config to pass ModelBridge validation, which requires a non-relativized opt config.
Also transforms absolute data and opt configs to relative.
Requires a modelbridge with a status quo set to work.
The abstract property control_as_constant is set in the subclasses Relativize and RelativizeWithConstantControl so that each handles the transform and untransform appropriately.
- abstract property control_as_constant: bool¶
Whether or not the control is treated as a constant in the model.
- transform_observations(observations: list[Observation]) list[Observation] [source]¶
Transform observations.
Typically done in place. By default, the effort is split into separate transformations of the features and the data.
- Parameters:
observations – Observations.
Returns: transformed observations.
- transform_optimization_config(optimization_config: OptimizationConfig, modelbridge: modelbridge_module.base.ModelBridge | None = None, fixed_features: ObservationFeatures | None = None) OptimizationConfig [source]¶
Change the relative flag of the given relative optimization configuration to False. This is needed for the new opt config to pass ModelBridge validation, which requires a non-relativized opt config.
- Parameters:
optimization_config – Optimization configuration relative to status quo.
- Returns:
Optimization configuration relative to status quo with the relative flag set to False.
- untransform_observations(observations: list[Observation]) list[Observation] [source]¶
Unrelativize the data.
- untransform_outcome_constraints(outcome_constraints: list[OutcomeConstraint], fixed_features: ObservationFeatures | None = None) list[OutcomeConstraint] [source]¶
Untransform outcome constraints.
If outcome constraints are modified in transform_optimization_config, this method should reverse the portion of that transformation that was applied to the outcome constraints.
- class ax.modelbridge.transforms.relativize.Relativize(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
BaseRelativize
Relative transform implemented via the delta method.
Note that not all valid-valued relativized means and standard errors can be unrelativized when control_as_constant=True. See utils.stats.statstools.unrelativize for more details.
- class ax.modelbridge.transforms.relativize.RelativizeWithConstantControl(search_space: SearchSpace | None = None, observations: list[Observation] | None = None, modelbridge: modelbridge_module.base.ModelBridge | None = None, config: TConfig | None = None)[source]¶
Bases:
BaseRelativize
Relative transform that treats the control metric as a constant when transforming and untransforming the data.
- ax.modelbridge.transforms.relativize.get_metric_index(data: ObservationData, metric_name: str) int [source]¶
Get the index of a metric in the ObservationData.