Multi-Objective Optimization Ax API
Using the Service API
For multi-objective optimization (MOO) in the AxClient, objectives are specified
through the ObjectiveProperties dataclass. An ObjectiveProperties requires a boolean
minimize, and also accepts an optional floating-point threshold. If a threshold is
not specified, Ax will infer it using heuristics. If you know the region of interest
(from specs or prior knowledge), specifying thresholds explicitly is preferable to
inferring them; if you would have to guess, letting Ax infer them is preferable.
To learn more about how to choose a threshold, see Set Objective Thresholds to focus candidate generation in a region of interest. See the Service API Tutorial for more information on running experiments with the Service API.
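To make the threshold semantics concrete, here is a small, self-contained sketch (the objective names and sample values are illustrative, not produced by Ax): with maximization, a threshold is the worst acceptable value for that objective, and only points that beat every threshold fall in the region of interest that candidate generation focuses on.

```python
# Hypothetical reference point: worst acceptable value per (maximized) objective.
thresholds = {"a": -20.0, "b": -8.0}

# Illustrative observed outcomes.
observed = [
    {"a": -5.6, "b": -7.7},    # beats both thresholds -> in region of interest
    {"a": -56.8, "b": -6.5},   # fails the "a" threshold
    {"a": -10.9, "b": -10.2},  # fails the "b" threshold
]

def in_region_of_interest(point, thresholds):
    """True if the point beats every (maximization) threshold."""
    return all(point[name] > t for name, t in thresholds.items())

kept = [p for p in observed if in_region_of_interest(p, thresholds)]
```

Only the first point survives the filter; the other two fail one threshold each, which is exactly why a well-chosen threshold steers the optimizer away from uninteresting trade-offs.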
import sys
in_colab = 'google.colab' in sys.modules
if in_colab:
    %pip install ax-platform
import torch
from ax.plot.pareto_frontier import plot_pareto_frontier
from ax.plot.pareto_utils import compute_posterior_pareto_frontier
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties
# Plotting imports and initialization
from ax.utils.notebook.plotting import init_notebook_plotting, render
from botorch.test_functions.multi_objective import BraninCurrin
import plotly.io as pio
init_notebook_plotting()
if in_colab:
    pio.renderers.default = "colab"
[INFO 02-03 18:54:09] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
[INFO 02-03 18:54:09] ax.utils.notebook.plotting: Please see
(https://ax.dev/tutorials/visualizations.html#Fix-for-plots-that-are-not-rendering)
if visualizations are not rendering.
# Load our sample 2-objective problem
branin_currin = BraninCurrin(negate=True).to(
    dtype=torch.double,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
ax_client = AxClient()
ax_client.create_experiment(
    name="moo_experiment",
    parameters=[
        {
            "name": f"x{i+1}",
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for i in range(2)
    ],
    objectives={
        # `threshold` arguments are optional
        "a": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[1]),
    },
    overwrite_existing_experiment=True,
    is_test=True,
)
[INFO 02-03 18:54:09] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the verbose_logging argument to False. Note that float values in the logs are rounded to 6 decimal points.
[INFO 02-03 18:54:09] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 02-03 18:54:09] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 02-03 18:54:09] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 02-03 18:54:09] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False
[INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
[INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=2 num_trials=None use_batch_trials=False
[INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=5
[INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=5
[INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: verbose, disable_progbar, and jit_compile are not yet supported when using choose_generation_strategy with ModularBoTorchModel, dropping these arguments.
[INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 5 trials, BoTorch for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting.
Create an Evaluation Function
In MOO experiments, the evaluation function can be any function that takes a dict
mapping parameter names to values and returns a dict mapping objective names to
tuples of mean and SEM values.
def evaluate(parameters):
    evaluation = branin_currin(
        torch.tensor([parameters.get("x1"), parameters.get("x2")])
    )
    # In our case, standard error is 0, since we are computing a synthetic function.
    # Set standard error to None if the noise level is unknown.
    return {"a": (evaluation[0].item(), 0.0), "b": (evaluation[1].item(), 0.0)}
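The same contract can be illustrated without torch or BoTorch: any callable that maps a parameter dict to `{objective_name: (mean, sem)}` works. A toy stand-in (the quadratic objectives below are purely illustrative, not the Branin-Currin problem):

```python
def toy_evaluate(parameters):
    # Two hypothetical objectives of two parameters; SEM is 0.0 because the
    # function is deterministic (use None if the noise level is unknown).
    x1, x2 = parameters["x1"], parameters["x2"]
    a = -(x1 - 0.5) ** 2 - (x2 - 0.5) ** 2  # maximized at (0.5, 0.5)
    b = -x1 * x2                            # maximized along the axes
    return {"a": (a, 0.0), "b": (b, 0.0)}
```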
Run Optimization
for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
/home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.639492, 'x2': 0.556009} using model Sobol.
[INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 0 with data: {'a': (-56.800846, 0.0), 'b': (-6.504035, 0.0)}.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.22491, 'x2': 0.223322} using model Sobol.
[INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 1 with data: {'a': (-40.606293, 0.0), 'b': (-12.322504, 0.0)}.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.300066, 'x2': 0.960166} using model Sobol.
[INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 2 with data: {'a': (-75.828926, 0.0), 'b': (-5.42404, 0.0)}.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.839709, 'x2': 0.260504} using model Sobol.
[INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 3 with data: {'a': (-18.921333, 0.0), 'b': (-8.860395, 0.0)}.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.941101, 'x2': 0.809305} using model Sobol.
[INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 4 with data: {'a': (-99.10479, 0.0), 'b': (-4.716988, 0.0)}.
[INFO 02-03 18:54:11] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 1.0, 'x2': 0.443529} using model BoTorch.
[INFO 02-03 18:54:11] ax.service.ax_client: Completed trial 5 with data: {'a': (-15.265464, 0.0), 'b': (-6.882358, 0.0)}.
[INFO 02-03 18:54:12] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.81839, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:12] ax.service.ax_client: Completed trial 6 with data: {'a': (-204.06517, 0.0), 'b': (-4.102039, 0.0)}.
[INFO 02-03 18:54:13] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.90544, 'x2': 0.600836} using model BoTorch.
[INFO 02-03 18:54:13] ax.service.ax_client: Completed trial 7 with data: {'a': (-54.835285, 0.0), 'b': (-5.806404, 0.0)}.
[INFO 02-03 18:54:15] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.450589, 'x2': 0.763866} using model BoTorch.
[INFO 02-03 18:54:15] ax.service.ax_client: Completed trial 8 with data: {'a': (-69.947433, 0.0), 'b': (-5.797382, 0.0)}.
[INFO 02-03 18:54:16] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.0, 'x2': 0.753291} using model BoTorch.
[INFO 02-03 18:54:16] ax.service.ax_client: Completed trial 9 with data: {'a': (-47.392197, 0.0), 'b': (-1.455256, 0.0)}.
[INFO 02-03 18:54:17] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 1.0, 'x2': 0.0} using model BoTorch.
[INFO 02-03 18:54:17] ax.service.ax_client: Completed trial 10 with data: {'a': (-10.960894, 0.0), 'b': (-10.179487, 0.0)}.
[INFO 02-03 18:54:19] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.187868, 'x2': 0.60096} using model BoTorch.
[INFO 02-03 18:54:19] ax.service.ax_client: Completed trial 11 with data: {'a': (-5.642111, 0.0), 'b': (-7.740734, 0.0)}.
[INFO 02-03 18:54:20] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.0, 'x2': 0.493925} using model BoTorch.
[INFO 02-03 18:54:20] ax.service.ax_client: Completed trial 12 with data: {'a': (-108.342644, 0.0), 'b': (-1.909854, 0.0)}.
[INFO 02-03 18:54:21] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.024994, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:21] ax.service.ax_client: Completed trial 13 with data: {'a': (-10.42739, 0.0), 'b': (-2.187852, 0.0)}.
[INFO 02-03 18:54:22] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.088597, 'x2': 0.916316} using model BoTorch.
[INFO 02-03 18:54:22] ax.service.ax_client: Completed trial 14 with data: {'a': (-1.738521, 0.0), 'b': (-4.522746, 0.0)}.
[INFO 02-03 18:54:24] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.053405, 'x2': 0.963678} using model BoTorch.
[INFO 02-03 18:54:24] ax.service.ax_client: Completed trial 15 with data: {'a': (-5.538787, 0.0), 'b': (-3.317845, 0.0)}.
[INFO 02-03 18:54:26] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.008754, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:26] ax.service.ax_client: Completed trial 16 with data: {'a': (-14.77444, 0.0), 'b': (-1.53793, 0.0)}.
[INFO 02-03 18:54:28] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.071137, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:28] ax.service.ax_client: Completed trial 17 with data: {'a': (-3.801824, 0.0), 'b': (-3.775479, 0.0)}.
[INFO 02-03 18:54:31] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.039145, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:31] ax.service.ax_client: Completed trial 18 with data: {'a': (-7.456717, 0.0), 'b': (-2.724951, 0.0)}.
[INFO 02-03 18:54:33] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.109206, 'x2': 0.870326} using model BoTorch.
[INFO 02-03 18:54:33] ax.service.ax_client: Completed trial 19 with data: {'a': (-0.689599, 0.0), 'b': (-5.174188, 0.0)}.
[INFO 02-03 18:54:36] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 0.077207, 'x2': 0.952719} using model BoTorch.
[INFO 02-03 18:54:36] ax.service.ax_client: Completed trial 20 with data: {'a': (-2.730387, 0.0), 'b': (-4.092929, 0.0)}.
[INFO 02-03 18:54:39] ax.service.ax_client: Generated new trial 21 with parameters {'x1': 0.016634, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:39] ax.service.ax_client: Completed trial 21 with data: {'a': (-12.544319, 0.0), 'b': (-1.85649, 0.0)}.
[INFO 02-03 18:54:41] ax.service.ax_client: Generated new trial 22 with parameters {'x1': 0.553087, 'x2': 0.0} using model BoTorch.
[INFO 02-03 18:54:41] ax.service.ax_client: Completed trial 22 with data: {'a': (-5.167106, 0.0), 'b': (-11.387817, 0.0)}.
[INFO 02-03 18:54:44] ax.service.ax_client: Generated new trial 23 with parameters {'x1': 0.031798, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:44] ax.service.ax_client: Completed trial 23 with data: {'a': (-8.900707, 0.0), 'b': (-2.450435, 0.0)}.
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal')]
Trying again with a new set of initial conditions.
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
[INFO 02-03 18:54:52] ax.service.ax_client: Generated new trial 24 with parameters {'x1': 0.047113, 'x2': 1.0} using model BoTorch.
[INFO 02-03 18:54:52] ax.service.ax_client: Completed trial 24 with data: {'a': (-6.138559, 0.0), 'b': (-3.010158, 0.0)}.
Plot Pareto Frontier
objectives = ax_client.experiment.optimization_config.objective.objectives
frontier = compute_posterior_pareto_frontier(
    experiment=ax_client.experiment,
    data=ax_client.experiment.fetch_data(),
    primary_objective=objectives[1].metric,
    secondary_objective=objectives[0].metric,
    absolute_metrics=["a", "b"],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
Deep Dive
In the rest of this tutorial, we will show two algorithms available in Ax for multi-objective optimization and visualize how they compare to each other and to quasi-random search.
MOO covers the case where we care about multiple outcomes in our experiment but we do
not know beforehand a specific weighting of those objectives (covered by
ScalarizedObjective) or a specific constraint on one objective (covered by
OutcomeConstraints) that will produce the best result.
The solution in this case is to find the whole Pareto frontier: a surface in outcome space containing points that cannot be improved in any one outcome without worsening another. This shows us the trade-offs between objectives that we can choose to make.
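The dominance relation behind the frontier is simple to state: with both objectives maximized, one point dominates another if it is at least as good in every outcome and strictly better in at least one. A minimal pure-Python sketch (the sample points below are illustrative, not taken from the experiment above):

```python
def dominates(p, q):
    """True if p dominates q (both objectives maximized)."""
    return all(pi >= qi for pi, qi in zip(p, q)) and any(
        pi > qi for pi, qi in zip(p, q)
    )

def pareto_frontier(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Illustrative (a, b) outcome pairs, both to be maximized.
pts = [(-5.6, -7.7), (-0.7, -5.2), (-10.9, -10.2), (-47.4, -1.5), (-5.2, -11.4)]
front = pareto_frontier(pts)
```

The surviving points are exactly the trade-offs: each non-dominated point is better than every other frontier point in one objective and worse in the other. `compute_posterior_pareto_frontier` does the analogous computation on the model's posterior rather than on raw observations.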
Problem Statement
Optimize a list of M objective functions $f^{(1)}(x), \ldots, f^{(M)}(x)$ over a bounded search space $\mathcal{X} \subset \mathbb{R}^d$.
We assume the $f^{(i)}$ are expensive-to-evaluate black-box functions with no known analytical expression and no observed gradients. For instance, consider a machine learning model where we are interested in maximizing accuracy and minimizing inference time, with $\mathcal{X}$ the set of possible model configurations.