Multi-Objective Optimization Ax API
For multi-objective optimization (MOO) in the AxClient, objectives are specified through the ObjectiveProperties dataclass. An ObjectiveProperties requires a boolean minimize and also accepts an optional floating-point threshold. If a threshold is not specified, Ax will infer one using heuristics. If you know the region of interest (because you have specs or prior knowledge), specifying the thresholds is preferable to inferring them; if you would only be guessing, letting Ax infer them is preferable.
To learn more about how to choose a threshold, see Set Objective Thresholds to focus candidate generation in a region of interest.
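For concreteness, here is a minimal sketch (not part of this tutorial's experiment; the metric names and the 250 ms bound are made up) showing one objective with an explicit threshold and one whose threshold Ax will infer:
from ax.service.utils.instantiation import ObjectiveProperties
objectives = {
    # Maximize accuracy; no threshold given, so Ax infers one heuristically.
    "accuracy": ObjectiveProperties(minimize=False),
    # Minimize latency; outcomes above 250 ms are not of interest.
    "latency_ms": ObjectiveProperties(minimize=True, threshold=250.0),
}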
See the Service API Tutorial for more information on running experiments with the Service API.
import sys
in_colab = 'google.colab' in sys.modules
if in_colab:
    %pip install ax-platform
import torch
from ax.plot.pareto_frontier import plot_pareto_frontier
from ax.plot.pareto_utils import compute_posterior_pareto_frontier
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties
from ax.utils.notebook.plotting import init_notebook_plotting, render
from botorch.test_functions.multi_objective import BraninCurrin
import plotly.io as pio
init_notebook_plotting()
if in_colab:
    pio.renderers.default = "colab"
Out: [INFO 02-03 18:54:09] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
Out: [INFO 02-03 18:54:09] ax.utils.notebook.plotting: Please see
(https://ax.dev/tutorials/visualizations.html#Fix-for-plots-that-are-not-rendering)
if visualizations are not rendering.
branin_currin = BraninCurrin(negate=True).to(
    dtype=torch.double,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
ax_client = AxClient()
ax_client.create_experiment(
name="moo_experiment",
parameters=[
{
"name": f"x{i+1}",
"type": "range",
"bounds": [0.0, 1.0],
}
for i in range(2)
],
objectives={
"a": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[0]),
"b": ObjectiveProperties(minimize=False, threshold=branin_currin.ref_point[1]),
},
overwrite_existing_experiment=True,
is_test=True,
)
Out: [INFO 02-03 18:54:09] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the verbose_logging argument to False. Note that float values in the logs are rounded to 6 decimal points.
Out: [INFO 02-03 18:54:09] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:54:09] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:54:09] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
Out: [INFO 02-03 18:54:09] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False
Out: [INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
Out: [INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=2 num_trials=None use_batch_trials=False
Out: [INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=5
Out: [INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=5
Out: [INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: verbose, disable_progbar, and jit_compile are not yet supported when using choose_generation_strategy with ModularBoTorchModel, dropping these arguments.
Out: [INFO 02-03 18:54:09] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 5 trials, BoTorch for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting.
In MOO experiments, the evaluation function can be any function that takes in a dict of parameter names mapped to values and returns a dict of objective names mapped to a tuple of mean and SEM values.
def evaluate(parameters):
    evaluation = branin_currin(
        torch.tensor([parameters.get("x1"), parameters.get("x2")])
    )
    return {"a": (evaluation[0].item(), 0.0), "b": (evaluation[1].item(), 0.0)}
for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.639492, 'x2': 0.556009} using model Sobol.
Out: [INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 0 with data: {'a': (-56.800846, 0.0), 'b': (-6.504035, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.22491, 'x2': 0.223322} using model Sobol.
Out: [INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 1 with data: {'a': (-40.606293, 0.0), 'b': (-12.322504, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.300066, 'x2': 0.960166} using model Sobol.
Out: [INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 2 with data: {'a': (-75.828926, 0.0), 'b': (-5.42404, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.839709, 'x2': 0.260504} using model Sobol.
Out: [INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 3 with data: {'a': (-18.921333, 0.0), 'b': (-8.860395, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:54:10] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.941101, 'x2': 0.809305} using model Sobol.
Out: [INFO 02-03 18:54:10] ax.service.ax_client: Completed trial 4 with data: {'a': (-99.10479, 0.0), 'b': (-4.716988, 0.0)}.
Out: [INFO 02-03 18:54:11] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 1.0, 'x2': 0.443529} using model BoTorch.
Out: [INFO 02-03 18:54:11] ax.service.ax_client: Completed trial 5 with data: {'a': (-15.265464, 0.0), 'b': (-6.882358, 0.0)}.
Out: [INFO 02-03 18:54:12] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.81839, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:12] ax.service.ax_client: Completed trial 6 with data: {'a': (-204.06517, 0.0), 'b': (-4.102039, 0.0)}.
Out: [INFO 02-03 18:54:13] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.90544, 'x2': 0.600836} using model BoTorch.
Out: [INFO 02-03 18:54:13] ax.service.ax_client: Completed trial 7 with data: {'a': (-54.835285, 0.0), 'b': (-5.806404, 0.0)}.
Out: [INFO 02-03 18:54:15] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.450589, 'x2': 0.763866} using model BoTorch.
Out: [INFO 02-03 18:54:15] ax.service.ax_client: Completed trial 8 with data: {'a': (-69.947433, 0.0), 'b': (-5.797382, 0.0)}.
Out: [INFO 02-03 18:54:16] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.0, 'x2': 0.753291} using model BoTorch.
Out: [INFO 02-03 18:54:16] ax.service.ax_client: Completed trial 9 with data: {'a': (-47.392197, 0.0), 'b': (-1.455256, 0.0)}.
Out: [INFO 02-03 18:54:17] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 1.0, 'x2': 0.0} using model BoTorch.
Out: [INFO 02-03 18:54:17] ax.service.ax_client: Completed trial 10 with data: {'a': (-10.960894, 0.0), 'b': (-10.179487, 0.0)}.
Out: [INFO 02-03 18:54:19] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.187868, 'x2': 0.60096} using model BoTorch.
Out: [INFO 02-03 18:54:19] ax.service.ax_client: Completed trial 11 with data: {'a': (-5.642111, 0.0), 'b': (-7.740734, 0.0)}.
Out: [INFO 02-03 18:54:20] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.0, 'x2': 0.493925} using model BoTorch.
Out: [INFO 02-03 18:54:20] ax.service.ax_client: Completed trial 12 with data: {'a': (-108.342644, 0.0), 'b': (-1.909854, 0.0)}.
Out: [INFO 02-03 18:54:21] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.024994, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:21] ax.service.ax_client: Completed trial 13 with data: {'a': (-10.42739, 0.0), 'b': (-2.187852, 0.0)}.
Out: [INFO 02-03 18:54:22] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.088597, 'x2': 0.916316} using model BoTorch.
Out: [INFO 02-03 18:54:22] ax.service.ax_client: Completed trial 14 with data: {'a': (-1.738521, 0.0), 'b': (-4.522746, 0.0)}.
Out: [INFO 02-03 18:54:24] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.053405, 'x2': 0.963678} using model BoTorch.
Out: [INFO 02-03 18:54:24] ax.service.ax_client: Completed trial 15 with data: {'a': (-5.538787, 0.0), 'b': (-3.317845, 0.0)}.
Out: [INFO 02-03 18:54:26] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.008754, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:26] ax.service.ax_client: Completed trial 16 with data: {'a': (-14.77444, 0.0), 'b': (-1.53793, 0.0)}.
Out: [INFO 02-03 18:54:28] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.071137, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:28] ax.service.ax_client: Completed trial 17 with data: {'a': (-3.801824, 0.0), 'b': (-3.775479, 0.0)}.
Out: [INFO 02-03 18:54:31] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.039145, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:31] ax.service.ax_client: Completed trial 18 with data: {'a': (-7.456717, 0.0), 'b': (-2.724951, 0.0)}.
Out: [INFO 02-03 18:54:33] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.109206, 'x2': 0.870326} using model BoTorch.
Out: [INFO 02-03 18:54:33] ax.service.ax_client: Completed trial 19 with data: {'a': (-0.689599, 0.0), 'b': (-5.174188, 0.0)}.
Out: [INFO 02-03 18:54:36] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 0.077207, 'x2': 0.952719} using model BoTorch.
Out: [INFO 02-03 18:54:36] ax.service.ax_client: Completed trial 20 with data: {'a': (-2.730387, 0.0), 'b': (-4.092929, 0.0)}.
Out: [INFO 02-03 18:54:39] ax.service.ax_client: Generated new trial 21 with parameters {'x1': 0.016634, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:39] ax.service.ax_client: Completed trial 21 with data: {'a': (-12.544319, 0.0), 'b': (-1.85649, 0.0)}.
Out: [INFO 02-03 18:54:41] ax.service.ax_client: Generated new trial 22 with parameters {'x1': 0.553087, 'x2': 0.0} using model BoTorch.
Out: [INFO 02-03 18:54:41] ax.service.ax_client: Completed trial 22 with data: {'a': (-5.167106, 0.0), 'b': (-11.387817, 0.0)}.
Out: [INFO 02-03 18:54:44] ax.service.ax_client: Generated new trial 23 with parameters {'x1': 0.031798, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:44] ax.service.ax_client: Completed trial 23 with data: {'a': (-8.900707, 0.0), 'b': (-2.450435, 0.0)}.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal')]
Trying again with a new set of initial conditions.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
[INFO 02-03 18:54:52] ax.service.ax_client: Generated new trial 24 with parameters {'x1': 0.047113, 'x2': 1.0} using model BoTorch.
Out: [INFO 02-03 18:54:52] ax.service.ax_client: Completed trial 24 with data: {'a': (-6.138559, 0.0), 'b': (-3.010158, 0.0)}.
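Once the trials are complete, the Service API can also report the Pareto optimal parameterizations directly. A minimal sketch, assuming the installed Ax version exposes AxClient.get_pareto_optimal_parameters (which maps trial index to the parameterization and its model-predicted objective values):
pareto_optimal = ax_client.get_pareto_optimal_parameters()
for trial_index, (parameterization, (means, covariances)) in pareto_optimal.items():
    # Print each non-dominated arm and its predicted objective means.
    print(trial_index, parameterization, means)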
objectives = ax_client.experiment.optimization_config.objective.objectives
frontier = compute_posterior_pareto_frontier(
    experiment=ax_client.experiment,
    data=ax_client.experiment.fetch_data(),
    primary_objective=objectives[1].metric,
    secondary_objective=objectives[0].metric,
    absolute_metrics=["a", "b"],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
Deep Dive
In the rest of this tutorial, we will show two algorithms available in Ax for multi-objective optimization and visualize how they compare to each other and to quasirandom search.
MOO covers the case where we care about multiple outcomes in our experiment but we do not know beforehand a specific weighting of those objectives (covered by ScalarizedObjective) or a specific constraint on one objective (covered by OutcomeConstraints) that would produce the best result.
The solution in this case is to find a whole Pareto frontier: a surface in outcome space containing points that cannot be improved in one outcome without worsening another. This shows us the tradeoffs between objectives that we can choose to make.
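If you do know an appropriate weighting or constraint ahead of time, those alternatives look roughly like the following sketch (illustrative only; metric_a and metric_b stand in for metrics defined on your experiment, and the weights and bound are made-up values):
from ax.core.objective import ScalarizedObjective
from ax.core.outcome_constraint import OutcomeConstraint
from ax.core.types import ComparisonOp
# A fixed 70/30 weighting of the two outcomes collapses MOO to a single objective.
scalarized = ScalarizedObjective(metrics=[metric_a, metric_b], weights=[0.7, 0.3], minimize=False)
# Alternatively, optimize one metric subject to the other staying at or above a bound.
constraint = OutcomeConstraint(metric=metric_b, op=ComparisonOp.GEQ, bound=-5.0, relative=False)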
The goal is to optimize a list of $M$ objective functions $\big(f^{(1)}(x), \ldots, f^{(M)}(x)\big)$ over a bounded search space $\mathcal{X} \subset \mathbb{R}^d$.
We assume the $f^{(i)}$ are expensive-to-evaluate black-box functions with no known analytical expression and no observed gradients. For instance, a machine learning model where we're interested in maximizing accuracy and minimizing inference time, with $\mathcal{X}$ the space of possible configurations.
In a multi-objective optimization problem, there typically is no single best solution.
Rather, the goal is to identify the set of Pareto optimal solutions such that any
improvement in one objective means deteriorating another. Provided with the Pareto set,
decision-makers can select an objective trade-off according to their preferences. In the
plot below, the red dots are the Pareto optimal solutions (assuming both objectives are
to be minimized). 
Given a reference point $r \in \mathbb{R}^M$, which we represent as a list of $M$ ObjectiveThresholds, one for each coordinate, the hypervolume (HV) of a Pareto set $P = \{f(x_i)\}_{i=1}^{|P|}$ is the volume of the region dominated by $P$ (i.e., outperformed by some point of $P$ in every one of our $M$ objectives) and bounded from below by the reference point $r$. The reference point should be set slightly worse (10% is reasonable) than the worst value of each objective that a decision maker would tolerate. In the figure below, the grey area is the hypervolume in this 2-objective problem.

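To make the hypervolume definition concrete, here is a small worked example with made-up values (two non-dominated points and a reference point, not taken from this experiment), computed with BoTorch's Hypervolume utility:
import torch
from botorch.utils.multi_objective.hypervolume import Hypervolume
ref_point = torch.tensor([-20.0, -6.0])  # worst tolerable value for each (maximized) objective
pareto_Y = torch.tensor([[-5.0, -4.0], [-10.0, -2.0]])  # two points that each dominate ref_point
hv = Hypervolume(ref_point=ref_point).compute(pareto_Y)
# Union of the two dominated boxes: 15 * 2 + 10 * (4 - 2) = 50
print(hv)  # 50.0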
The plots below show three different sets of points generated by the qNEHVI [1] algorithm with different objective thresholds (also known as reference points). Note that here we use absolute thresholds, but thresholds can also be relative to a status_quo arm.
The first plot shows the points without the ObjectiveThresholds visible (they're set far below the origin of the graph).
The second shows the points generated with (-18, -6) as thresholds. The regions violating the thresholds are greyed out. Only the white region in the upper right exceeds both thresholds; points in this region dominate the intersection of the thresholds (this intersection is the reference point), and only these points contribute to the hypervolume objective. A few exploration points fall outside the valid region, but almost all the rest lie within it.
The third shows points generated with a very strict pair of thresholds, (-18, -2). Only the white region in the upper right exceeds both thresholds. Many points do not lie in the dominating region, but the points are still more concentrated there than in the second example.

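As noted above, thresholds can also be expressed relative to a status_quo arm rather than as absolute values. A minimal sketch, assuming the experiment has a status_quo arm attached and that metric_a stands in for one of your metrics (in Ax, a relative bound is interpreted as a percent change versus the status quo, so -10 below reads as "no more than 10% worse than the status quo"):
from ax.core.optimization_config import ObjectiveThreshold
relative_threshold = ObjectiveThreshold(metric=metric_a, bound=-10.0, relative=True)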
A deeper explanation of the qNEHVI [1] and qNParEGO [2] algorithms this notebook explores can be found in the references below:
[1]
S. Daulton, M. Balandat, and E. Bakshy. Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement. Advances in Neural Information Processing Systems 34, 2021.
[2]
S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.
In addition, the underlying BoTorch implementation has a researcher-oriented tutorial at
https://botorch.org/tutorials/multi_objective_bo.
import numpy as np
import pandas as pd
from ax.core.data import Data
from ax.core.experiment import Experiment
from ax.core.metric import Metric
from ax.core.objective import MultiObjective, Objective
from ax.core.optimization_config import (
    MultiObjectiveOptimizationConfig,
    ObjectiveThreshold,
)
from ax.core.parameter import ParameterType, RangeParameter
from ax.core.search_space import SearchSpace
from ax.metrics.noisy_function import NoisyFunctionMetric
from ax.modelbridge.modelbridge_utils import observed_hypervolume
from ax.modelbridge.registry import Models
from ax.runners.synthetic import SyntheticRunner
from ax.service.utils.report_utils import exp_to_df
from botorch.acquisition.multi_objective.parego import qLogNParEGO
x1 = RangeParameter(name="x1", lower=0, upper=1, parameter_type=ParameterType.FLOAT)
x2 = RangeParameter(name="x2", lower=0, upper=1, parameter_type=ParameterType.FLOAT)
search_space = SearchSpace(parameters=[x1, x2])
To optimize multiple objectives, we must create a MultiObjective containing the metrics we'll optimize and a MultiObjectiveOptimizationConfig (which contains ObjectiveThresholds), instead of our more typical Objective and OptimizationConfig.
We define NoisyFunctionMetrics to wrap our synthetic Branin-Currin problem's outputs. Add noise (via noise_sd) to see how robust the different optimization algorithms are.
class MetricA(NoisyFunctionMetric):
    def f(self, x: np.ndarray) -> float:
        return float(branin_currin(torch.tensor(x))[0])
class MetricB(NoisyFunctionMetric):
    def f(self, x: np.ndarray) -> float:
        return float(branin_currin(torch.tensor(x))[1])
metric_a = MetricA("a", ["x1", "x2"], noise_sd=0.0, lower_is_better=False)
metric_b = MetricB("b", ["x1", "x2"], noise_sd=0.0, lower_is_better=False)
mo = MultiObjective(
    objectives=[Objective(metric=metric_a), Objective(metric=metric_b)],
)
objective_thresholds = [
    ObjectiveThreshold(metric=metric, bound=val, relative=False)
    for metric, val in zip(mo.metrics, branin_currin.ref_point)
]
optimization_config = MultiObjectiveOptimizationConfig(
    objective=mo,
    objective_thresholds=objective_thresholds,
)
The functions below construct our experiment and initialize it with Sobol points before we fit a Gaussian process model to those initial points.
def build_experiment():
    experiment = Experiment(
        name="pareto_experiment",
        search_space=search_space,
        optimization_config=optimization_config,
        runner=SyntheticRunner(),
    )
    return experiment
def initialize_experiment(experiment):
    sobol = Models.SOBOL(search_space=experiment.search_space, seed=1234)
    for _ in range(N_INIT):
        experiment.new_trial(sobol.gen(1)).run()
    return experiment.fetch_data()
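The loops in the rest of this tutorial run N_INIT quasirandom initialization trials and N_BATCH optimization iterations per method, but those constants are not defined in this excerpt. N_BATCH must be 25 to match the iteration logs shown below; the value of N_INIT is an assumption:
N_INIT = 6  # number of Sobol initialization trials (assumed)
N_BATCH = 25  # number of optimization iterations per method (matches the logs below)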
Sobol
We use quasirandom points as a fast baseline for evaluating the quality of our
multi-objective optimization algorithms.
sobol_experiment = build_experiment()
sobol_data = initialize_experiment(sobol_experiment)
sobol_model = Models.SOBOL(
    experiment=sobol_experiment,
    data=sobol_data,
)
sobol_hv_list = []
for i in range(N_BATCH):
    generator_run = sobol_model.gen(1)
    trial = sobol_experiment.new_trial(generator_run=generator_run)
    trial.run()
    exp_df = exp_to_df(sobol_experiment)
    outcomes = np.array(exp_df[["a", "b"]], dtype=np.double)
    # Fit a GP-based model only so that observed_hypervolume can be computed.
    dummy_model = Models.BOTORCH_MODULAR(
        experiment=sobol_experiment,
        data=sobol_experiment.fetch_data(),
    )
    try:
        hv = observed_hypervolume(modelbridge=dummy_model)
    except Exception:
        hv = 0
        print("Failed to compute hv")
    sobol_hv_list.append(hv)
    print(f"Iteration: {i}, HV: {hv}")
sobol_outcomes = np.array(exp_to_df(sobol_experiment)[["a", "b"]], dtype=np.double)
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
/home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 0, HV: 0.0
Iteration: 1, HV: 0.0
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
/home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 2, HV: 0.0
Iteration: 3, HV: 0.0
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
/home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 4, HV: 0.0
Iteration: 5, HV: 0.0
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
/home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 6, HV: 0.0
Out: Iteration: 7, HV: 0.0
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 8, HV: 0.0
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 9, HV: 0.0
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 10, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 11, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 12, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 13, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 14, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 15, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 16, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 17, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 18, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 19, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 20, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 21, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 22, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 23, HV: 28.586963178726865
Out: /home/runner/work/Ax/Ax/ax/core/data.py:295: FutureWarning:
The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
Out: Iteration: 24, HV: 28.586963178726865
qNEHVI
Noisy Expected Hypervolume Improvement (qNEHVI) is our current recommended algorithm for multi-objective optimization.
ehvi_experiment = build_experiment()
ehvi_data = initialize_experiment(ehvi_experiment)
ehvi_hv_list = []
ehvi_model = None
for i in range(N_BATCH):
    ehvi_model = Models.BOTORCH_MODULAR(
        experiment=ehvi_experiment,
        data=ehvi_data,
    )
    generator_run = ehvi_model.gen(1)
    trial = ehvi_experiment.new_trial(generator_run=generator_run)
    trial.run()
    ehvi_data = Data.from_multiple_data([ehvi_data, trial.fetch_data()])
    exp_df = exp_to_df(ehvi_experiment)
    outcomes = np.array(exp_df[["a", "b"]], dtype=np.double)
    try:
        hv = observed_hypervolume(modelbridge=ehvi_model)
    except Exception:
        hv = 0
        print("Failed to compute hv")
    ehvi_hv_list.append(hv)
    print(f"Iteration: {i}, HV: {hv}")
ehvi_outcomes = np.array(exp_to_df(ehvi_experiment)[["a", "b"]], dtype=np.double)
Out: Iteration: 0, HV: 0.0
Out: Iteration: 1, HV: 0.0
Out: Iteration: 2, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 3, HV: 2.369795709893773
Out: Iteration: 4, HV: 2.369795709893773
Out: Iteration: 5, HV: 32.94757671976549
Out: Iteration: 6, HV: 44.25430688748453
Out: Iteration: 7, HV: 46.11925211936741
Out: Iteration: 8, HV: 46.11925211936741
Out: Iteration: 9, HV: 49.18007420754867
Out: Iteration: 10, HV: 51.212207136398376
Out: Iteration: 11, HV: 52.87999252023109
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal')]
Trying again with a new set of initial conditions.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed on the second try, after generating a new set of initial conditions.
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 12, HV: 53.60696699268871
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .')]
Trying again with a new set of initial conditions.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 13, HV: 54.34329480324962
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-07 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-06 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-05 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-04 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-03 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal')]
Trying again with a new set of initial conditions.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed on the second try, after generating a new set of initial conditions.
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 14, HV: 54.822694516835085
Out: Iteration: 15, HV: 55.290761678067156
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 16, HV: 55.72412004820295
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal')]
Trying again with a new set of initial conditions.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed on the second try, after generating a new set of initial conditions.
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 17, HV: 55.9226663252187
Out: Iteration: 18, HV: 56.275792468420995
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-07 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-06 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-05 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-07 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-06 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-05 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-07 to the diagonal'), OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal'), NumericalWarning('A not p.d., added jitter of 1.0e-08 to the diagonal')]
Trying again with a new set of initial conditions.
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 19, HV: 56.47590017951882
Out: Iteration: 20, HV: 56.65521233151773
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 21, HV: 56.83454729153189
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 22, HV: 56.83454729153189
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 23, HV: 57.01271107560452
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 24, HV: 57.190854885503185
Plot qNEHVI Pareto Frontier based on model posterior
The plotted points are samples from the fitted model's posterior, not observed samples.
frontier = compute_posterior_pareto_frontier(
    experiment=ehvi_experiment,
    data=ehvi_experiment.fetch_data(),
    primary_objective=metric_b,
    secondary_objective=metric_a,
    absolute_metrics=["a", "b"],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
qNParEGO
qNParEGO is a good alternative algorithm for multi-objective optimization when qNEHVI runs too slowly. We use the qLogNParEGO acquisition function with the Modular BoTorch Model.
parego_experiment = build_experiment()
parego_data = initialize_experiment(parego_experiment)
parego_hv_list = []
parego_model = None
for i in range(N_BATCH):
    parego_model = Models.BOTORCH_MODULAR(
        experiment=parego_experiment,
        data=parego_data,
        botorch_acqf_class=qLogNParEGO,
    )
    generator_run = parego_model.gen(1)
    trial = parego_experiment.new_trial(generator_run=generator_run)
    trial.run()
    parego_data = Data.from_multiple_data([parego_data, trial.fetch_data()])
    exp_df = exp_to_df(parego_experiment)
    outcomes = np.array(exp_df[["a", "b"]], dtype=np.double)
    try:
        hv = observed_hypervolume(modelbridge=parego_model)
    except Exception:
        hv = 0
        print("Failed to compute hv")
    parego_hv_list.append(hv)
    print(f"Iteration: {i}, HV: {hv}")
parego_outcomes = np.array(exp_to_df(parego_experiment)[["a", "b"]], dtype=np.double)
Out: Iteration: 0, HV: 0.0
Out: Iteration: 1, HV: 0.0
Out: Iteration: 2, HV: 0.0
Out: Iteration: 3, HV: 0.0
Out: Iteration: 4, HV: 0.0
Out: Iteration: 5, HV: 0.0
Out: Iteration: 6, HV: 0.0
Out: Iteration: 7, HV: 0.0
Out: Iteration: 8, HV: 0.0
Out: Iteration: 9, HV: 0.0
Out: Iteration: 10, HV: 0.0
Out: Iteration: 11, HV: 0.0
Out: Iteration: 12, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 13, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 14, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 15, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 16, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 17, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 18, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 19, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 20, HV: 0.0
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: Iteration: 21, HV: 2.369795709893773
Out: Iteration: 22, HV: 18.248373417361595
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .')]
Trying again with a new set of initial conditions.
Out: Iteration: 23, HV: 20.99074342552727
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:
A not p.d., added jitter of 1.0e-08 to the diagonal
Out: /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:652: RuntimeWarning:
Optimization failed in gen_candidates_scipy with the following warning(s):
[OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL: .')]
Trying again with a new set of initial conditions.
Out: Iteration: 24, HV: 21.15155359273342
Plot qNParEGO Pareto Frontier based on model posterior
The plotted points are samples from the fitted model's posterior, not observed samples.
frontier = compute_posterior_pareto_frontier(
    experiment=parego_experiment,
    data=parego_experiment.fetch_data(),
    primary_objective=metric_b,
    secondary_objective=metric_a,
    absolute_metrics=["a", "b"],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
To examine the optimization process from another perspective, we plot the collected observations under each algorithm, with the color corresponding to the BO iteration at which the point was collected. The plot on the right shows that qNEHVI quickly identifies the Pareto frontier and that most of its evaluations lie very close to it. qNParEGO also has many observations close to the Pareto frontier, but it relies on optimizing random scalarizations, which is a less principled way of optimizing the Pareto front than qNEHVI, which explicitly focuses on improving it. Sobol generates random points, and few of them land close to the Pareto front.
import matplotlib
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.cm import ScalarMappable
%matplotlib inline
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
algos = ["Sobol", "qNParEGO", "qNEHVI"]
outcomes_list = [sobol_outcomes, parego_outcomes, ehvi_outcomes]
cm = matplotlib.colormaps["viridis"]
BATCH_SIZE = 1
n_results = N_BATCH * BATCH_SIZE + N_INIT
batch_number = torch.cat(
    [
        torch.zeros(N_INIT),
        torch.arange(1, N_BATCH + 1).repeat(BATCH_SIZE, 1).t().reshape(-1),
    ]
).numpy()
for i, train_obj in enumerate(outcomes_list):
    x = i
    sc = axes[x].scatter(
        train_obj[:n_results, 0],
        train_obj[:n_results, 1],
        c=batch_number[:n_results],
        alpha=0.8,
    )
    axes[x].set_title(algos[i])
    axes[x].set_xlabel("Objective 1")
    axes[x].set_xlim(-150, 5)
    axes[x].set_ylim(-15, 0)
axes[0].set_ylabel("Objective 2")
norm = plt.Normalize(batch_number.min(), batch_number.max())
sm = ScalarMappable(norm=norm, cmap=cm)
sm.set_array([])
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.93, 0.15, 0.01, 0.7])
cbar = fig.colorbar(sm, cax=cbar_ax)
cbar.ax.set_title("Iteration")
Out: Text(0.5, 1.0, 'Iteration')

Hypervolume statistics
Hypervolume here is the volume of the space dominated by points that themselves dominate the reference point.
The plot below shows a common measure of multi-objective optimization performance when the true Pareto frontier is known: the log difference between the hypervolume of the true Pareto front and the hypervolume of the approximate Pareto front identified by each algorithm. The log hypervolume difference is plotted at each step of the optimization for each of the algorithms.
The plot shows that qNEHVI vastly outperforms qNParEGO, which in turn outperforms the Sobol baseline.
iters = np.arange(1, N_BATCH + 1)
log_hv_difference_sobol = np.log10(branin_currin.max_hv - np.asarray(sobol_hv_list))[
    : N_BATCH + 1
]
log_hv_difference_parego = np.log10(branin_currin.max_hv - np.asarray(parego_hv_list))[
    : N_BATCH + 1
]
log_hv_difference_ehvi = np.log10(branin_currin.max_hv - np.asarray(ehvi_hv_list))[
    : N_BATCH + 1
]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(iters, log_hv_difference_sobol, label="Sobol", linewidth=1.5)
ax.plot(iters, log_hv_difference_parego, label="qNParEGO", linewidth=1.5)
ax.plot(iters, log_hv_difference_ehvi, label="qNEHVI", linewidth=1.5)
ax.set(
xlabel="number of observations (beyond initial points)",
ylabel="Log Hypervolume Difference",
)
ax.legend(loc="lower right")
Out: <matplotlib.legend.Legend at 0x7ff2b78b1df0>
