This tutorial uses synthetic functions to illustrate Bayesian optimization using a multi-task Gaussian Process in Ax. A typical use case is optimizing an expensive-to-evaluate (online) system with supporting (offline) simulations of that system.
Bayesian optimization with a multi-task kernel (Multi-task Bayesian optimization) is described by Swersky et al. (2013). Letham and Bakshy (2019) describe using multi-task Bayesian optimization to tune a ranking system with a mix of online and offline (simulator) experiments.
This tutorial produces the results of Online Appendix 2 from that paper.
The synthetic problem used here is to minimize the Hartmann 6 function, a classic optimization test problem in six dimensions. The objective is treated as unknown and is modeled with a GP, and observations of it are noisy.
Throughout the optimization we can make noisy observations directly of the objective (online observations), and we can make noisy observations of a biased version of the objective (offline observations). Bias is simulated by passing the true function values through a piecewise linear function. Offline observations are much less time-consuming than online observations, so we wish to use them to improve our ability to optimize the online objective.
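To make the bias concrete, here is the piecewise linear transform in isolation; the breakpoint m = -0.35 and the slopes 1.5 and 6.0 are taken from the OfflineHartmann6Metric defined later in this tutorial:
import numpy as np

def offline_bias(raw: float, m: float = -0.35) -> float:
    # Values below the breakpoint are mildly rescaled (slope 1.5), while values
    # above it are stretched (slope 6.0). The transform is monotone increasing,
    # so offline observations are biased but highly correlated with online ones.
    slope = 1.5 if raw < m else 6.0
    return slope * (raw - m) + m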
import os
import time
from copy import deepcopy
from typing import Optional
import numpy as np
import torch
from ax.core.data import Data
from ax.core.experiment import Experiment
from ax.core.generator_run import GeneratorRun
from ax.core.multi_type_experiment import MultiTypeExperiment
from ax.core.objective import Objective
from ax.core.observation import ObservationFeatures, observations_from_data
from ax.core.optimization_config import OptimizationConfig
from ax.core.parameter import ParameterType, RangeParameter
from ax.core.search_space import SearchSpace
from ax.metrics.hartmann6 import Hartmann6Metric
from ax.modelbridge.factory import get_sobol
from ax.modelbridge.registry import Models, MT_MTGP_trans, ST_MTGP_trans
from ax.modelbridge.torch import TorchModelBridge
from ax.modelbridge.transforms.convert_metric_names import tconfig_from_mt_experiment
from ax.plot.diagnostic import interact_batch_comparison
from ax.runners.synthetic import SyntheticRunner
from ax.utils.common.typeutils import checked_cast
from ax.utils.notebook.plotting import init_notebook_plotting, render
init_notebook_plotting()
SMOKE_TEST = os.environ.get("SMOKE_TEST")
For this example, the online system evaluates the Hartmann6 function; its Metric object is imported directly above. We create an analogous offline version of this metric, which is identical except that a transform (a piecewise linear function) is applied to its values, and construct a Metric object for it.
# Create metric with artificial offline bias, for the objective,
# by passing the true values through a piecewise linear function.
class OfflineHartmann6Metric(Hartmann6Metric):
    def f(self, x: np.ndarray) -> float:
        raw_res = super().f(x)
        m = -0.35
        if raw_res < m:
            return (1.5 * (raw_res - m)) + m
        else:
            return (6.0 * (raw_res - m)) + m
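As a quick sanity check, we can compare the two metrics at a single point with the noise turned off, so only the bias is visible (the point below is arbitrary, chosen for illustration):
# Evaluate both metrics noiselessly at one (arbitrary) point.
param_names = [f"x{i}" for i in range(6)]
online = Hartmann6Metric("objective", param_names=param_names, noise_sd=0.0)
offline = OfflineHartmann6Metric("offline_objective", param_names=param_names, noise_sd=0.0)
x = np.array([0.2, 0.4, 0.6, 0.1, 0.9, 0.5])
print(online.f(x), offline.f(x))  # the offline value is a biased version of the online value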
A MultiTypeExperiment is used for managing online and offline trials together. It is constructed in several steps:
1. Create the search space.
2. Specify the optimization config.
3. Initialize the experiment.
4. Establish the offline trial type, and how those trials are deployed.
5. Add offline metrics that provide biased estimates of the online metrics.
Finally, because this is a synthetic benchmark problem where the true function values are known, we also register a tracking metric with the true (noiseless) function values, for use in the plots below.
def get_experiment(include_true_metric=True):
    noise_sd = 0.1  # Observations will have this much Normal noise added to them
    # 1. Create simple search space for [0,1]^d, d=6
    param_names = [f"x{i}" for i in range(6)]
    parameters = [
        RangeParameter(
            name=param_names[i],
            parameter_type=ParameterType.FLOAT,
            lower=0.0,
            upper=1.0,
        )
        for i in range(6)
    ]
    search_space = SearchSpace(parameters=parameters)
    # 2. Specify optimization config
    online_objective = Hartmann6Metric(
        "objective", param_names=param_names, noise_sd=noise_sd
    )
    opt_config = OptimizationConfig(
        objective=Objective(online_objective, minimize=True)
    )
    # 3. Init experiment
    exp = MultiTypeExperiment(
        name="mt_exp",
        search_space=search_space,
        default_trial_type="online",
        default_runner=SyntheticRunner(),
        optimization_config=opt_config,
    )
    # 4. Establish offline trial_type, and how those trials are deployed
    exp.add_trial_type("offline", SyntheticRunner())
    # 5. Add offline metrics that provide biased estimates of the online metrics
    offline_objective = OfflineHartmann6Metric(
        "offline_objective", param_names=param_names, noise_sd=noise_sd
    )
    # Associate each offline metric with its corresponding online metric
    exp.add_tracking_metric(
        metric=offline_objective, trial_type="offline", canonical_name="objective"
    )
    # 6. Register a noiseless version of the online objective, used only for
    # tracking the true value of each observation in the plots below
    if include_true_metric:
        exp.add_tracking_metric(
            metric=Hartmann6Metric(
                "objective_noiseless", param_names=param_names, noise_sd=0.0
            ),
            trial_type="online",
        )
    return exp
This figure compares the online measurements to the offline measurements on a random set of points, for the objective metric. You can see that the offline measurements are biased but highly correlated with the online measurements. This reproduces Fig. S3 from the paper.
# Generate 50 points from a Sobol sequence
exp = get_experiment(include_true_metric=False)
s = get_sobol(exp.search_space, scramble=False)
gr = s.gen(50)
# Deploy them both online and offline
exp.new_batch_trial(trial_type="online", generator_run=gr).run()
exp.new_batch_trial(trial_type="offline", generator_run=gr).run()
# Fetch data
data = exp.fetch_data()
observations = observations_from_data(exp, data)
# Plot the arms in batch 0 (online) vs. batch 1 (offline)
render(interact_batch_comparison(observations, exp, 1, 0))
Here we construct a Bayesian optimization loop that interleaves online and offline batches. The loop defined here is described in Algorithm 1 of the paper. We compare multi-task Bayesian optimization to regular Bayesian optimization using only online observations.
Here we run each loop once. With the settings below a single run takes several minutes; the paper's results aggregate 50 repeated experiments, each with independent observation noise, so reproducing them takes considerably longer.
# Settings for the optimization benchmark.
if SMOKE_TEST:
    n_batches = 1
    n_init_online = 2
    n_init_offline = 2
    n_opt_online = 2
    n_opt_offline = 2
else:
    n_batches = 3  # Number of optimized BO batches
    n_init_online = 5  # Size of the quasirandom initialization run online
    n_init_offline = 20  # Size of the quasirandom initialization run offline
    n_opt_online = 5  # Batch size for BO selected points to be run online
    n_opt_offline = 20  # Batch size for BO selected points to be run offline
For the online-only case, we run n_init_online Sobol points followed by n_batches batches of n_opt_online points selected by the GP. This is a normal Bayesian optimization loop.
# This function runs a Bayesian optimization loop, making online observations only.
def run_online_only_bo():
    t1 = time.time()
    ### Do BO with online only
    ## Quasi-random initialization
    exp_online = get_experiment()
    m = get_sobol(exp_online.search_space, scramble=False)
    gr = m.gen(n=n_init_online)
    exp_online.new_batch_trial(trial_type="online", generator_run=gr).run()
    ## Do BO
    for b in range(n_batches):
        print("Online-only batch", b, time.time() - t1)
        # Fit the GP
        m = Models.BOTORCH_MODULAR(
            experiment=exp_online,
            data=exp_online.fetch_data(),
            search_space=exp_online.search_space,
        )
        # Generate the new batch
        gr = m.gen(
            n=n_opt_online,
            search_space=exp_online.search_space,
            optimization_config=exp_online.optimization_config,
        )
        exp_online.new_batch_trial(trial_type="online", generator_run=gr).run()
    # Return the experiment so results can be inspected after the run
    return exp_online
Here we incorporate offline observations to accelerate the optimization, while using the same total number of online observations as in the loop above. The strategy is that outlined in Algorithm 1 of the paper:
1. Run n_init_online Sobol points online, and n_init_offline Sobol points offline.
2. Fit the MTGP to all of the online and offline observations.
3. Generate n_opt_offline candidates using NEI.
4. Launch the n_opt_offline candidates offline and observe their offline metrics.
5. Update the MTGP with the new offline observations.
6. Select the best n_opt_online of the NEI candidates, after incorporating their offline observations, and run them online.
7. Repeat steps 2-6 for each optimization batch (see run_mtbo below).
We first define a helper that constructs the MTGP model:
def get_MTGP(
    experiment: Experiment,
    data: Data,
    search_space: Optional[SearchSpace] = None,
    trial_index: Optional[int] = None,
    device: torch.device = torch.device("cpu"),
    dtype: torch.dtype = torch.double,
) -> TorchModelBridge:
    """Instantiates a Multi-task Gaussian Process (MTGP) model that generates
    points with EI.

    If the input experiment is a MultiTypeExperiment then a
    Multi-type Multi-task GP model will be instantiated.
    Otherwise, the model will be a Single-type Multi-task GP.
    """
    if isinstance(experiment, MultiTypeExperiment):
        trial_index_to_type = {
            t.index: t.trial_type for t in experiment.trials.values()
        }
        transforms = MT_MTGP_trans
        transform_configs = {
            "TrialAsTask": {"trial_level_map": {"trial_type": trial_index_to_type}},
            "ConvertMetricNames": tconfig_from_mt_experiment(experiment),
        }
    else:
        # Set transforms for a Single-type MTGP model.
        transforms = ST_MTGP_trans
        transform_configs = None

    # Choose the status quo features for the experiment from the selected trial.
    # If trial_index is None, we will look for a status quo from the last
    # experiment trial to use as a status quo for the experiment.
    if trial_index is None:
        trial_index = len(experiment.trials) - 1
    elif trial_index >= len(experiment.trials):
        raise ValueError("trial_index is bigger than the number of experiment trials")

    status_quo = experiment.trials[trial_index].status_quo
    if status_quo is None:
        status_quo_features = None
    else:
        status_quo_features = ObservationFeatures(
            parameters=status_quo.parameters,
            trial_index=trial_index,  # pyre-ignore[6]
        )

    return checked_cast(
        TorchModelBridge,
        Models.ST_MTGP(
            experiment=experiment,
            search_space=search_space or experiment.search_space,
            data=data,
            transforms=transforms,
            transform_configs=transform_configs,
            torch_dtype=dtype,
            torch_device=device,
            status_quo_features=status_quo_features,
        ),
    )
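The key design point here is the transform configuration: for a MultiTypeExperiment, the TrialAsTask transform maps each trial's trial_type ("online" or "offline") to a task, so the MTGP can share information between the two observation types through its learned inter-task covariance, while ConvertMetricNames maps each offline metric back to the canonical online metric name it estimates.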
# Online batches are constructed by selecting the maximum utility points from the
# offline batch, after updating the model with the offline results. This function
# selects the max utility points according to the MTGP predictions.
def max_utility_from_GP(n, m, experiment, search_space, gr):
    obsf = []
    for arm in gr.arms:
        params = deepcopy(arm.parameters)
        params["trial_type"] = "online"
        obsf.append(ObservationFeatures(parameters=params))
    # Make predictions
    f, cov = m.predict(obsf)
    # Compute expected utility
    u = -np.array(f["objective"])
    best_arm_indx = np.flip(np.argsort(u))[:n]
    gr_new = GeneratorRun(
        arms=[gr.arms[i] for i in best_arm_indx],
        weights=[1.0] * n,
    )
    return gr_new
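Note that this selection ranks candidates purely by posterior mean (with the sign flipped, since the objective is minimized). That is sufficient here because the candidates were already generated with NEI in step 3; this step only decides which of those already-promising offline points are promoted to the smaller online batch.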
# This function runs a multi-task Bayesian optimization loop, as outlined in
# Algorithm 1 and above.
def run_mtbo():
    t1 = time.time()
    online_trials = []
    ## 1. Quasi-random initialization, online and offline
    exp_multitask = get_experiment()
    # Online points
    m = get_sobol(exp_multitask.search_space, scramble=False)
    gr = m.gen(n=n_init_online)
    tr = exp_multitask.new_batch_trial(trial_type="online", generator_run=gr)
    tr.run()
    online_trials.append(tr.index)
    # Offline points
    m = get_sobol(exp_multitask.search_space, scramble=False)
    gr = m.gen(n=n_init_offline)
    exp_multitask.new_batch_trial(trial_type="offline", generator_run=gr).run()
    ## Do BO
    for b in range(n_batches):
        print("Multi-task batch", b, time.time() - t1)
        # 2. Fit the MTGP
        m = get_MTGP(
            experiment=exp_multitask,
            data=exp_multitask.fetch_data(),
            search_space=exp_multitask.search_space,
        )
        # 3. Find the best points for the online task
        gr = m.gen(
            n=n_opt_offline,
            optimization_config=exp_multitask.optimization_config,
            fixed_features=ObservationFeatures(
                parameters={}, trial_index=online_trials[-1]
            ),
        )
        # 4. But launch them offline
        exp_multitask.new_batch_trial(trial_type="offline", generator_run=gr).run()
        # 5. Update the model
        m = get_MTGP(
            experiment=exp_multitask,
            data=exp_multitask.fetch_data(),
            search_space=exp_multitask.search_space,
        )
        # 6. Select max-utility points from the offline batch to generate an online batch
        gr = max_utility_from_GP(
            n=n_opt_online,
            m=m,
            experiment=exp_multitask,
            search_space=exp_multitask.search_space,
            gr=gr,
        )
        tr = exp_multitask.new_batch_trial(trial_type="online", generator_run=gr)
        tr.run()
        online_trials.append(tr.index)
    # Return the experiment so results can be inspected after the run
    return exp_multitask
Run both Bayesian optimization loops and aggregate results.
runners = {
    "GP, online only": run_online_only_bo,
    "MTGP": run_mtbo,
}
results = {}
for k, r in runners.items():
    results[k] = r()
Online-only batch 0 0.0034761428833007812
Online-only batch 1 7.32835841178894
Online-only batch 2 15.6177077293396
Multi-task batch 0 0.008085250854492188
Multi-task batch 1 55.20014667510986
Multi-task batch 2 251.67711067199707
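With both loops complete, we can compare the two methods on the true objective. A minimal sketch, assuming the loop functions return their experiments as above, and using the noiseless tracking metric registered in get_experiment (only online trials carry it):
# Report the best true (noiseless) objective value found by each method.
for name, exp_k in results.items():
    df = exp_k.fetch_data().df
    true_vals = df[df["metric_name"] == "objective_noiseless"]["mean"]
    print(f"{name}: best noiseless objective value = {true_vals.min():.3f}")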
Benjamin Letham and Eytan Bakshy. Bayesian optimization for policy search via online-offline experimentation. Journal of Machine Learning Research, 20, 2019.
Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems 26, NIPS, pages 2004–2012, 2013.
Total runtime of script: 7 minutes, 30.25 seconds.