Loop API Example on Hartmann6
The Loop API is the most lightweight way to do optimization in Ax. The user makes one
call to optimize, which performs all of the optimization under the hood and returns
the optimized parameters.
For more control over the optimization procedure, consider the Service or
Developer API.
import sys
in_colab = 'google.colab' in sys.modules
if in_colab:
%pip install ax-platform
import numpy as np
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import init_notebook_plotting, render
import plotly.io as pio
init_notebook_plotting()
if in_colab:
pio.renderers.default = "colab"
Out: [INFO 02-03 18:30:12] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
Out: [INFO 02-03 18:30:12] ax.utils.notebook.plotting: Please see
(https://ax.dev/tutorials/visualizations.html#Fix-for-plots-that-are-not-rendering)
if visualizations are not rendering.
First, we define an evaluation function that computes all the metrics needed for this
experiment. The function must accept a set of parameter values (and may optionally
accept a weight), and should return a dictionary mapping metric names to tuples of mean
and standard error for those metrics.
def hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    # A standard error of 0.0 tells Ax that the observations are noiseless.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}
If there is only one metric in the experiment – the objective – then the evaluation
function can return a single tuple of mean and SEM, in which case Ax will assume that
the evaluation corresponds to the objective. It can also return only the mean as a
float, in which case Ax will treat the SEM as unknown and use a model that can infer
it. For more details on evaluation functions, refer to the "Trial Evaluation" section
in the docs.
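For instance, single-metric variants of the function above could take either of these
forms (a minimal sketch; the function names are ours, not Ax's):
def hartmann_tuple(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    return (hartmann6(x), 0.0)  # (mean, SEM) of the objective

def hartmann_float(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    return hartmann6(x)  # bare mean; Ax treats the SEM as unknown and infers it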
The setup for the loop is fully compatible with JSON. The optimization algorithm is
selected automatically based on the properties of the problem's search space.
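To illustrate the JSON compatibility, the same parameter configuration could be parsed
from a JSON document (a sketch of ours showing only the first two parameters, not part
of the original tutorial):
import json

parameters_json = """
[
    {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
    {"name": "x2", "type": "range", "bounds": [0.0, 1.0]}
]
"""
parameters = json.loads(parameters_json)  # the resulting list can be passed to optimize(parameters=...)
Below, we pass the full six-parameter configuration inline: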
best_parameters, values, experiment, model = optimize(
parameters=[
{
"name": "x1",
"type": "range",
"bounds": [0.0, 1.0],
"value_type": "float",
"log_scale": False,
},
{
"name": "x2",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x3",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x4",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x5",
"type": "range",
"bounds": [0.0, 1.0],
},
{
"name": "x6",
"type": "range",
"bounds": [0.0, 1.0],
},
],
experiment_name="test",
objective_name="hartmann6",
evaluation_function=hartmann_evaluation_function,
minimize=True,
parameter_constraints=["x1 + x2 <= 20"],
outcome_constraints=["l2norm <= 1.25"],
total_trials=30,
)
Out: [INFO 02-03 18:30:13] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:13] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:13] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:13] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:13] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:13] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[ParameterConstraint(1.0*x1 + 1.0*x2 <= 20.0)]).
Out: [INFO 02-03 18:30:13] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
Out: [INFO 02-03 18:30:13] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=6 num_trials=None use_batch_trials=False
Out: [INFO 02-03 18:30:13] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=12
Out: [INFO 02-03 18:30:13] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=12
Out: [INFO 02-03 18:30:13] ax.modelbridge.dispatch_utils: verbose, disable_progbar, and jit_compile are not yet supported when using choose_generation_strategy with ModularBoTorchModel, dropping these arguments.
Out: [INFO 02-03 18:30:13] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 12 trials, BoTorch for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Started full optimization with 30 steps.
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 1...
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 2...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 3...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 4...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 5...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 6...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 7...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 8...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 9...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 10...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 11...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 12...
Out: [INFO 02-03 18:30:13] ax.service.managed_loop: Running optimization trial 13...
Out: [INFO 02-03 18:30:25] ax.service.managed_loop: Running optimization trial 14...
Out: [INFO 02-03 18:30:34] ax.service.managed_loop: Running optimization trial 15...
Out: [INFO 02-03 18:30:45] ax.service.managed_loop: Running optimization trial 16...
Out: [INFO 02-03 18:31:01] ax.service.managed_loop: Running optimization trial 17...
Out: [INFO 02-03 18:31:13] ax.service.managed_loop: Running optimization trial 18...
Out: [INFO 02-03 18:31:25] ax.service.managed_loop: Running optimization trial 19...
Out: [INFO 02-03 18:31:41] ax.service.managed_loop: Running optimization trial 20...
Out: [INFO 02-03 18:31:59] ax.service.managed_loop: Running optimization trial 21...
Out: [INFO 02-03 18:32:10] ax.service.managed_loop: Running optimization trial 22...
Out: [INFO 02-03 18:32:23] ax.service.managed_loop: Running optimization trial 23...
Out: [INFO 02-03 18:32:32] ax.service.managed_loop: Running optimization trial 24...
Out: [INFO 02-03 18:32:41] ax.service.managed_loop: Running optimization trial 25...
Out: [INFO 02-03 18:32:48] ax.service.managed_loop: Running optimization trial 26...
Out: [INFO 02-03 18:32:57] ax.service.managed_loop: Running optimization trial 27...
Out: [INFO 02-03 18:33:04] ax.service.managed_loop: Running optimization trial 28...
Out: [INFO 02-03 18:33:10] ax.service.managed_loop: Running optimization trial 29...
Out: [INFO 02-03 18:33:17] ax.service.managed_loop: Running optimization trial 30...
And we can introspect optimization results:
best_parameters
Out: {'x1': 0.23567978731096606,
'x2': 0.0,
'x3': 0.3977677811660017,
'x4': 0.23318947211468297,
'x5': 0.2891941142712672,
'x6': 0.5978973574601063}
means, covariances = values
means
Out: {'hartmann6': -2.863688596444597, 'l2norm': 0.8422013805006452}
For comparison, the global minimum of Hartmann6 is:
hartmann6.fmin
Out: -3.32237
Here we arbitrarily select "x1" and "x2" as the two parameters to plot for both metrics,
"hartmann6" and "l2norm".
render(plot_contour(model=model, param_x="x1", param_y="x2", metric_name="hartmann6"))
render(plot_contour(model=model, param_x="x1", param_y="x2", metric_name="l2norm"))
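The same call works for any other pair of parameters; for example (a usage sketch of
ours, not in the original tutorial):
render(plot_contour(model=model, param_x="x3", param_y="x4", metric_name="hartmann6"))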
We also plot the optimization trace, which shows the best hartmann6 objective value
found up to each iteration of the optimization:
# Objective means observed on each trial, in order of execution.
best_objectives = np.array(
    [[trial.objective_mean for trial in experiment.trials.values()]]
)
best_objective_plot = optimization_trace_single_method(
    y=np.minimum.accumulate(best_objectives, axis=1),  # running best-so-far (minimization)
    optimum=hartmann6.fmin,  # known global minimum, drawn as a reference line
    title="Model performance vs. # of iterations",
    ylabel="Hartmann6",
)
render(best_objective_plot)