The Loop API is the most lightweight way to run optimization in Ax: the user makes a single call to optimize, which performs all of the optimization under the hood and returns the optimized parameters.
For more customizability of the optimization procedure, consider the Service or Developer API.
import numpy as np
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 10-01 15:33:46] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
First, we define an evaluation function that computes all the metrics needed for this experiment. The function must accept a set of parameter values (and can optionally accept a weight), and it should return a dictionary mapping metric names to tuples of mean and standard error (SEM) for those metrics.
def hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}
If there is only one metric in the experiment – the objective – then the evaluation function can return a single tuple of mean and SEM, in which case Ax will assume that the evaluation corresponds to the objective. It can also return only the mean as a float, in which case Ax will treat the SEM as unknown and use a model that can infer it. For more details on evaluation functions, refer to the "Trial Evaluation" section of the docs. For instance, single-objective variants of the evaluation function above might look like the sketch below (the function names here are hypothetical, for illustration only):
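def hartmann6_evaluation_function(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    # Single (mean, SEM) tuple: Ax assumes this is the objective.
    return (hartmann6(x), 0.0)

def hartmann6_evaluation_function_mean_only(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    # Bare float: Ax treats the SEM as unknown and uses a model that can infer it.
    return float(hartmann6(x))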
The setup for the loop is fully compatible with JSON. The optimization algorithm is selected automatically based on the properties of the problem search space.
best_parameters, values, experiment, model = optimize(
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    experiment_name="test",
    objective_name="hartmann6",
    evaluation_function=hartmann_evaluation_function,
    minimize=True,  # Optional, defaults to False.
    parameter_constraints=["x1 + x2 <= 20"],  # Optional.
    outcome_constraints=["l2norm <= 1.25"],  # Optional.
    total_trials=30,  # Optional.
)
[INFO 10-01 15:33:46] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
(The same value-type inference message is logged for parameters x3 through x6.)
[INFO 10-01 15:33:46] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 6 trials, GPEI for subsequent trials]). Iterations after 6 will take longer to generate due to model-fitting.
[INFO 10-01 15:33:46] ax.service.managed_loop: Started full optimization with 30 steps.
[INFO 10-01 15:33:46] ax.service.managed_loop: Running optimization trial 1...
[INFO 10-01 15:33:46] ax.service.managed_loop: Running optimization trial 2...
...
[INFO 10-01 15:35:06] ax.service.managed_loop: Running optimization trial 29...
[INFO 10-01 15:35:10] ax.service.managed_loop: Running optimization trial 30...
And we can introspect optimization results:
best_parameters
{'x1': 0.13885218555350543, 'x2': 0.1895580979279669, 'x3': 0.5334107177939718, 'x4': 6.101103775164642e-17, 'x5': 0.36076409245282803, 'x6': 0.6357817796984754}
means, covariances = values
means
{'l2norm': 0.934937777293862, 'hartmann6': -1.8081028481672667}
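Since we passed the outcome constraint "l2norm <= 1.25" to optimize, a quick sanity check against the means dict above confirms that the recommended point satisfies it:
assert means["l2norm"] <= 1.25  # Holds: 0.93 <= 1.25.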
For comparison, the minimum of Hartmann6 is:
hartmann6.fmin
-3.32237
We can plot contours of the model-predicted metric values. Here we arbitrarily select "x1" and "x2" as the two parameters to plot, for both metrics, "hartmann6" and "l2norm".
render(plot_contour(model=model, param_x='x1', param_y='x2', metric_name='hartmann6'))
render(plot_contour(model=model, param_x='x1', param_y='x2', metric_name='l2norm'))
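Other parameter pairs can be plotted the same way by changing param_x and param_y. Ax also provides interact_contour in ax.plot.contour, which renders an interactive plot where the parameter pair can be switched from a dropdown; a minimal sketch, assuming the same fitted model as above:
from ax.plot.contour import interact_contour

# Interactive contour plot; the parameter pair is selectable in the rendered widget.
render(interact_contour(model=model, metric_name="hartmann6"))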
We also plot the optimization trace, which shows the best hartmann6 objective value found as of each iteration of the optimization:
# `optimization_trace_single_method` expects a 2-d array of means, because it is designed to
# average means from multiple optimization runs, so we wrap our best-objectives array in another array.
best_objectives = np.array([[trial.objective_mean for trial in experiment.trials.values()]])
best_objective_plot = optimization_trace_single_method(
    y=np.minimum.accumulate(best_objectives, axis=1),
    optimum=hartmann6.fmin,
    title="Model performance vs. # of iterations",
    ylabel="Hartmann6",
)
render(best_objective_plot)
Total runtime of script: 1 minute, 33.52 seconds.