Service API Example on Hartmann6
The Ax Service API is designed to let the user control the scheduling of trials and computation of data while providing an easy-to-use interface to Ax.
The user iteratively:
- Queries Ax for candidates
- Schedules / deploys them however they choose
- Computes data and logs to Ax
- Repeat
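In code, one pass through this loop has roughly the following shape (a minimal sketch; run_job is a hypothetical stand-in for however you deploy and evaluate a trial, and the full runnable version appears later in this tutorial):
for _ in range(num_trials):
    # Query Ax for a candidate parameterization.
    parameters, trial_index = ax_client.get_next_trial()
    # Schedule / deploy however you choose (run_job is hypothetical).
    raw_data = run_job(parameters)
    # Compute data and log it back to Ax.
    ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)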
import sys

in_colab = 'google.colab' in sys.modules
if in_colab:
    %pip install ax-platform

from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import init_notebook_plotting, render
import plotly.io as pio

init_notebook_plotting()
if in_colab:
    pio.renderers.default = "colab"
Out: [INFO 02-03 18:27:30] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
Out: [INFO 02-03 18:27:30] ax.utils.notebook.plotting: Please see
(https://ax.dev/tutorials/visualizations.html#Fix-for-plots-that-are-not-rendering)
if visualizations are not rendering.
Create a client object to interface with Ax APIs. By default this runs locally without storage.
ax_client = AxClient()
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the verbose_logging argument to False. Note that float values in the logs are rounded to 6 decimal points.
An experiment consists of a search space (parameters and parameter constraints) and an optimization configuration (objectives and outcome constraints). Note that:
- Only the parameters and objectives arguments are required.
- Dictionaries in parameters have the following required keys: "name" - parameter name, "type" - parameter type ("range", "choice" or "fixed"), "bounds" for range parameters, "values" for choice parameters, and "value" for fixed parameters.
- Dictionaries in parameters can optionally include "value_type" ("int", "float", "bool" or "str"), a "log_scale" flag for range parameters, and an "is_ordered" flag for choice parameters.
- parameter_constraints should be a list of strings of the form "p1 >= p2" or "p1 + p2 <= some_bound".
- outcome_constraints should be a list of strings of the form "constrained_metric <= some_bound".
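For illustration only (the experiment below uses range parameters exclusively), a choice parameter and a fixed parameter could be specified along these lines; the names and values here are hypothetical:
# Hypothetical choice parameter: unordered categorical with string values.
{"name": "optimizer", "type": "choice", "values": ["adam", "sgd"], "value_type": "str", "is_ordered": False}
# Hypothetical fixed parameter: held constant at the given value.
{"name": "batch_size", "type": "fixed", "value": 32, "value_type": "int"}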
ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",
            "log_scale": False,
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    parameter_constraints=["x1 + x2 <= 2.0"],
    outcome_constraints=["l2norm <= 1.25"],
)
Out: [INFO 02-03 18:27:31] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:27:31] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:27:31] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:27:31] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:27:31] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:27:31] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[ParameterConstraint(1.0*x1 + 1.0*x2 <= 2.0)]).
Out: [INFO 02-03 18:27:31] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
Out: [INFO 02-03 18:27:31] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=6 num_trials=None use_batch_trials=False
Out: [INFO 02-03 18:27:31] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=12
Out: [INFO 02-03 18:27:31] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=12
Out: [INFO 02-03 18:27:31] ax.modelbridge.dispatch_utils: verbose, disable_progbar, and jit_compile are not yet supported when using choose_generation_strategy with ModularBoTorchModel, dropping these arguments.
Out: [INFO 02-03 18:27:31] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 12 trials, BoTorch for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
When using Ax as a service, evaluation of the parameterizations suggested by Ax is done either locally or, more commonly, using an external scheduler. Below is a dummy evaluation function that outputs data for two metrics, "hartmann6" and "l2norm". Note that all returned metrics must correspond either to the objectives set on experiment creation or to the metric names mentioned in outcome_constraints.
import numpy as np


def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}
The result of the evaluation should generally be a mapping of the format {metric_name -> (mean, SEM)}. If there is only one metric in the experiment (the objective), then the evaluation function can return a single tuple of mean and SEM, in which case Ax will assume that the evaluation corresponds to the objective. It can also return only the mean as a float, in which case Ax will treat the SEM as unknown and use a model that can infer it.
For more details on the evaluation function, refer to the "Trial Evaluation" section in the Ax docs at ax.dev.
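To make these formats concrete, here are sketches of the three equivalent ways a single-objective evaluation could report a mean of -1.5 with an SEM of 0.1 (the numbers are purely illustrative):
return {"hartmann6": (-1.5, 0.1)}  # mapping: {metric_name -> (mean, SEM)}
return (-1.5, 0.1)  # single (mean, SEM) tuple, assumed to correspond to the objective
return -1.5  # bare mean; SEM is treated as unknown and inferred by the model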
With the experiment set up, we can start the optimization loop.
At each step, the user queries the client for a new trial and then submits the evaluation of that trial back to the client.
Note that Ax auto-selects an appropriate optimization algorithm based on the search space. For more advanced use cases that require a specific optimization algorithm, pass a generation_strategy argument into the AxClient constructor. Note that when Bayesian optimization is used, generating new trials may take a few minutes.
for i in range(25):
    parameterization, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameterization))
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:31] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.798191, 'x2': 0.218617, 'x3': 0.497973, 'x4': 0.80431, 'x5': 0.061824, 'x6': 0.177864} using model Sobol.
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.00733, 0.0), 'l2norm': (1.270926, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:31] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.400661, 'x2': 0.928884, 'x3': 0.906534, 'x4': 0.249076, 'x5': 0.579234, 'x6': 0.741028} using model Sobol.
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.062265, 0.0), 'l2norm': (1.670878, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:31] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.146923, 'x2': 0.341906, 'x3': 0.032908, 'x4': 0.56579, 'x5': 0.95748, 'x6': 0.81498} using model Sobol.
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-0.002507, 0.0), 'l2norm': (1.428513, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:31] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.559986, 'x2': 0.553638, 'x3': 0.621173, 'x4': 0.386532, 'x5': 0.431739, 'x6': 0.252335} using model Sobol.
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.53403, 0.0), 'l2norm': (1.185509, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:31] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.724246, 'x2': 0.443489, 'x3': 0.865864, 'x4': 0.363015, 'x5': 0.288655, 'x6': 0.04418} using model Sobol.
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.106443, 0.0), 'l2norm': (1.299226, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:31] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.076902, 'x2': 0.670721, 'x3': 0.292004, 'x4': 0.68278, 'x5': 0.82224, 'x6': 0.607253} using model Sobol.
Out: [INFO 02-03 18:27:31] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.034167, 0.0), 'l2norm': (1.432505, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:32] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.314982, 'x2': 0.070191, 'x3': 0.666972, 'x4': 0.007299, 'x5': 0.699883, 'x6': 0.962487} using model Sobol.
Out: [INFO 02-03 18:27:32] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-0.23022, 0.0), 'l2norm': (1.401879, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:32] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.978109, 'x2': 0.79596, 'x3': 0.241565, 'x4': 0.953041, 'x5': 0.158944, 'x6': 0.399873} using model Sobol.
Out: [INFO 02-03 18:27:32] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-0.001034, 0.0), 'l2norm': (1.655915, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:32] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.906926, 'x2': 0.30459, 'x3': 0.742377, 'x4': 0.729013, 'x5': 0.561651, 'x6': 0.370201} using model Sobol.
Out: [INFO 02-03 18:27:32] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.013996, 0.0), 'l2norm': (1.565367, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:32] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.261408, 'x2': 0.592858, 'x3': 0.161711, 'x4': 0.283276, 'x5': 0.079397, 'x6': 0.808578} using model Sobol.
Out: [INFO 02-03 18:27:32] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.506462, 0.0), 'l2norm': (1.089179, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:32] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.007428, 'x2': 0.177886, 'x3': 0.785582, 'x4': 0.904546, 'x5': 0.457132, 'x6': 0.640374} using model Sobol.
Out: [INFO 02-03 18:27:32] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.083285, 0.0), 'l2norm': (1.44433, 0.0)}.
Out: /home/runner/work/Ax/Ax/ax/modelbridge/cross_validation.py:439: UserWarning:
Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 02-03 18:27:32] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.668475, 'x2': 0.96762, 'x3': 0.368933, 'x4': 0.084784, 'x5': 0.932093, 'x6': 0.202454} using model Sobol.
Out: [INFO 02-03 18:27:32] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (-0.054633, 0.0), 'l2norm': (1.560843, 0.0)}.
Out: [INFO 02-03 18:27:41] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.549601, 'x2': 0.531677, 'x3': 0.583668, 'x4': 0.429045, 'x5': 0.221704, 'x6': 0.149994} using model BoTorch.
Out: [INFO 02-03 18:27:41] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-0.677166, 0.0), 'l2norm': (1.086803, 0.0)}.
Out: [INFO 02-03 18:27:55] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.519101, 'x2': 0.347987, 'x3': 0.426261, 'x4': 0.523601, 'x5': 0.090173, 'x6': 0.067279} using model BoTorch.
Out: [INFO 02-03 18:27:55] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-0.280969, 0.0), 'l2norm': (0.926863, 0.0)}.
Out: [INFO 02-03 18:28:02] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.4611, 'x2': 0.551473, 'x3': 0.454544, 'x4': 0.395081, 'x5': 0.125136, 'x6': 0.266722} using model BoTorch.
Out: [INFO 02-03 18:28:02] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-0.661812, 0.0), 'l2norm': (0.982972, 0.0)}.
Out: [INFO 02-03 18:28:13] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.292441, 'x2': 0.54943, 'x3': 0.556684, 'x4': 0.546264, 'x5': 0.168335, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:28:13] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-1.049151, 0.0), 'l2norm': (1.011945, 0.0)}.
Out: [INFO 02-03 18:28:19] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.16969, 'x2': 0.539358, 'x3': 0.544712, 'x4': 0.586643, 'x5': 0.0, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:28:19] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-0.48139, 0.0), 'l2norm': (0.980083, 0.0)}.
Out: [INFO 02-03 18:28:25] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.280885, 'x2': 0.584454, 'x3': 0.568843, 'x4': 0.569263, 'x5': 0.252362, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:28:25] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-1.20064, 0.0), 'l2norm': (1.063866, 0.0)}.
Out: [INFO 02-03 18:28:34] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.27517, 'x2': 0.633495, 'x3': 0.443829, 'x4': 0.793032, 'x5': 0.258965, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:28:34] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-0.882782, 0.0), 'l2norm': (1.170462, 0.0)}.
Out: [INFO 02-03 18:28:42] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.226156, 'x2': 0.608379, 'x3': 0.361922, 'x4': 0.500869, 'x5': 0.286671, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:28:42] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-0.96128, 0.0), 'l2norm': (0.940908, 0.0)}.
Out: [INFO 02-03 18:28:50] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 0.250656, 'x2': 0.637733, 'x3': 0.684167, 'x4': 0.485953, 'x5': 0.269442, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:28:50] ax.service.ax_client: Completed trial 20 with data: {'hartmann6': (-1.219301, 0.0), 'l2norm': (1.116408, 0.0)}.
Out: [INFO 02-03 18:29:03] ax.service.ax_client: Generated new trial 21 with parameters {'x1': 0.259181, 'x2': 0.601611, 'x3': 0.624611, 'x4': 0.518347, 'x5': 0.273354, 'x6': 0.468947} using model BoTorch.
Out: [INFO 02-03 18:29:03] ax.service.ax_client: Completed trial 21 with data: {'hartmann6': (-0.730203, 0.0), 'l2norm': (1.175826, 0.0)}.
Out: [INFO 02-03 18:29:14] ax.service.ax_client: Generated new trial 22 with parameters {'x1': 0.276542, 'x2': 0.595764, 'x3': 0.608971, 'x4': 0.449353, 'x5': 0.310219, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:29:14] ax.service.ax_client: Completed trial 22 with data: {'hartmann6': (-1.080186, 0.0), 'l2norm': (1.049004, 0.0)}.
Out: [INFO 02-03 18:29:37] ax.service.ax_client: Generated new trial 23 with parameters {'x1': 0.279588, 'x2': 0.683906, 'x3': 0.745572, 'x4': 0.583778, 'x5': 0.256304, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:29:37] ax.service.ax_client: Completed trial 23 with data: {'hartmann6': (-1.756293, 0.0), 'l2norm': (1.228113, 0.0)}.
Out: [INFO 02-03 18:29:45] ax.service.ax_client: Generated new trial 24 with parameters {'x1': 0.36705, 'x2': 0.751872, 'x3': 0.648214, 'x4': 0.611734, 'x5': 0.083371, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:29:45] ax.service.ax_client: Completed trial 24 with data: {'hartmann6': (-2.628991, 0.0), 'l2norm': (1.225311, 0.0)}.
By default, Ax restricts the number of trials that can run in parallel during some optimization stages, in order to improve optimization performance and reduce the number of trials the optimization requires. To check the maximum parallelism for each optimization stage:
ax_client.get_max_parallelism()
The output of this function is a list of tuples of the form (number of trials, max parallelism), so the example above means "the max parallelism is 12 for the first 12 trials and 3 for all subsequent trials." This is because the first 12 trials are produced quasi-randomly and can all be evaluated at once, while subsequent trials are produced via Bayesian optimization, which converges on the optimal point in fewer trials when parallelism is limited. A MaxParallelismReachedException indicates that the parallelism limit has been reached; refer to the 'Service API Exceptions Meaning and Handling' section at the end of the tutorial for handling.
ax_client.generation_strategy.trials_as_df
Out: /tmp/ipykernel_2090/1060711394.py:1: DeprecationWarning:
GenerationStrategy.trials_as_df is deprecated and will be removed in a future release. Please use Experiment.to_df() instead.
| trial_index | arm_name | trial_status | generation_method | generation_node | l2norm | hartmann6 | x1 | x2 | x3 | x4 | x5 | x6 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 0 | 0_0 | COMPLETED | Sobol | GenerationStep_0 | 1.27093 | -0.00733 | 0.798191 | 0.218617 | 0.497973 | 0.80431 | 0.061824 | 0.177864 |
1 | 1 | 1_0 | COMPLETED | Sobol | GenerationStep_0 | 1.67088 | -0.062265 | 0.400661 | 0.928884 | 0.906534 | 0.249076 | 0.579234 | 0.741028 |
2 | 2 | 2_0 | COMPLETED | Sobol | GenerationStep_0 | 1.42851 | -0.002507 | 0.146923 | 0.341906 | 0.032908 | 0.56579 | 0.95748 | 0.81498 |
3 | 3 | 3_0 | COMPLETED | Sobol | GenerationStep_0 | 1.18551 | -0.53403 | 0.559986 | 0.553638 | 0.621173 | 0.386532 | 0.431739 | 0.252335 |
4 | 4 | 4_0 | COMPLETED | Sobol | GenerationStep_0 | 1.29923 | -0.106443 | 0.724246 | 0.443489 | 0.865864 | 0.363015 | 0.288655 | 0.0441802 |
5 | 5 | 5_0 | COMPLETED | Sobol | GenerationStep_0 | 1.4325 | -0.034167 | 0.076902 | 0.670721 | 0.292004 | 0.68278 | 0.82224 | 0.607253 |
6 | 6 | 6_0 | COMPLETED | Sobol | GenerationStep_0 | 1.40188 | -0.23022 | 0.314982 | 0.070191 | 0.666972 | 0.007299 | 0.699883 | 0.962487 |
7 | 7 | 7_0 | COMPLETED | Sobol | GenerationStep_0 | 1.65592 | -0.001034 | 0.978109 | 0.79596 | 0.241565 | 0.953041 | 0.158944 | 0.399873 |
8 | 8 | 8_0 | COMPLETED | Sobol | GenerationStep_0 | 1.56537 | -0.013996 | 0.906926 | 0.30459 | 0.742377 | 0.729013 | 0.561651 | 0.370201 |
9 | 9 | 9_0 | COMPLETED | Sobol | GenerationStep_0 | 1.08918 | -0.506462 | 0.261408 | 0.592858 | 0.161711 | 0.283276 | 0.079397 | 0.808578 |
10 | 10 | 10_0 | COMPLETED | Sobol | GenerationStep_0 | 1.44433 | -0.083285 | 0.007428 | 0.177886 | 0.785582 | 0.904546 | 0.457132 | 0.640374 |
11 | 11 | 11_0 | COMPLETED | Sobol | GenerationStep_0 | 1.56084 | -0.054633 | 0.668475 | 0.96762 | 0.368933 | 0.084784 | 0.932093 | 0.202454 |
12 | 12 | 12_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.0868 | -0.677166 | 0.549601 | 0.531677 | 0.583668 | 0.429045 | 0.221704 | 0.149994 |
13 | 13 | 13_0 | COMPLETED | BoTorch | GenerationStep_1 | 0.926863 | -0.280969 | 0.519101 | 0.347987 | 0.426261 | 0.523601 | 0.090173 | 0.067279 |
14 | 14 | 14_0 | COMPLETED | BoTorch | GenerationStep_1 | 0.982972 | -0.661812 | 0.4611 | 0.551473 | 0.454544 | 0.395081 | 0.125136 | 0.266722 |
15 | 15 | 15_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.01195 | -1.04915 | 0.292441 | 0.54943 | 0.556684 | 0.546264 | 0.168335 | 0 |
16 | 16 | 16_0 | COMPLETED | BoTorch | GenerationStep_1 | 0.980083 | -0.48139 | 0.16969 | 0.539358 | 0.544712 | 0.586643 | 0 | 0 |
17 | 17 | 17_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.06387 | -1.20064 | 0.280885 | 0.584454 | 0.568843 | 0.569263 | 0.252362 | 6.78244e-16 |
18 | 18 | 18_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.17046 | -0.882782 | 0.27517 | 0.633495 | 0.443829 | 0.793032 | 0.258965 | 0 |
19 | 19 | 19_0 | COMPLETED | BoTorch | GenerationStep_1 | 0.940908 | -0.96128 | 0.226156 | 0.608379 | 0.361922 | 0.500869 | 0.286671 | 0 |
20 | 20 | 20_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.11641 | -1.2193 | 0.250656 | 0.637733 | 0.684167 | 0.485953 | 0.269442 | 0 |
21 | 21 | 21_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.17583 | -0.730203 | 0.259181 | 0.601611 | 0.624611 | 0.518347 | 0.273354 | 0.468947 |
22 | 22 | 22_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.049 | -1.08019 | 0.276542 | 0.595764 | 0.608971 | 0.449353 | 0.310219 | 1.4201e-12 |
23 | 23 | 23_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.22811 | -1.75629 | 0.279588 | 0.683906 | 0.745572 | 0.583778 | 0.256304 | 9.02783e-15 |
24 | 24 | 24_0 | COMPLETED | BoTorch | GenerationStep_1 | 1.22531 | -2.62899 | 0.36705 | 0.751872 | 0.648214 | 0.611734 | 0.083371 | 0 |
Once the optimization is complete, we can access the best parameters found, as well as the corresponding metric values.
best_parameters, values = ax_client.get_best_parameters()
best_parameters
Out: {'x1': 0.3670503268425431,
'x2': 0.751871541975133,
'x3': 0.6482137463762045,
'x4': 0.611734352744967,
'x5': 0.08337142786995012,
'x6': 0.0}
means, covariances = values
means
Out: {'hartmann6': -2.628985000986, 'l2norm': 1.2253104072969554}
For comparison, the minimum of the Hartmann6 function:
hartmann6.fmin
Out: -3.32237
Here we arbitrarily select "x1" and "x2" as the two parameters to plot for both metrics,
"hartmann6" and "l2norm".
render(ax_client.get_contour_plot())
Out: [INFO 02-03 18:29:46] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
We can also retrieve a contour plot for the other metric, "l2norm"; say we are interested in seeing the response surface for parameters "x3" and "x4" for this one.
render(ax_client.get_contour_plot(param_x="x3", param_y="x4", metric_name="l2norm"))
Out: [INFO 02-03 18:29:48] ax.service.ax_client: Retrieving contour plot with parameter 'x3' on X-axis and 'x4' on Y-axis, for metric 'l2norm'. Remaining parameters are affixed to the middle of their range.
Here we plot the optimization trace, showing the progression of finding the point with
the optimal objective:
render(ax_client.get_optimization_trace(objective_optimum=hartmann6.fmin))
We can serialize the state of the optimization to JSON and save it to a .json file, or save it to the SQL backend. For the former:
ax_client.save_to_json_file()
Out: [INFO 02-03 18:29:49] ax.service.ax_client: Saved JSON-serialized state of optimization to ax_client_snapshot.json.
restored_ax_client = AxClient.load_from_json_file()
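Both methods also accept a filepath argument if you want something other than the default ax_client_snapshot.json seen in the log above (a sketch, assuming your Ax version exposes this argument; the filename is hypothetical):
# Hypothetical custom snapshot path.
ax_client.save_to_json_file(filepath="hartmann_snapshot.json")
restored_ax_client = AxClient.load_from_json_file(filepath="hartmann_snapshot.json")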
Out: [INFO 02-03 18:29:50] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the verbose_logging argument to False. Note that float values in the logs are rounded to 6 decimal points.
To store the state of the optimization to an SQL backend, first follow the setup instructions on the Ax website. Having set up the SQL backend, pass DBSettings to AxClient on instantiation (note that the SQLAlchemy dependency will have to be installed; for installation, refer to optional dependencies on the Ax website):
from ax.storage.sqa_store.structs import DBSettings
db_settings = DBSettings(url="sqlite:///foo.db")
new_ax = AxClient(db_settings=db_settings)
Out: [INFO 02-03 18:29:51] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the verbose_logging argument to False. Note that float values in the logs are rounded to 6 decimal points.
When valid DBSettings are passed into AxClient, a unique experiment name is a required argument (name) to ax_client.create_experiment. The state of the optimization is auto-saved any time it changes (i.e. a new trial is added or completed, etc.).
To reload an optimization state later, instantiate AxClient with the same DBSettings and use ax_client.load_experiment_from_database(experiment_name="my_experiment").
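Put together, reloading might look like this minimal sketch (assuming an experiment named "my_experiment" was previously created with the same DBSettings):
# Reconnect to the same storage backend and restore the experiment state.
db_settings = DBSettings(url="sqlite:///foo.db")
restored_client = AxClient(db_settings=db_settings)
restored_client.load_experiment_from_database(experiment_name="my_experiment")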
Special Cases
Evaluation failure: should any optimization iterations fail during evaluation, log_trial_failure will ensure that the same trial is not proposed again.
_, trial_index = ax_client.get_next_trial()
ax_client.log_trial_failure(trial_index=trial_index)
Out: [INFO 02-03 18:30:02] ax.service.ax_client: Generated new trial 25 with parameters {'x1': 0.438931, 'x2': 0.822518, 'x3': 0.488551, 'x4': 0.621579, 'x5': 0.0, 'x6': 0.0} using model BoTorch.
Out: [INFO 02-03 18:30:02] ax.service.ax_client: Registered failure of trial 25.
Adding custom trials: should there be a need to evaluate a specific parameterization, attach_trial will add it to the experiment.
ax_client.attach_trial(
parameters={"x1": 0.9, "x2": 0.9, "x3": 0.9, "x4": 0.9, "x5": 0.9, "x6": 0.9}
)
Out: [INFO 02-03 18:30:03] ax.core.experiment: Attached custom parameterizations [{'x1': 0.9, 'x2': 0.9, 'x3': 0.9, 'x4': 0.9, 'x5': 0.9, 'x6': 0.9}] as trial 26.
Out: ({'x1': 0.9, 'x2': 0.9, 'x3': 0.9, 'x4': 0.9, 'x5': 0.9, 'x6': 0.9}, 26)
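Since attach_trial returns the parameterization along with its trial index (as in the output above), the custom trial can then be evaluated and completed like any other:
# Attach and immediately evaluate the custom parameterization.
parameters, trial_index = ax_client.attach_trial(
    parameters={"x1": 0.9, "x2": 0.9, "x3": 0.9, "x4": 0.9, "x5": 0.9, "x6": 0.9}
)
ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))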
Need to run many trials in parallel: for optimal results and optimization efficiency, we strongly recommend sequential optimization (generating a few trials, then waiting for them to be completed with evaluation data). However, if your use case needs to dispatch many trials in parallel before they are updated with data and you are running into the "All trials for current model have been generated, but not enough data has been observed to fit next model" error, instantiate AxClient as AxClient(enforce_sequential_optimization=False).
Nonlinear parameter constraints and/or constraints on non-Range parameters: Ax parameter constraints currently support only linear inequalities (discussion). Users may be able to simulate this functionality, however, by substituting the following evaluate function for the one defined in section 3 above.
def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    l2norm = np.sqrt((x**2).sum())
    if l2norm > 1.25:
        return {"l2norm": (l2norm, 0.0)}
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (l2norm, 0.0)}
For this to work, the constraint quantity (l2norm in this case) should have a corresponding outcome constraint on the experiment. See the outcome_constraints argument to ax_client.create_experiment in section 2 above for how to specify outcome constraints.
This setup accomplishes the following:
- Allows computation of an arbitrarily complex constraint value.
- Skips objective computation when the constraint is violated, which is useful when the objective is relatively expensive to compute.
- Constraint metric values are returned even when there is a violation, which helps the model understand and avoid constraint violations.
Service API Exceptions Meaning and Handling
DataRequiredError: the Ax generation strategy needs to be updated with more data to proceed to the next optimization model. When the optimization moves from the initialization stage to the Bayesian optimization stage, the underlying BayesOpt model needs sufficient data to train. For optimal results and optimization efficiency (finding the optimal point in the fewest trials), we recommend sequential optimization (generating a few trials, then waiting for them to be completed with evaluation data). Therefore, the correct way to handle this exception is to wait until more trial evaluations complete and log their data via ax_client.complete_trial(...).
However, if there is a strong need to generate more trials before more data is available, instantiate AxClient as AxClient(enforce_sequential_optimization=False). With this setting, as many trials will be generated from the initialization stage as requested, and the optimization will move to the BayesOpt stage whenever enough trials are completed.
MaxParallelismReachedException: the generation strategy restricts the number of trials that can be run simultaneously (to encourage sequential optimization), and the parallelism limit has been reached. The correct way to handle this exception is the same as for DataRequiredError: wait until more trial evaluations complete and log their data via ax_client.complete_trial(...).
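As a minimal sketch of this pattern (assuming MaxParallelismReachedException is importable from ax.exceptions.generation_strategy; the exact module path may vary across Ax versions):
from ax.exceptions.generation_strategy import MaxParallelismReachedException

pending = []
try:
    # Request candidates until the current stage's parallelism limit is hit.
    while True:
        parameters, trial_index = ax_client.get_next_trial()
        pending.append((parameters, trial_index))
except MaxParallelismReachedException:
    pass
# Evaluate the pending batch, log the data, then request more trials.
for parameters, trial_index in pending:
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))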
In some cases higher parallelism is important, so passing the enforce_sequential_optimization=False kwarg to AxClient suppresses the limiting of parallelism. It is also possible to override the default parallelism setting for all stages of the optimization by passing choose_generation_strategy_kwargs to ax_client.create_experiment:
ax_client = AxClient()
ax_client.create_experiment(
    parameters=[
        {"name": "x", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "y", "type": "range", "bounds": [0.0, 15.0]},
    ],
    choose_generation_strategy_kwargs={"max_parallelism_override": 10},
)
Out: [INFO 02-03 18:30:05] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the verbose_logging argument to False. Note that float values in the logs are rounded to 6 decimal points.
Out: [INFO 02-03 18:30:05] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:05] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter y. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
Out: [INFO 02-03 18:30:05] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x', parameter_type=FLOAT, range=[-5.0, 10.0]), RangeParameter(name='y', parameter_type=FLOAT, range=[0.0, 15.0])], parameter_constraints=[]).
Out: [INFO 02-03 18:30:05] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
Out: [INFO 02-03 18:30:05] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=2 num_trials=None use_batch_trials=False
Out: [INFO 02-03 18:30:05] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=5
Out: [INFO 02-03 18:30:05] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=5
Out: [INFO 02-03 18:30:05] ax.modelbridge.dispatch_utils: verbose, disable_progbar, and jit_compile are not yet supported when using choose_generation_strategy with ModularBoTorchModel, dropping these arguments.
Out: [INFO 02-03 18:30:05] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 5 trials, BoTorch for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting.
ax_client.get_max_parallelism()