This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 12-16 16:45:15] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
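For the Service API used below, both objects can be pulled directly off the client once trials have been evaluated; a minimal sketch (referring to the `AxClient` instance created in the following cells):
# Sketch: retrieving the experiment and the currently fitted model from an AxClient.
# `ax_client` refers to the client constructed below, after trials have been run.
experiment = ax_client.experiment
model = ax_client.generation_strategy.model  # model of the current generation step (Sobol early on, then the GP)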
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd),
    }
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objective_name="hartmann6",
    minimize=True,
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 12-16 16:45:15] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 12-16 16:45:15] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[analogous messages for x2 through x6 omitted]
[INFO 12-16 16:45:15] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 12-16 16:45:15] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 12-16 16:45:15] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=noisy_hartmann_evaluation_function(parameters))
[INFO 12-16 16:45:15] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.732697, 'x2': 0.115858, 'x3': 0.90607, 'x4': 0.628305, 'x5': 0.310364, 'x6': 0.685915}.
[INFO 12-16 16:45:15] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.372856, 0.1), 'l2norm': (1.57252, 0.1)}.
[analogous generation/completion messages for trials 1 through 18 omitted; from trial 12 onward each generation also emits a WARNING from ax.utils.common.kwargs about the type of the `transform_configs` argument passed to TorchModelBridge]
[INFO 12-16 16:49:11] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.512534, 'x2': 0.407387, 'x3': 0.772404, 'x4': 0.042557, 'x5': 0.373151, 'x6': 0.067874}.
[INFO 12-16 16:49:11] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-0.089589, 0.1), 'l2norm': (1.012237, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters.
The other parameters are fixed at the middle of their respective ranges, which in this example is 0.5 for all of them.
# this could alternately be done with `ax.plot.contour.plot_contour`
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
[INFO 12-16 16:49:11] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
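As the comment above notes, the same contour can be produced with the plotting module directly; a sketch assuming the fitted model is pulled from the client's generation strategy (this retrieval is repeated in the next cell):
from ax.plot.contour import plot_contour

# Same x1/x2 response surface for hartmann6, built from the fitted model directly.
model = ax_client.generation_strategy.model
render(plot_contour(model=model, param_x="x1", param_y="x2", metric_name="hartmann6"))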
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
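The same interactive contours can be rendered for the constraint metric simply by changing `metric_name`:
# Interactive contours for the l2norm constraint metric instead of the objective.
render(interact_contour(model=model, metric_name="l2norm"))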
This plot illustrates the tradeoffs achievable between two different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics on the y-axis.
This is useful for getting a sense of the Pareto frontier, i.e., the best objective value achievable under different bounds on the constraint.
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
Cross-validation (CV) plots are useful for checking how well the model predictions are calibrated against the actual measurements. If all points are close to the dashed diagonal line, the model is a good predictor of the observed data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
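Beyond the visual check, numeric cross-validation diagnostics can be computed from the same results; a sketch using `compute_diagnostics` from the same module (the exact set of diagnostic keys depends on the Ax version):
from ax.modelbridge.cross_validation import compute_diagnostics

# Dict mapping diagnostic name -> {metric name -> value}.
diagnostics = compute_diagnostics(cv_results)
for name, per_metric in diagnostics.items():
    print(name, per_metric)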
Slice plots show the metric outcome as a function of a single parameter while the others are held fixed. They serve a similar function to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
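Any parameter/metric pair can be sliced in the same way, for example the constraint metric along x1:
# Slice of the l2norm constraint metric along x1.
render(plot_slice(model, "x1", "l2norm"))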
Tile plots are useful for viewing the model-predicted effect of each arm on each metric.
render(interact_fitted(model, rel=False))
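The `tile_fitted` variant (imported above) lays out the fitted per-arm values as one small plot per metric instead of a single interactive figure; a sketch:
# Static tiled layout of fitted values, one tile per metric.
render(tile_fitted(model, rel=False))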
Total runtime of script: 4 minutes, 12.06 seconds.