This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 08-11 11:49:39] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x**2).sum()) + noise2, noise_sd),
    }
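Each metric in the returned dict maps to a `(mean, SEM)` tuple; Ax uses the SEM (here the known `noise_sd`) to model observation noise. As a quick NumPy-only sanity check of the `l2norm` metric at the center of the search space (a hypothetical parameterization, no Ax required):

```python
import numpy as np

# A hypothetical parameterization with every parameter at the midpoint 0.5,
# the same shape of dict that AxClient passes to the evaluation function.
parameterization = {f"x{i+1}": 0.5 for i in range(6)}
x = np.array([parameterization[name] for name in sorted(parameterization)])

# Noiseless l2norm: sqrt(6 * 0.25) = sqrt(1.5) ~= 1.2247
l2norm = float(np.sqrt((x**2).sum()))
print(round(l2norm, 4))  # 1.2247
```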
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 08-11 11:49:40] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 08-11 11:49:40] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[... analogous messages for x2 through x6 elided ...]
[INFO 08-11 11:49:40] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), ..., RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 08-11 11:49:40] ax.modelbridge.dispatch_utils: Using Models.GPEI since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 08-11 11:49:40] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=12
[INFO 08-11 11:49:40] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(
        trial_index=trial_index, raw_data=noisy_hartmann_evaluation_function(parameters)
    )
[INFO 08-11 11:49:40] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.959061, 'x2': 0.421097, 'x3': 0.693045, 'x4': 0.394199, 'x5': 0.937822, 'x6': 0.651645}.
[INFO 08-11 11:49:40] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (0.040888, 0.1), 'l2norm': (1.849496, 0.1)}.
[... logs for trials 1 through 18 elided ...]
[INFO 08-11 11:51:49] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.376834, 'x2': 0.864701, 'x3': 0.449668, 'x4': 0.6294, 'x5': 0.38359, 'x6': 0.074089}.
[INFO 08-11 11:51:49] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-2.967483, 0.1), 'l2norm': (1.262153, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters. The other parameters are fixed at the middle of their respective ranges, which in this example is 0.5 for all of them.
# this could alternately be done with `ax.plot.contour.plot_contour`
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name="hartmann6"))
[INFO 08-11 11:51:49] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name="hartmann6"))
This plot illustrates the tradeoffs achievable between two different metrics. It takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful for getting a sense of the Pareto frontier, i.e., the best objective value achievable for different bounds on the constraint.
render(plot_objective_vs_constraints(model, "hartmann6", rel=False))
CV plots are useful to check how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, then the model is a good predictor of the real data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a purpose similar to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
Tile plots are useful for viewing the model's fitted effect for each arm. The interactive version below allows toggling between metrics.
render(interact_fitted(model, rel=False))
Total runtime of script: 3 minutes, 9.36 seconds.