This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import interact_fitted, plot_objective_vs_constraints, tile_fitted
from ax.plot.slice import plot_slice
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import init_notebook_plotting, render
init_notebook_plotting()
[ERROR 11-12 05:15:48] ax.storage.sqa_store.encoder: ATTENTION: The Ax team is considering deprecating SQLAlchemy storage. If you are currently using SQLAlchemy storage, please reach out to us via GitHub Issues here: https://github.com/facebook/Ax/issues/2975
[INFO 11-12 05:15:48] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
[INFO 11-12 05:15:48] ax.utils.notebook.plotting: Please see (https://ax.dev/tutorials/visualizations.html#Fix-for-plots-that-are-not-rendering) if visualizations are not rendering.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation of it is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
x = np.array([parameterization.get(p_name) for p_name in param_names])
noise1, noise2 = np.random.normal(0, noise_sd, 2)
return {
"hartmann6": (hartmann6(x) + noise1, noise_sd),
"l2norm": (np.sqrt((x**2).sum()) + noise2, noise_sd),
}
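For reference, the Hartmann6 test function used above can be sketched in plain numpy. This is a minimal stand-in for ax.utils.measurement.synthetic_functions.hartmann6, written from the standard published constants of the function; it is not Ax's implementation.

```python
import numpy as np

# Constants of the standard 6-dimensional Hartmann function.
ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
A = np.array([
    [10.0, 3.0, 17.0, 3.5, 1.7, 8.0],
    [0.05, 10.0, 17.0, 0.1, 8.0, 14.0],
    [3.0, 3.5, 1.7, 10.0, 17.0, 8.0],
    [17.0, 8.0, 0.05, 10.0, 0.1, 14.0],
])
P = 1e-4 * np.array([
    [1312, 1696, 5569, 124, 8283, 5886],
    [2329, 4135, 8307, 3736, 1004, 9991],
    [2348, 1451, 3522, 2883, 3047, 6650],
    [4047, 8828, 8732, 5743, 1091, 381],
])

def hartmann6_np(x):
    """Hartmann6 on [0, 1]^6; global minimum is approximately -3.32237."""
    x = np.asarray(x)
    inner = np.sum(A * (x - P) ** 2, axis=1)  # one term per of the 4 components
    return -np.sum(ALPHA * np.exp(-inner))

# The known global minimizer of the function.
x_star = np.array([0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573])
print(hartmann6_np(x_star))  # close to -3.32237
```

The noisy evaluation function above simply adds Gaussian noise with standard deviation 0.1 to this value and reports that standard deviation alongside the mean.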
ax_client = AxClient()
ax_client.create_experiment(
name="test_visualizations",
parameters=[
{
"name": p_name,
"type": "range",
"bounds": [0.0, 1.0],
}
for p_name in param_names
],
objectives={"hartmann6": ObjectiveProperties(minimize=True)},
outcome_constraints=["l2norm <= 1.25"],
)
[INFO 11-12 05:15:49] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-12 05:15:49] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 11-12 05:15:49] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
[INFO 11-12 05:15:49] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=6 num_trials=None use_batch_trials=False
[INFO 11-12 05:15:49] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=12
[INFO 11-12 05:15:49] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=12
[INFO 11-12 05:15:49] ax.modelbridge.dispatch_utils: `verbose`, `disable_progbar`, and `jit_compile` are not yet supported when using `choose_generation_strategy` with ModularBoTorchModel, dropping these arguments.
[INFO 11-12 05:15:49] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 12 trials, BoTorch for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
parameters, trial_index = ax_client.get_next_trial()
# Local evaluation here can be replaced with deployment to external system.
ax_client.complete_trial(
trial_index=trial_index, raw_data=noisy_hartmann_evaluation_function(parameters)
)
/tmp/tmp.Lx6ya87xsF/Ax-main/ax/modelbridge/cross_validation.py:464: UserWarning: Encountered exception in computing model fit quality: RandomModelBridge does not support prediction.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.200243, 'x2': 0.260506, 'x3': 0.357524, 'x4': 0.103258, 'x5': 0.051367, 'x6': 0.619721} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.770303, 0.1), 'l2norm': (0.697039, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.892723, 'x2': 0.911812, 'x3': 0.749771, 'x4': 0.783036, 'x5': 0.54273, 'x6': 0.409426} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (0.166265, 0.1), 'l2norm': (1.815038, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.550309, 'x2': 0.063357, 'x3': 0.183234, 'x4': 0.313989, 'x5': 0.959163, 'x6': 0.111741} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (0.161742, 0.1), 'l2norm': (1.255013, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.356132, 'x2': 0.73395, 'x3': 0.791476, 'x4': 0.509701, 'x5': 0.446741, 'x6': 0.917679} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.414206, 0.1), 'l2norm': (1.643427, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.467498, 'x2': 0.125299, 'x3': 0.563633, 'x4': 0.738269, 'x5': 0.259351, 'x6': 0.74698} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.396737, 0.1), 'l2norm': (1.194768, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.62658, 'x2': 0.546137, 'x3': 0.454041, 'x4': 0.433182, 'x5': 0.772262, 'x6': 0.286316} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.297686, 0.1), 'l2norm': (1.336189, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.781493, 'x2': 0.447681, 'x3': 0.895089, 'x4': 0.964624, 'x5': 0.72963, 'x6': 0.238389} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (0.003227, 0.1), 'l2norm': (1.799069, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.124107, 'x2': 0.848743, 'x3': 0.005171, 'x4': 0.144221, 'x5': 0.238758, 'x6': 0.794694} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-0.20509, 0.1), 'l2norm': (1.227002, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.001621, 'x2': 0.03352, 'x3': 0.954683, 'x4': 0.461877, 'x5': 0.581404, 'x6': 0.970959} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.224297, 0.1), 'l2norm': (1.539656, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.842291, 'x2': 0.637829, 'x3': 0.06275, 'x4': 0.641231, 'x5': 0.074232, 'x6': 0.058207} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.161342, 0.1), 'l2norm': (1.559089, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.749149, 'x2': 0.351996, 'x3': 0.504043, 'x4': 0.235156, 'x5': 0.423755, 'x6': 0.479212} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.757355, 0.1), 'l2norm': (1.01549, 0.1)}.
[INFO 11-12 05:15:49] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.406286, 'x2': 0.944342, 'x3': 0.396465, 'x4': 0.930314, 'x5': 0.92061, 'x6': 0.550227} using model Sobol.
[INFO 11-12 05:15:49] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (0.034019, 0.1), 'l2norm': (1.903237, 0.1)}.
[INFO 11-12 05:15:53] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.762705, 'x2': 0.38798, 'x3': 0.552349, 'x4': 0.050685, 'x5': 0.257122, 'x6': 0.645096} using model BoTorch.
[INFO 11-12 05:15:53] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-0.679016, 0.1), 'l2norm': (1.108299, 0.1)}.
[INFO 11-12 05:15:58] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.0, 'x2': 0.380768, 'x3': 0.581211, 'x4': 0.114904, 'x5': 0.0, 'x6': 0.513387} using model BoTorch.
[INFO 11-12 05:15:58] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-0.517415, 0.1), 'l2norm': (0.901229, 0.1)}.
[INFO 11-12 05:16:05] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.587183, 'x2': 0.297486, 'x3': 0.158054, 'x4': 0.163677, 'x5': 0.534745, 'x6': 0.62313} using model BoTorch.
[INFO 11-12 05:16:05] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-0.599215, 0.1), 'l2norm': (1.180618, 0.1)}.
[INFO 11-12 05:16:08] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.527272, 'x2': 0.339144, 'x3': 0.281972, 'x4': 0.239847, 'x5': 0.232961, 'x6': 0.965389} using model BoTorch.
[INFO 11-12 05:16:08] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-0.630695, 0.1), 'l2norm': (1.362404, 0.1)}.
[INFO 11-12 05:16:11] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.910389, 'x2': 0.211778, 'x3': 0.0, 'x4': 0.103108, 'x5': 0.128957, 'x6': 0.45539} using model BoTorch.
[INFO 11-12 05:16:11] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-0.316723, 0.1), 'l2norm': (1.073981, 0.1)}.
[INFO 11-12 05:16:15] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.649362, 'x2': 0.36804, 'x3': 0.425017, 'x4': 0.300711, 'x5': 0.020129, 'x6': 0.71948} using model BoTorch.
[INFO 11-12 05:16:15] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-0.310613, 0.1), 'l2norm': (1.101382, 0.1)}.
[INFO 11-12 05:16:17] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.340539, 'x2': 0.338784, 'x3': 0.665463, 'x4': 0.034621, 'x5': 0.331674, 'x6': 0.468354} using model BoTorch.
[INFO 11-12 05:16:17] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-1.175641, 0.1), 'l2norm': (1.115424, 0.1)}.
[INFO 11-12 05:16:21] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.267274, 'x2': 0.323513, 'x3': 0.648786, 'x4': 0.0, 'x5': 0.338758, 'x6': 0.225955} using model BoTorch.
[INFO 11-12 05:16:21] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-0.336025, 0.1), 'l2norm': (0.774353, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters. The other parameters are fixed at the middle of their respective ranges, which in this example is 0.5 for all of them.
# This could alternatively be done with `ax.plot.contour.plot_contour`.
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name="hartmann6"))
[INFO 11-12 05:16:21] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
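Conceptually, a contour plot evaluates the model's prediction on a 2-D grid over the two chosen parameters while the remaining parameters are held at fixed values. A minimal numpy sketch of that grid construction, using a simple hypothetical stand-in function instead of the fitted Ax model:

```python
import numpy as np

def surrogate_mean(x):
    # Hypothetical stand-in for the fitted model's predicted mean over 6 params.
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * sum(x[2:])

# Grid over the two plotted parameters; the other four are fixed at mid-range 0.5.
x1 = np.linspace(0.0, 1.0, 50)
x2 = np.linspace(0.0, 1.0, 50)
fixed = [0.5, 0.5, 0.5, 0.5]
Z = np.array([[surrogate_mean([a, b] + fixed) for a in x1] for b in x2])
print(Z.shape)  # (50, 50) grid of predicted values, ready for contouring
```

Ax's contour plots do the same kind of grid evaluation, but against the model's posterior mean and with an accompanying uncertainty panel.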
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name="hartmann6"))
This plot illustrates the tradeoffs achievable between two different metrics. It takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis. This is useful for getting a sense of the Pareto frontier (i.e., the best objective value achievable for different bounds on the constraint).
render(plot_objective_vs_constraints(model, "hartmann6", rel=False))
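The Pareto frontier mentioned above is the set of non-dominated points: outcomes for which no other point is at least as good in both metrics. A small numpy sketch of that filtering, on hypothetical (objective, constraint) pairs where smaller is better in both coordinates:

```python
import numpy as np

# Hypothetical (hartmann6, l2norm) outcomes; both coordinates minimized.
points = np.array([
    [-0.77, 0.70], [-1.18, 1.12], [-0.63, 1.36], [-0.34, 0.77], [-0.52, 0.90],
])

def pareto_front(pts):
    """Keep points not dominated: no other point is <= in both coordinates."""
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

front = pareto_front(points)  # the best objective achievable per constraint level
```

Here only the first two points survive: each of the others is beaten on both the objective and the constraint by some other point.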
CV plots are useful to check how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, then the model is a good predictor of the real data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
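The idea behind a cross-validation plot can be sketched without Ax: refit a model with each observation held out in turn, predict the held-out point, and compare predicted against observed values. Here a plain least-squares line via np.polyfit stands in for the Gaussian-process surrogate; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 20)
y = 2.0 * X + 1.0 + rng.normal(0.0, 0.05, size=X.shape)  # noisy linear data

# Leave-one-out: refit on all-but-one point, predict the held-out point.
preds = []
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    coef = np.polyfit(X[mask], y[mask], deg=1)  # simple linear surrogate
    preds.append(np.polyval(coef, X[i]))
preds = np.array(preds)

# On a CV plot, these (observed, predicted) pairs should hug the y = x line.
```

A well-calibrated model produces points tightly clustered around the diagonal; systematic departures indicate the model is a poor predictor of held-out data.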
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a similar function to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
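A slice is the 1-D analogue of the contour grid: sweep one parameter over its range with the rest pinned at fixed values. A short numpy sketch, again with a hypothetical stand-in for the fitted model:

```python
import numpy as np

def f(x):
    # Hypothetical stand-in for the model's prediction over six parameters.
    return np.sum((np.asarray(x) - 0.3) ** 2)

# Vary x2 on a grid; hold the other five parameters at mid-range 0.5.
grid = np.linspace(0.0, 1.0, 100)
slice_vals = [f([0.5, v, 0.5, 0.5, 0.5, 0.5]) for v in grid]
best = grid[int(np.argmin(slice_vals))]  # minimizer along this slice
```

plot_slice additionally shades the model's predictive uncertainty around the sliced mean, which this sketch omits.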
Tile plots are useful for viewing the effect of each arm.
render(interact_fitted(model, rel=False))
In certain environments, such as Google Colab or remote setups, plots may not render. If this is the case, we recommend the workaround below, which overrides the default renderer in Plotly. The cell below changes the renderer to "jupyterlab" for this tutorial, but you can find the right renderer for your use case by inspecting pio.renderers.
import plotly.io as pio
pio.renderers.default = "jupyterlab"
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name="hartmann6"))
[INFO 11-12 05:16:48] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
Total runtime of script: 1 minute, 7.19 seconds.