Ask-tell Optimization with Ax
Optimization problems in which we wish to tune multiple parameters to improve metric performance, but where the interactions between parameters are not fully understood, are common across many fields including machine learning, robotics, materials science, and chemistry. This category of problem is known as "black-box" optimization. Black-box optimization becomes even more challenging when evaluations are expensive, time-consuming, or noisy.
We can use Ax to efficiently conduct an experiment in which we "ask" for candidate points to evaluate, "tell" Ax the results, and repeat. We'll use Ax's `Client`, a tool for managing the state of our experiment, and we'll learn how to define an optimization problem, configure an experiment, run trials, analyze results, and persist the experiment for later use, all via the `Client`.
Because Ax is a black box optimizer, we can use it to optimize any arbitrary function. In this example we will minimize the Hartmann6 function, a complicated 6-dimensional function with multiple local minima. Hartmann6 is a challenging benchmark for optimization algorithms commonly used in the global optimization literature -- it tests an algorithm's ability to identify the true global minimum, rather than mistakenly converging on a local minimum. Looking at its analytic form we can see that it would be incredibly challenging to efficiently find the global minimum either by manual trial-and-error or traditional design of experiments like grid search or random search:

$$f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left( -\sum_{j=1}^{6} A_{ij} (x_j - P_{ij})^2 \right)$$

where $\alpha \in \mathbb{R}^4$ is a vector of weights and $A, P \in \mathbb{R}^{4 \times 6}$ are matrices of fixed constants (their numeric values appear in the code in Step 5).
Learning Objectives
- Understand the basic concepts of black box optimization
- Learn how to define an optimization problem using Ax
- Configure and run an experiment using Ax's `Client`
- Analyze the results of the optimization
Prerequisites
- Familiarity with Python and basic programming concepts
- Understanding of adaptive experimentation and Bayesian optimization
Step 1: Import Necessary Modules
First, ensure you have all the necessary imports:
import numpy as np
from ax.preview.api.client import Client
from ax.preview.api.configs import (
    ExperimentConfig,
    RangeParameterConfig,
    ParameterType,
)
Step 2: Initialize the Client
Create an instance of the `Client` to manage the state of your experiment.
client = Client()
Step 3: Configure the Experiment
The `Client` instance can be configured with a series of `Config`s that define how the experiment will be run.
The Hartmann6 problem is usually evaluated on the hypercube $[0, 1]^6$, so we will define six identical `RangeParameterConfig`s with these bounds and add these to an `ExperimentConfig` along with other metadata about the experiment.
You may specify additional features like parameter constraints to further refine the search space and parameter scaling to help navigate parameters with nonuniform effects; a sketch of both follows the configuration code below. For more on configuring experiments, see this recipe.
# Define six float parameters x1, x2, x3, ... for the Hartmann6 function
parameters = [
    RangeParameterConfig(
        name=f"x{i + 1}", parameter_type=ParameterType.FLOAT, bounds=(0, 1)
    )
    for i in range(6)
]
# Create an experiment configuration
experiment_config = ExperimentConfig(
    name="hartmann6_experiment",
    parameters=parameters,
    # The following arguments are optional
    description="Optimization of the Hartmann6 function",
    owner="developer",
)
# Apply the experiment configuration to the client
client.configure_experiment(experiment_config=experiment_config)
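As an illustration of those refinements, here is a minimal sketch that is not part of the running example. It assumes `ExperimentConfig` accepts SymPy-style constraint strings via a `parameter_constraints` argument and that a `ParameterScaling` enum is available from `ax.preview.api.configs`; verify both against your installed Ax version.

# Hypothetical refinements -- verify these fields against your Ax version.
from ax.preview.api.configs import ParameterScaling

log_scaled_parameter = RangeParameterConfig(
    name="learning_rate",  # hypothetical parameter with nonuniform effects
    parameter_type=ParameterType.FLOAT,
    bounds=(1e-5, 1e-1),
    scaling=ParameterScaling.LOG,  # assumed field for log-scale search
)

constrained_config = ExperimentConfig(
    name="hartmann6_constrained",
    parameters=parameters,
    parameter_constraints=["x1 + x2 <= 1.0"],  # assumed: SymPy-style string
)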
Step 4: Configure Optimization
Now, we must configure the objective for this optimization, which we do using `Client.configure_optimization`. This method expects a string `objective`, an expression containing either a single metric to maximize, a linear combination of metrics to maximize, or a tuple of multiple metrics to jointly maximize. These expressions are parsed using SymPy. For example:
"score"
would direct Ax to maximize a metric named score"-loss"
would direct Ax to Ax to minimize a metric named loss"task_0 + 0.5 * task_1"
would direct Ax to maximize the sum of two task scores, downweighting task_1 by a factor of 0.5"score, -flops"
would direct Ax to simultaneously maximize score while minimizing flops
For more information on configuring objectives and outcome constraints, see this recipe.
metric_name = "hartmann6" # this name is used during the optimization loop in Step 5
objective = f"-{metric_name}" # minimization is specified by the negative sign
client.configure_optimization(objective=objective)
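If our experiment tracked additional metrics, we could also constrain them. Below is a hedged sketch, not part of the running example: it assumes `configure_optimization` accepts an `outcome_constraints` list of inequality strings, and the metric names `score` and `flops` are hypothetical.

# Hypothetical: maximize score while bounding flops. Verify the
# outcome_constraints argument against your Ax version.
client.configure_optimization(
    objective="score",
    outcome_constraints=["flops <= 1000000.0"],
)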
Step 5: Run Trials
Here, we will configure the ask-tell loop.
We begin by defining the Hartmann6 function as written above. Remember, this is just an example problem and any Python function can be substituted here.
# Hartmann6 function
def hartmann6(x1, x2, x3, x4, x5, x6):
    alpha = np.array([1.0, 1.2, 3.0, 3.2])
    A = np.array([
        [10, 3, 17, 3.5, 1.7, 8],
        [0.05, 10, 17, 0.1, 8, 14],
        [3, 3.5, 1.7, 10, 17, 8],
        [17, 8, 0.05, 10, 0.1, 14],
    ])
    P = 10**-4 * np.array([
        [1312, 1696, 5569, 124, 8283, 5886],
        [2329, 4135, 8307, 3736, 1004, 9991],
        [2348, 1451, 3522, 2883, 3047, 6650],
        [4047, 8828, 8732, 5743, 1091, 381],
    ])

    outer = 0.0
    for i in range(4):
        inner = 0.0
        for j, x in enumerate([x1, x2, x3, x4, x5, x6]):
            inner += A[i, j] * (x - P[i, j]) ** 2
        outer += alpha[i] * np.exp(-inner)
    return -outer
hartmann6(0.1, 0.45, 0.8, 0.25, 0.552, 1.0)
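As a quick sanity check, evaluating the function at the known global minimizer (given in Step 6) should return approximately -3.32237:

# Should print roughly -3.32237, the known global minimum of Hartmann6
print(hartmann6(0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573))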
Optimization Loop
We will iteratively call `client.get_next_trials` to "ask" Ax for a parameterization to evaluate, then call `hartmann6` using those parameters, and finally "tell" Ax the result using `client.complete_trial`.
This loop will run multiple trials to optimize the function.
# Number of trials to run
num_trials = 30
# Run trials
for _ in range(num_trials):
    trials = client.get_next_trials(
        maximum_trials=1
    )  # We will request just one trial at a time in this example
    for trial_index, parameters in trials.items():
        x1 = parameters["x1"]
        x2 = parameters["x2"]
        x3 = parameters["x3"]
        x4 = parameters["x4"]
        x5 = parameters["x5"]
        x6 = parameters["x6"]

        result = hartmann6(x1, x2, x3, x4, x5, x6)

        # Set raw_data as a dictionary with metric names as keys and results as values
        raw_data = {metric_name: result}

        # Complete the trial with the result
        client.complete_trial(trial_index=trial_index, raw_data=raw_data)
        print(f"Completed trial {trial_index} with {raw_data=}")
Step 6: Analyze Results
After running trials, you can analyze the results. Most commonly this means extracting the parameterization from the best-performing trial you conducted.

Hartmann6 has a known global minimum of $f(x^*) = -3.32237$ at $x^* = (0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573)$. Ax is able to identify a point very near to this true optimum using just 30 evaluations. This is possible due to the sample efficiency of Bayesian optimization, the optimization method we use under the hood in Ax.
best_parameters, prediction, index, name = client.get_best_parameterization()
print("Best Parameters:", best_parameters)
print("Prediction (mean, variance):", prediction)
Step 7: Compute Analyses
Ax can also produce a number of analyses to help interpret the results of the experiment via `client.compute_analyses`. Users can manually select which analyses to run, or can allow Ax to select which would be most relevant. In this case Ax selects the following:
- Parallel Coordinates Plot shows which parameterizations were evaluated and what metric values were observed -- this is useful for getting a high-level overview of how thoroughly the search space was explored and which regions tend to produce which outcomes
- Interaction Analysis Plot shows which parameters have the largest effect on the function and plots the most important parameters as 1- or 2-dimensional surfaces
- Summary lists all trials generated along with their parameterizations, observations, and miscellaneous metadata
client.compute_analyses(display=True) # By default Ax will display the AnalysisCards produced by compute_analyses
Parallel Coordinates for hartmann6
View arm parameterizations with their respective metric values
Interaction Analysis for hartmann6
Understand an Experiment's data as one- or two-dimensional additive components with sparsity. Important components are visualized through slice or contour plots
Summary for hartmann6_experiment
High-level summary of the `Trial`-s in this `Experiment`
| trial_index | arm_name | trial_status | generation_method | generation_node | hartmann6 | x1 | x2 | x3 | x4 | x5 | x6 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0_0 | COMPLETED | Sobol | Sobol | -0.708423 | 0.173773 | 0.271171 | 0.506631 | 0.084857 | 0.839444 | 0.766637 |
| 1 | 1_0 | COMPLETED | Sobol | Sobol | -0.284925 | 0.61947 | 0.51478 | 0.388261 | 0.63222 | 0.274863 | 0.362562 |
| 2 | 2_0 | COMPLETED | Sobol | Sobol | -0.016405 | 0.933987 | 0.162711 | 0.939876 | 0.440781 | 0.064549 | 0.16991 |
| 3 | 3_0 | COMPLETED | Sobol | Sobol | -0.018774 | 0.366945 | 0.933662 | 0.071476 | 0.768418 | 0.50096 | 0.702846 |
| 4 | 4_0 | COMPLETED | Sobol | Sobol | -0.003359 | 0.436339 | 0.090697 | 0.328534 | 0.996154 | 0.694556 | 0.913037 |
| 5 | 5_0 | COMPLETED | BoTorch | MBM | -0.741382 | 0.20176 | 0.191007 | 0.43738 | 0 | 0.72696 | 0.503929 |
| 6 | 6_0 | COMPLETED | BoTorch | MBM | -0.062914 | 0.022045 | 0.632735 | 0.583252 | 0 | 0.651673 | 0.076503 |
| 7 | 7_0 | COMPLETED | BoTorch | MBM | -0.240771 | 0.152728 | 0.12617 | 0.416962 | 0 | 0.942111 | 0.952824 |
| 8 | 8_0 | COMPLETED | BoTorch | MBM | -0.90389 | 0.203533 | 0.291505 | 0.503375 | 0.070468 | 0.738616 | 0.569746 |
| 9 | 9_0 | COMPLETED | BoTorch | MBM | -0.612622 | 0.240901 | 0.41302 | 0.67961 | 0.175518 | 0.713751 | 0.596461 |
| 10 | 10_0 | COMPLETED | BoTorch | MBM | -0.498427 | 0.19876 | 0.32615 | 0.376029 | 0 | 0.681757 | 0.455087 |
| 11 | 11_0 | COMPLETED | BoTorch | MBM | -0.126429 | 1 | 0 | 1 | 0 | 0 | 1 |
| 12 | 12_0 | COMPLETED | BoTorch | MBM | -0.731956 | 0 | 0 | 0.538593 | 0 | 0.994263 | 0.577746 |
| 13 | 13_0 | COMPLETED | BoTorch | MBM | -1.14062 | 0.30873 | 0.087802 | 0.533907 | 0 | 0.51799 | 0.603244 |
| 14 | 14_0 | COMPLETED | BoTorch | MBM | -1.5728 | 0.23235 | 0 | 0.541902 | 0 | 0.262173 | 0.626407 |
| 15 | 15_0 | COMPLETED | BoTorch | MBM | -0.799088 | 0.108991 | 0 | 0.543367 | 0 | 0.077356 | 0.647915 |
| 16 | 16_0 | COMPLETED | BoTorch | MBM | -0.304512 | 0.24536 | 0 | 0.535075 | 0.050193 | 0.196283 | 0.185582 |
| 17 | 17_0 | COMPLETED | BoTorch | MBM | -1.34412 | 0.259164 | 0 | 0.518933 | 0 | 0.206976 | 0.696505 |
| 18 | 18_0 | COMPLETED | BoTorch | MBM | -1.62924 | 0.212458 | 0 | 0.558825 | 0 | 0.373392 | 0.625517 |
| 19 | 19_0 | COMPLETED | BoTorch | MBM | -1.52021 | 0.04632 | 0 | 0.58603 | 0 | 0.328551 | 0.665058 |
| 20 | 20_0 | COMPLETED | BoTorch | MBM | -1.51406 | 0.257781 | 0 | 0.629608 | 0 | 0.31181 | 0.64316 |
| 21 | 21_0 | COMPLETED | BoTorch | MBM | -1.12216 | 0.480905 | 0 | 0.512271 | 0 | 0.325684 | 0.62696 |
| 22 | 22_0 | COMPLETED | BoTorch | MBM | -2.33306 | 0.161325 | 0 | 0.550402 | 0.434149 | 0.339351 | 0.625884 |
| 23 | 23_0 | COMPLETED | BoTorch | MBM | -2.06767 | 0.133249 | 0 | 0.326702 | 0.452402 | 0.356438 | 0.620311 |
| 24 | 24_0 | COMPLETED | BoTorch | MBM | -1.30385 | 0.080485 | 0 | 0.556066 | 0.551158 | 0.36619 | 0.582391 |
| 25 | 25_0 | COMPLETED | BoTorch | MBM | -2.9574 | 0.20601 | 0 | 0.567308 | 0.302738 | 0.322898 | 0.646673 |
| 26 | 26_0 | COMPLETED | BoTorch | MBM | -2.09611 | 0.154304 | 0 | 0.782149 | 0.266463 | 0.363002 | 0.643348 |
| 27 | 27_0 | COMPLETED | BoTorch | MBM | -3.0744 | 0.208448 | 0 | 0.476936 | 0.273301 | 0.317399 | 0.659336 |
| 28 | 28_0 | COMPLETED | BoTorch | MBM | -2.86012 | 0.273073 | 0 | 0.472314 | 0.296434 | 0.305218 | 0.738664 |
| 29 | 29_0 | COMPLETED | BoTorch | MBM | -2.87617 | 0.229683 | 0 | 0.475138 | 0.282567 | 0.256997 | 0.607644 |
Cross Validation for hartmann6
Out-of-sample predictions using leave-one-out CV
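To manually select an analysis instead of letting Ax choose, you can pass analysis instances to `compute_analyses`. A sketch under the assumption that `ParallelCoordinatesPlot` is importable from `ax.analysis.parallel_coordinates` and that `compute_analyses` accepts an `analyses` argument; verify both against your installed Ax version.

# Hypothetical manual selection of a single analysis -- verify the import
# path and arguments against your Ax version.
from ax.analysis.parallel_coordinates import ParallelCoordinatesPlot

client.compute_analyses(
    analyses=[ParallelCoordinatesPlot(metric_name="hartmann6")],
    display=True,
)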
Conclusion
This tutorial demonstrated how to use Ax's `Client` for ask-tell optimization of Python functions, using the Hartmann6 function as an example. You can adjust the function and parameters to suit your specific optimization problem.
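As noted in the introduction, the `Client` can also persist experiment state for later use. A hedged sketch, assuming JSON persistence methods named `save_to_json_file` and `load_from_json_file` exist on the `Client` in your Ax version:

# Hypothetical persistence round trip -- verify these method names
# against your installed Ax version.
client.save_to_json_file("hartmann6_experiment.json")
restored_client = Client.load_from_json_file("hartmann6_experiment.json")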