
Ask-tell Optimization with Ax

Optimization problems in which we tune multiple parameters to improve metric performance, but where the inter-parameter interactions are not fully understood, are common across fields including machine learning, robotics, materials science, and chemistry. This category of problem is known as "black-box" optimization. Black-box optimization problems become even harder when evaluations are expensive, time-consuming, or noisy.

We can use Ax to efficiently conduct an experiment in which we "ask" for candidate points to evaluate, "tell" Ax the results, and repeat. We'll use Ax's Client, a tool for managing the state of our experiment, to define an optimization problem, configure an experiment, run trials, analyze results, and persist the experiment for later use.

Because Ax is a black-box optimizer, we can use it to optimize any arbitrary function. In this example we will minimize the Hartmann6 function, a complicated 6-dimensional function with multiple local minima. Hartmann6 is a challenging benchmark commonly used in the global optimization literature -- it tests an algorithm's ability to identify the true global minimum rather than mistakenly converging on a local minimum. Looking at its analytic form, we can see that it would be incredibly challenging to efficiently find the global minimum either by manual trial and error or by traditional design-of-experiments methods like grid search or random search.

f(\mathbf{x}) = -\sum_{i=1}^{4} \alpha_i \exp \left(-\sum_{j=1}^{6} A_{i j}\left(x_j-P_{i j}\right)^2\right)

where

\alpha = (1.0, 1.2, 3.0, 3.2)^T

\mathbf{A} = \left(\begin{array}{cccccc}10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14\end{array}\right)

\mathbf{P} = 10^{-4}\left(\begin{array}{cccccc}1312 & 1696 & 5569 & 124 & 8283 & 5886 \\ 2329 & 4135 & 8307 & 3736 & 1004 & 9991 \\ 2348 & 1451 & 3522 & 2883 & 3047 & 6650 \\ 4047 & 8828 & 8732 & 5743 & 1091 & 381\end{array}\right)

Learning Objectives

  • Understand the basic concepts of black-box optimization
  • Learn how to define an optimization problem using Ax
  • Configure and run an experiment using Ax's Client
  • Analyze the results of the optimization

Prerequisites

Step 1: Import Necessary Modules

First, ensure you have all the necessary imports:

import numpy as np
from ax.api.client import Client
from ax.api.configs import (
    ExperimentConfig,
    RangeParameterConfig,
    ParameterType,
)
Step 2: Initialize the Client

Create an instance of the Client to manage the state of your experiment.

client = Client()

Step 3: Configure the Experiment

The Client instance can be configured with a series of Configs that define how the experiment will be run.

The Hartmann6 problem is usually evaluated on the hypercube x_i \in (0, 1), so we will define six identical RangeParameterConfigs with these bounds and add them to an ExperimentConfig along with other metadata about the experiment.

You may specify additional features like parameter constraints to further refine the search space and parameter scaling to help navigate parameters with nonuniform effects. For more on configuring experiments, see this recipe.

# Define six float parameters x1, x2, x3, ... for the Hartmann6 function
parameters = [
    RangeParameterConfig(
        name=f"x{i + 1}", parameter_type=ParameterType.FLOAT, bounds=(0, 1)
    )
    for i in range(6)
]

# Create an experiment configuration
experiment_config = ExperimentConfig(
    name="hartmann6_experiment",
    parameters=parameters,
    # The following arguments are optional
    description="Optimization of the Hartmann6 function",
    owner="developer",
)

# Apply the experiment configuration to the client
client.configure_experiment(experiment_config=experiment_config)
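
As noted above, parameter constraints can further restrict the search space. The following is a minimal sketch, assuming ExperimentConfig accepts a parameter_constraints list of inequality strings as described in the linked recipe; the constraint itself is purely illustrative and is not used in the rest of this tutorial:

# Hypothetical: restrict the search to the region where x1 + x2 <= 1.0
constrained_config = ExperimentConfig(
    name="hartmann6_constrained_experiment",
    parameters=parameters,
    parameter_constraints=["x1 + x2 <= 1.0"],  # assumed field; see the recipe
)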

Step 4: Configure Optimization

Now, we must configure the objective for this optimization, which we do using Client.configure_optimization. This method expects a string objective, an expression containing either a single metric to maximize, a linear combination of metrics to maximize, or a tuple of multiple metrics to jointly maximize. These expressions are parsed using SymPy. For example:

  • "score" would direct Ax to maximize a metric named score
  • "-loss" would direct Ax to Ax to minimize a metric named loss
  • "task_0 + 0.5 * task_1" would direct Ax to maximize the sum of two task scores, downweighting task_1 by a factor of 0.5
  • "score, -flops" would direct Ax to simultaneously maximize score while minimizing flops

For more information on configuring objectives and outcome constraints, see this recipe.

metric_name = "hartmann6"  # this name is used during the optimization loop in Step 5
objective = f"-{metric_name}"  # minimization is specified by the negative sign

client.configure_optimization(objective=objective)
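
If the experiment also tracked a second metric, the same call could include an outcome constraint. This is a sketch only, assuming configure_optimization accepts an outcome_constraints list of inequality strings as covered in the recipe linked above; "cost" is a hypothetical metric name not used elsewhere in this tutorial:

# Hypothetical: minimize hartmann6 while keeping a "cost" metric under a budget
client.configure_optimization(
    objective=f"-{metric_name}",
    outcome_constraints=["cost <= 1.25"],  # assumed argument; see the recipe
)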

Step 5: Run Trials

Here, we will configure the ask-tell loop.

We begin by defining the Hartmann6 function as written above. Remember, this is just an example problem and any Python function can be substituted here.

# Hartmann6 function
def hartmann6(x1, x2, x3, x4, x5, x6):
    alpha = np.array([1.0, 1.2, 3.0, 3.2])
    A = np.array([
        [10, 3, 17, 3.5, 1.7, 8],
        [0.05, 10, 17, 0.1, 8, 14],
        [3, 3.5, 1.7, 10, 17, 8],
        [17, 8, 0.05, 10, 0.1, 14],
    ])
    P = 10**-4 * np.array([
        [1312, 1696, 5569, 124, 8283, 5886],
        [2329, 4135, 8307, 3736, 1004, 9991],
        [2348, 1451, 3522, 2883, 3047, 6650],
        [4047, 8828, 8732, 5743, 1091, 381],
    ])

    # Sum over the four terms of the Hartmann6 formula
    outer = 0.0
    for i in range(4):
        inner = 0.0
        for j, x in enumerate([x1, x2, x3, x4, x5, x6]):
            inner += A[i, j] * (x - P[i, j]) ** 2
        outer += alpha[i] * np.exp(-inner)
    return -outer

hartmann6(0.1, 0.45, 0.8, 0.25, 0.552, 1.0)
Output:
np.float64(-0.4878737485613134)
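
As a sanity check on our implementation, evaluating the function at the known global minimizer (see Step 6) should return a value close to the true minimum of -3.322:

# Evaluate at the (rounded) known global minimizer; expect roughly -3.322
x_star = (0.201, 0.150, 0.477, 0.273, 0.312, 0.657)
print(hartmann6(*x_star))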

Optimization Loop

We will iteratively call client.get_next_trials to "ask" Ax for a parameterization to evaluate, then call hartmann6 using those parameters, and finally "tell" Ax the result using client.complete_trial.

This loop will run multiple trials to optimize the function.

# Number of trials to run
num_trials = 30

# Run trials
for _ in range(num_trials):
    trials = client.get_next_trials(
        maximum_trials=1
    )  # We will request just one trial at a time in this example
    for trial_index, parameters in trials.items():
        x1 = parameters["x1"]
        x2 = parameters["x2"]
        x3 = parameters["x3"]
        x4 = parameters["x4"]
        x5 = parameters["x5"]
        x6 = parameters["x6"]

        result = hartmann6(x1, x2, x3, x4, x5, x6)

        # Set raw_data as a dictionary with metric names as keys and results as values
        raw_data = {metric_name: result}

        # Complete the trial with the result
        client.complete_trial(trial_index=trial_index, raw_data=raw_data)
        print(f"Completed trial {trial_index} with {raw_data=}")
Output:
Completed trial 0 with raw_data={'hartmann6': np.float64(-0.5019255343509779)}
Completed trial 1 with raw_data={'hartmann6': np.float64(-0.022448780563003885)}
Completed trial 2 with raw_data={'hartmann6': np.float64(-0.14519093834066654)}
Completed trial 3 with raw_data={'hartmann6': np.float64(-0.8604693413805216)}
Completed trial 4 with raw_data={'hartmann6': np.float64(-0.04463788653477111)}
Completed trial 5 with raw_data={'hartmann6': np.float64(-1.53747420257552)}
Completed trial 6 with raw_data={'hartmann6': np.float64(-0.7746285626008993)}
Completed trial 7 with raw_data={'hartmann6': np.float64(-0.5958143119835694)}
Completed trial 8 with raw_data={'hartmann6': np.float64(-0.9180207256799746)}
Completed trial 9 with raw_data={'hartmann6': np.float64(-1.4571186548903279)}
Completed trial 10 with raw_data={'hartmann6': np.float64(-0.5652020510015644)}
Completed trial 11 with raw_data={'hartmann6': np.float64(-1.9025798308645765)}
Completed trial 12 with raw_data={'hartmann6': np.float64(-2.4023918423159127)}
Completed trial 13 with raw_data={'hartmann6': np.float64(-2.667845217834489)}
Completed trial 14 with raw_data={'hartmann6': np.float64(-1.734596106152918)}
Completed trial 15 with raw_data={'hartmann6': np.float64(-2.203282961813707)}
Completed trial 16 with raw_data={'hartmann6': np.float64(-2.9661849718642617)}
Completed trial 17 with raw_data={'hartmann6': np.float64(-2.204289273292293)}
Completed trial 18 with raw_data={'hartmann6': np.float64(-2.453902532362328)}
Completed trial 19 with raw_data={'hartmann6': np.float64(-2.5879260020462516)}
Completed trial 20 with raw_data={'hartmann6': np.float64(-3.034029178842838)}
Completed trial 21 with raw_data={'hartmann6': np.float64(-0.5740518442706214)}
Completed trial 22 with raw_data={'hartmann6': np.float64(-3.253454881380015)}
Completed trial 23 with raw_data={'hartmann6': np.float64(-2.8468072715625494)}
Completed trial 24 with raw_data={'hartmann6': np.float64(-3.213619091963154)}
Completed trial 25 with raw_data={'hartmann6': np.float64(-3.2123294457451803)}
Completed trial 26 with raw_data={'hartmann6': np.float64(-3.2824154473972658)}
Completed trial 27 with raw_data={'hartmann6': np.float64(-3.1691049255409105)}
Completed trial 28 with raw_data={'hartmann6': np.float64(-3.299233158827855)}
Completed trial 29 with raw_data={'hartmann6': np.float64(-3.177834274334158)}
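
Note that get_next_trials can return several trials per "ask," which is useful when evaluations can run in parallel. A minimal sketch using only the API calls shown above (the batch size of 3 is arbitrary):

# Ask for up to three candidate points at once and evaluate them together
trials = client.get_next_trials(maximum_trials=3)
for trial_index, parameters in trials.items():
    result = hartmann6(*(parameters[f"x{i + 1}"] for i in range(6)))
    client.complete_trial(trial_index=trial_index, raw_data={metric_name: result})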

Step 6: Analyze Results

After running trials, you can analyze the results. Most commonly this means extracting the parameterization from the best-performing trial you conducted.

Hartmann6 has a known global minimum of f(x^*) = -3.322 at x^* = (0.201, 0.150, 0.477, 0.273, 0.312, 0.657). Ax is able to identify a point very near this true optimum using just 30 evaluations. This is possible due to the sample efficiency of Bayesian optimization, the optimization method Ax uses under the hood.

best_parameters, prediction, index, name = client.get_best_parameterization()
print("Best Parameters:", best_parameters)
print("Prediction (mean, variance):", prediction)
Output:
Best Parameters: {'x1': 0.21188346451138274, 'x2': 0.16428117335904663, 'x3': 0.4341952205574083, 'x4': 0.25907492071763133, 'x5': 0.29839289904146926, 'x6': 0.6470518604430703}
Prediction (mean, variance): {'hartmann6': (np.float64(-3.2758915184784447), np.float64(0.00119121385765093))}
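
Since we have direct access to the true function here, we can double-check the recommendation by evaluating hartmann6 at the returned parameterization (the keys of best_parameters match the function's keyword arguments):

# Evaluate the true function at Ax's recommended point
print(hartmann6(**best_parameters))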

Step 7: Compute Analyses

Ax can also produce a number of analyses to help interpret the results of the experiment via client.compute_analyses. Users can manually select which analyses to run, or can allow Ax to select which would be most relevant. In this case Ax selects the following:

  • Parallel Coordinates Plot shows which parameterizations were evaluated and what metric values were observed -- this is useful for getting a high-level overview of how thoroughly the search space was explored and which regions tend to produce which outcomes
  • Interaction Analysis Plot shows which parameters have the largest effect on the function and plots the most important parameters as one- or two-dimensional surfaces
  • Summary lists all trials generated along with their parameterizations, observations, and miscellaneous metadata
# display=True instructs Ax to sort then render the resulting analyses
cards = client.compute_analyses(display=True)

Parallel Coordinates for hartmann6

The parallel coordinates plot displays multi-dimensional data by representing each parameter as a parallel axis. This plot helps in assessing how thoroughly the search space has been explored and in identifying patterns or clusterings associated with high-performing (good) or low-performing (bad) arms. By tracing lines across the axes, one can observe correlations and interactions between parameters, gaining insights into the relationships that contribute to the success or failure of different configurations within the experiment.


Summary for hartmann6_experiment

High-level summary of the Trials in this Experiment

| trial_index | arm_name | trial_status | generation_node | hartmann6 | x1 | x2 | x3 | x4 | x5 | x6 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0_0 | COMPLETED | Sobol | -0.501926 | 0.221352 | 0.505657 | 0.791436 | 0.888981 | 0.406419 | 0.956151 |
| 1 | 1_0 | COMPLETED | Sobol | -0.022449 | 0.704186 | 0.250006 | 0.379711 | 0.458323 | 0.961675 | 0.123115 |
| 2 | 2_0 | COMPLETED | Sobol | -0.145191 | 0.771154 | 0.755537 | 0.534144 | 0.589093 | 0.701101 | 0.255063 |
| 3 | 3_0 | COMPLETED | Sobol | -0.860469 | 0.287191 | 0.00037 | 0.137009 | 0.000216 | 0.176761 | 0.664214 |
| 4 | 4_0 | COMPLETED | Sobol | -0.044638 | 0.407901 | 0.976937 | 0.3023 | 0.195248 | 0.006709 | 0.822607 |
| 5 | 5_0 | COMPLETED | MBM | -1.53747 | 0.073155 | 0 | 0.25316 | 0.148463 | 0.20774 | 0.816667 |
| 6 | 6_0 | COMPLETED | MBM | -0.774629 | 0 | 0 | 0 | 0 | 0.371688 | 0.665134 |
| 7 | 7_0 | COMPLETED | MBM | -0.595814 | 0.196423 | 0 | 0.434822 | 0.166626 | 0.037395 | 0.885145 |
| 8 | 8_0 | COMPLETED | MBM | -0.918021 | 0.03533 | 0 | 0.405542 | 0.163929 | 0.234641 | 0.990631 |
| 9 | 9_0 | COMPLETED | MBM | -1.45712 | 0 | 0 | 0.218451 | 0.244235 | 0.313667 | 0.90227 |
| 10 | 10_0 | COMPLETED | MBM | -0.565202 | 0 | 0 | 0.240638 | 0 | 0.107605 | 0.607965 |
| 11 | 11_0 | COMPLETED | MBM | -1.90258 | 0 | 0 | 0.249127 | 0.236354 | 0.284169 | 0.819713 |
| 12 | 12_0 | COMPLETED | MBM | -2.40239 | 0.025407 | 0 | 0.336605 | 0.305614 | 0.29196 | 0.756491 |
| 13 | 13_0 | COMPLETED | MBM | -2.66784 | 0.119091 | 0 | 0.439416 | 0.379654 | 0.296443 | 0.695929 |
| 14 | 14_0 | COMPLETED | MBM | -1.7346 | 0 | 0 | 0.25926 | 0.45492 | 0.254297 | 0.652627 |
| 15 | 15_0 | COMPLETED | MBM | -2.20328 | 0.178457 | 0 | 0.505581 | 0.437471 | 0.369049 | 0.725332 |
| 16 | 16_0 | COMPLETED | MBM | -2.96618 | 0.257104 | 0 | 0.529468 | 0.302724 | 0.290807 | 0.647679 |
| 17 | 17_0 | COMPLETED | MBM | -2.20429 | 0 | 0 | 0.67244 | 0.317315 | 0.275811 | 0.636231 |
| 18 | 18_0 | COMPLETED | MBM | -2.4539 | 0.45743 | 0 | 0.43786 | 0.285296 | 0.308876 | 0.619494 |
| 19 | 19_0 | COMPLETED | MBM | -2.58793 | 0.240049 | 0 | 0.139718 | 0.307023 | 0.300896 | 0.655223 |
| 20 | 20_0 | COMPLETED | MBM | -3.03403 | 0.235173 | 0.299635 | 0.477795 | 0.299152 | 0.296647 | 0.676215 |
| 21 | 21_0 | COMPLETED | MBM | -0.574052 | 0.217013 | 0.86203 | 0.450361 | 0.284583 | 0.322264 | 0.614581 |
| 22 | 22_0 | COMPLETED | MBM | -3.25346 | 0.238937 | 0.161156 | 0.475236 | 0.299304 | 0.288184 | 0.678222 |
| 23 | 23_0 | COMPLETED | MBM | -2.84681 | 0.286002 | 0.17272 | 0.520515 | 0.332247 | 0.247339 | 0.725811 |
| 24 | 24_0 | COMPLETED | MBM | -3.21362 | 0.203698 | 0.155476 | 0.41977 | 0.273532 | 0.325993 | 0.606466 |
| 25 | 25_0 | COMPLETED | MBM | -3.21233 | 0.225211 | 0.15683 | 0.464645 | 0.253081 | 0.350249 | 0.679414 |
| 26 | 26_0 | COMPLETED | MBM | -3.28241 | 0.211883 | 0.164281 | 0.434195 | 0.259075 | 0.298393 | 0.647052 |
| 27 | 27_0 | COMPLETED | MBM | -3.16911 | 0.227873 | 0.18671 | 0.538931 | 0.231134 | 0.302479 | 0.621384 |
| 28 | 28_0 | COMPLETED | MBM | -3.29923 | 0.169304 | 0.174556 | 0.466938 | 0.270202 | 0.316357 | 0.657593 |
| 29 | 29_0 | COMPLETED | MBM | -3.17783 | 0.165182 | 0.209001 | 0.399035 | 0.246805 | 0.303844 | 0.662241 |

Sensitivity Analysis for hartmann6

Understand how each parameter affects hartmann6 according to a second-order sensitivity analysis.


x1 vs. hartmann6

The slice plot provides a one-dimensional view of predicted outcomes for hartmann6 as a function of a single parameter, while keeping all other parameters fixed at their status_quo value (or mean value if status_quo is unavailable). This visualization helps in understanding the sensitivity and impact of changes in the selected parameter on the predicted metric outcomes.


x4 vs. hartmann6

The same slice-plot view as above, shown for parameter x4.

x2, x4 vs. hartmann6

The contour plot visualizes the predicted outcomes for hartmann6 across a two-dimensional parameter space, with other parameters held fixed at their status_quo value (or mean value if status_quo is unavailable). This plot helps in identifying regions of optimal performance and understanding how changes in the selected parameters influence the predicted outcomes. Contour lines represent levels of constant predicted values, providing insights into the gradient and potential optima within the parameter space.


x5, x6 vs. hartmann6

The same contour-plot view as above, shown for parameters x5 and x6.

x2, x5 vs. hartmann6

The same contour-plot view as above, shown for parameters x2 and x5.

Cross Validation for hartmann6

The cross-validation plot displays the model fit for each metric in the experiment. It employs a leave-one-out approach, where the model is trained on all data except one sample, which is used for validation. The plot shows the predicted outcome for the validation set on the y-axis against its actual value on the x-axis. Points that align closely with the dotted diagonal line indicate a strong model fit, signifying accurate predictions. Additionally, the plot includes 95% confidence intervals that provide insight into the noise in observations and the uncertainty in model predictions. A horizontal, flat line of predictions indicates that the model has not picked up on sufficient signal in the data, and instead is just predicting the mean.

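
As noted above, users can also select specific analyses rather than letting Ax choose. The sketch below is assumption-laden: it presumes compute_analyses accepts an analyses argument and that a ParallelCoordinatesPlot class is importable from ax.analysis (module paths and constructor signatures vary between Ax versions, so check your installation):

# Hypothetical: request only a parallel coordinates plot for our metric
from ax.analysis.parallel_coordinates import ParallelCoordinatesPlot  # assumed path

cards = client.compute_analyses(
    analyses=[ParallelCoordinatesPlot(metric_name="hartmann6")],  # assumed signature
    display=True,
)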

Conclusion

This tutorial demonstrates how to use Ax's Client for ask-tell optimization of Python functions, using the Hartmann6 function as an example. You can adjust the function and parameters to suit your specific optimization problem.
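
The introduction also mentioned persisting the experiment for later use. A hedged sketch, assuming your Ax version exposes JSON snapshot helpers on the Client (these method names are an assumption; see Ax's storage recipe for the supported options in your version):

# Hypothetical: snapshot the client state to disk and restore it later
client.save_to_json_file(filepath="hartmann6_experiment.json")  # assumed method
restored_client = Client.load_from_json_file(filepath="hartmann6_experiment.json")  # assumed method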