Trial-level early stopping
Trial-level early stopping monitors the results of expensive evaluations that produce timeseries-like data and terminates those that are unlikely to yield promising final results before the evaluation completes. This reduces computational waste and lets the same resource budget explore more configurations. Early stopping is useful for expensive-to-evaluate problems where stepwise information is available on the way to the final measurement.
Like the Getting Started tutorial, we'll be minimizing the Hartmann6 function, but this time we've modified it to incorporate a new parameter t, which allows the function to produce timeseries-like data: the value returned gets closer and closer to Hartmann6's true value as t increases. At t = 100 the function simply returns Hartmann6's unaltered value.
While the function is synthetic, the workflow captures the intended principles for this tutorial and is similar to the process of training typical machine learning models.
Learning Objectives
- Understand when time-series-like data can be used in an optimization experiment
- Run a simple optimization experiment with early stopping
- Configure details of an early stopping strategy
- Analyze the results of the optimization
Prerequisites
- Familiarity with Python and basic programming concepts
- Understanding of adaptive experimentation and Bayesian optimization
- Getting Started with Ax
Step 1: Import Necessary Modules
First, ensure you have all the necessary imports:
import numpy as np
from ax.api.client import Client
from ax.api.configs import RangeParameterConfig
Step 2: Initialize the Client
Create an instance of the Client
to manage the state of your experiment.
client = Client()
Step 3: Configure the Experiment
The Client instance can be configured with a series of Configs that define how the experiment will be run.
The Hartmann6 problem is usually evaluated on the unit hypercube [0, 1]^6, so we will define six identical RangeParameterConfigs with these bounds.
You may specify additional features like parameter constraints to further refine the search space and parameter scaling to help navigate parameters with nonuniform effects.
# Define six float parameters for the Hartmann6 function
parameters = [
    RangeParameterConfig(
        name=f"x{i + 1}", parameter_type="float", bounds=(0, 1)
    )
    for i in range(6)
]
client.configure_experiment(parameters=parameters)
Step 4: Configure Optimization
Now, we must configure the objective for this optimization, which we do using Client.configure_optimization. This method expects a string objective, an expression containing either a single metric to maximize, a linear combination of metrics to maximize, or a tuple of multiple metrics to jointly maximize. These expressions are parsed using SymPy. For example:
- "score" would direct Ax to maximize a metric named score
- "-loss" would direct Ax to minimize a metric named loss
- "task_0 + 0.5 * task_1" would direct Ax to maximize the sum of two task scores, downweighting task_1 by a factor of 0.5
- "score, -flops" would direct Ax to simultaneously maximize score while minimizing flops
See these recipes for more information on configuring objectives and outcome constraints.
client.configure_optimization(objective="-hartmann6")
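As a rough illustration of the objective strings above (this is not Ax's internal parsing code, just a sketch of what parsing such an expression with SymPy looks like):

```python
# Sketch: parse a scalarized objective string with SymPy, as Ax does
# conceptually for strings like "task_0 + 0.5 * task_1".
from sympy import Symbol, sympify

expr = sympify("task_0 + 0.5 * task_1")

# The metrics referenced by the expression appear as free symbols.
metric_names = sorted(s.name for s in expr.free_symbols)
print(metric_names)  # ['task_0', 'task_1']

# Substituting observed metric values evaluates the scalarized objective.
value = expr.subs({Symbol("task_0"): 2.0, Symbol("task_1"): 4.0})
print(float(value))  # 4.0
```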
Step 5: Run Trials with Early Stopping
Here, we will configure the ask-tell loop.
We begin by defining our Hartmann6 function as written above. Remember, this is just an example problem and any Python function can be substituted here.
Then we will iteratively do the following:
- Call client.get_next_trials to "ask" Ax for a parameterization to evaluate
- Evaluate hartmann6_curve using those parameters in an inner loop to simulate the generation of timeseries data
- "Tell" Ax the partial result using client.attach_data
- Query whether the trial should be stopped via client.should_stop_trial_early
- Stop the underperforming trial and report back to Ax that it has been stopped
This loop will run multiple trials to optimize the function.
Ax will configure an EarlyStoppingStrategy when should_stop_trial_early is called for the first time. By default Ax uses a percentile early stopping strategy, which terminates a trial early if its performance falls below a percentile threshold compared to other trials at the same step. Early stopping can only occur after a minimum number of progressions, to prevent stopping prematurely. This ensures both that enough data has been gathered on the current trial to make a decision and that a minimum number of trials have completed with curve data; these completed trials establish a baseline.
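The percentile rule can be sketched in a few lines of plain Python. This is illustrative only, not Ax's PercentileEarlyStoppingStrategy, which additionally enforces the minimum-progression and minimum-baseline-trial requirements described above:

```python
# Simplified sketch of a percentile early-stopping rule: compare a trial's
# current objective value against the median of comparable trials observed
# at the same progression (lower is better when minimizing).
from statistics import median

def should_stop(trial_value, comparable_values, minimize=True):
    threshold = median(comparable_values)
    return trial_value > threshold if minimize else trial_value < threshold

# A trial with loss 2.9 vs. other trials' losses at the same step:
print(should_stop(2.9, [2.1, 2.4, 2.8]))  # True: worse than the median, 2.4
print(should_stop(2.2, [2.1, 2.4, 2.8]))  # False: better than the median
```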
# Hartmann6 function
def hartmann6(x1, x2, x3, x4, x5, x6):
    alpha = np.array([1.0, 1.2, 3.0, 3.2])
    A = np.array(
        [
            [10, 3, 17, 3.5, 1.7, 8],
            [0.05, 10, 17, 0.1, 8, 14],
            [3, 3.5, 1.7, 10, 17, 8],
            [17, 8, 0.05, 10, 0.1, 14],
        ]
    )
    P = 10**-4 * np.array(
        [
            [1312, 1696, 5569, 124, 8283, 5886],
            [2329, 4135, 8307, 3736, 1004, 9991],
            [2348, 1451, 3522, 2883, 3047, 6650],
            [4047, 8828, 8732, 5743, 1091, 381],
        ]
    )

    outer = 0.0
    for i in range(4):
        inner = 0.0
        for j, x in enumerate([x1, x2, x3, x4, x5, x6]):
            inner += A[i, j] * (x - P[i, j]) ** 2
        outer += alpha[i] * np.exp(-inner)
    return -outer
# Hartmann6 function with additional t term such that
# hartmann6(X) == hartmann6_curve(X, t=100)
def hartmann6_curve(x1, x2, x3, x4, x5, x6, t):
    return hartmann6(x1, x2, x3, x4, x5, x6) - np.log2(t / 100)
(
    hartmann6(0.1, 0.45, 0.8, 0.25, 0.552, 1.0),
    hartmann6_curve(0.1, 0.45, 0.8, 0.25, 0.552, 1.0, 100),
)
(np.float64(-0.4878737485613134), np.float64(-0.4878737485613134))
maximum_progressions = 100  # Observe hartmann6_curve over 100 progressions

for _ in range(30):  # Run 30 rounds of trials
    trials = client.get_next_trials(max_trials=3)
    for trial_index, parameters in trials.items():
        for t in range(1, maximum_progressions + 1):
            raw_data = {"hartmann6": hartmann6_curve(t=t, **parameters)}

            # On the final reading call complete_trial and break, else call attach_data
            if t == maximum_progressions:
                client.complete_trial(
                    trial_index=trial_index, raw_data=raw_data, progression=t
                )
                break

            client.attach_data(
                trial_index=trial_index, raw_data=raw_data, progression=t
            )

            # If the trial is underperforming, stop it
            if client.should_stop_trial_early(trial_index=trial_index):
                client.mark_trial_early_stopped(trial_index=trial_index)
                break
[WARNING 05-23 05:06:11] ax.api.client: 3 trials requested but only 2 could be generated.
[WARNING 05-23 05:06:15] ax.api.client: 3 trials requested but only 1 could be generated.
[INFO 05-23 05:06:45] ax.early_stopping.strategies.percentile: Early stoppinging trial 11: Trial objective value 2.6017366018638723 is worse than 50.0-th percentile (2.4509037916409793) across comparable trials..
[INFO 05-23 05:06:57] ax.early_stopping.strategies.percentile: Early stoppinging trial 14: Trial objective value 3.23625391435411 is worse than 50.0-th percentile (2.300070981418086) across comparable trials..
[INFO 05-23 05:07:02] ax.early_stopping.strategies.percentile: Early stoppinging trial 15: Trial objective value 3.1414865743921254 is worse than 50.0-th percentile (2.4509037916409793) across comparable trials..
[INFO 05-23 05:07:05] ax.early_stopping.strategies.percentile: Early stoppinging trial 17: Trial objective value 3.024080540099374 is worse than 50.0-th percentile (2.4509037916409793) across comparable trials..
[INFO 05-23 05:07:12] ax.early_stopping.strategies.percentile: Early stoppinging trial 18: Trial objective value 2.8372988541482886 is worse than 50.0-th percentile (2.6017366018638723) across comparable trials..
[INFO 05-23 05:07:12] ax.early_stopping.strategies.percentile: Early stoppinging trial 19: Trial objective value 2.6404123642574286 is worse than 50.0-th percentile (2.6210744830606503) across comparable trials..
[INFO 05-23 05:07:12] ax.early_stopping.strategies.percentile: Early stoppinging trial 20: Trial objective value 2.747533375247382 is worse than 50.0-th percentile (2.6404123642574286) across comparable trials..
[INFO 05-23 05:07:17] ax.early_stopping.strategies.percentile: Early stoppinging trial 21: Trial objective value 2.843416482596313 is worse than 50.0-th percentile (2.6939728697524052) across comparable trials..
[INFO 05-23 05:07:18] ax.early_stopping.strategies.percentile: Early stoppinging trial 22: Trial objective value 2.5025820982208256 is worse than 50.0-th percentile (2.064327769899763) across comparable trials..
[INFO 05-23 05:07:18] ax.early_stopping.strategies.percentile: Early stoppinging trial 23: Trial objective value 2.7598548491658224 is worse than 50.0-th percentile (2.6939728697524052) across comparable trials..
[INFO 05-23 05:07:24] ax.early_stopping.strategies.percentile: Early stoppinging trial 24: Trial objective value 2.8406883693190386 is worse than 50.0-th percentile (2.747533375247382) across comparable trials..
[INFO 05-23 05:07:24] ax.early_stopping.strategies.percentile: Early stoppinging trial 25: Trial objective value 2.4869316366076424 is worse than 50.0-th percentile (2.1134476137839573) across comparable trials..
[INFO 05-23 05:07:24] ax.early_stopping.strategies.percentile: Early stoppinging trial 26: Trial objective value 2.7798485780644233 is worse than 50.0-th percentile (2.747533375247382) across comparable trials..
[INFO 05-23 05:07:29] ax.early_stopping.strategies.percentile: Early stoppinging trial 27: Trial objective value 2.8491788803067335 is worse than 50.0-th percentile (2.753694112206602) across comparable trials..
[INFO 05-23 05:07:30] ax.early_stopping.strategies.percentile: Early stoppinging trial 28: Trial objective value 2.462417602993606 is worse than 50.0-th percentile (2.1625674576681515) across comparable trials..
[INFO 05-23 05:07:30] ax.early_stopping.strategies.percentile: Early stoppinging trial 29: Trial objective value 2.7818300674504846 is worse than 50.0-th percentile (2.753694112206602) across comparable trials..
[INFO 05-23 05:07:35] ax.early_stopping.strategies.percentile: Early stoppinging trial 30: Trial objective value 2.8477869761564074 is worse than 50.0-th percentile (2.7598548491658224) across comparable trials..
[INFO 05-23 05:07:35] ax.early_stopping.strategies.percentile: Early stoppinging trial 31: Trial objective value 2.5286049941211184 is worse than 50.0-th percentile (2.3124925303308785) across comparable trials..
[INFO 05-23 05:07:35] ax.early_stopping.strategies.percentile: Early stoppinging trial 32: Trial objective value 2.813928667587638 is worse than 50.0-th percentile (2.7598548491658224) across comparable trials..
[INFO 05-23 05:07:40] ax.early_stopping.strategies.percentile: Early stoppinging trial 33: Trial objective value 2.8503075492824514 is worse than 50.0-th percentile (2.7698517136151226) across comparable trials..
[INFO 05-23 05:07:41] ax.early_stopping.strategies.percentile: Early stoppinging trial 34: Trial objective value 2.472513087232102 is worse than 50.0-th percentile (2.462417602993606) across comparable trials..
[INFO 05-23 05:07:41] ax.early_stopping.strategies.percentile: Early stoppinging trial 35: Trial objective value 2.8067931649448408 is worse than 50.0-th percentile (2.7698517136151226) across comparable trials..
[INFO 05-23 05:07:46] ax.early_stopping.strategies.percentile: Early stoppinging trial 36: Trial objective value 2.8426536074569335 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:07:46] ax.early_stopping.strategies.percentile: Early stoppinging trial 37: Trial objective value 2.4814228909399647 is worse than 50.0-th percentile (2.467465345112854) across comparable trials..
[INFO 05-23 05:07:46] ax.early_stopping.strategies.percentile: Early stoppinging trial 38: Trial objective value 2.814697376691976 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:07:51] ax.early_stopping.strategies.percentile: Early stoppinging trial 39: Trial objective value 2.841310144607435 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:07:52] ax.early_stopping.strategies.percentile: Early stoppinging trial 40: Trial objective value 2.487976829378484 is worse than 50.0-th percentile (2.472513087232102) across comparable trials..
[INFO 05-23 05:07:52] ax.early_stopping.strategies.percentile: Early stoppinging trial 41: Trial objective value 2.6321092857051633 is worse than 50.0-th percentile (2.4769679890860337) across comparable trials..
[INFO 05-23 05:07:57] ax.early_stopping.strategies.percentile: Early stoppinging trial 42: Trial objective value 2.842468916629395 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:07:57] ax.early_stopping.strategies.percentile: Early stoppinging trial 43: Trial objective value 2.489068218862704 is worse than 50.0-th percentile (2.4814228909399647) across comparable trials..
[INFO 05-23 05:07:57] ax.early_stopping.strategies.percentile: Early stoppinging trial 44: Trial objective value 2.7923749063195036 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:02] ax.early_stopping.strategies.percentile: Early stoppinging trial 45: Trial objective value 2.839590708604018 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:08:03] ax.early_stopping.strategies.percentile: Early stoppinging trial 46: Trial objective value 2.34913087499494 is worse than 50.0-th percentile (1.938796887815904) across comparable trials..
[INFO 05-23 05:08:03] ax.early_stopping.strategies.percentile: Early stoppinging trial 47: Trial objective value 2.6117392919255598 is worse than 50.0-th percentile (2.4814228909399647) across comparable trials..
[INFO 05-23 05:08:08] ax.early_stopping.strategies.percentile: Early stoppinging trial 48: Trial objective value 2.8427128362595226 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:09] ax.early_stopping.strategies.percentile: Early stoppinging trial 49: Trial objective value 2.3471392754045213 is worse than 50.0-th percentile (1.9879167317000985) across comparable trials..
[INFO 05-23 05:08:09] ax.early_stopping.strategies.percentile: Early stoppinging trial 50: Trial objective value 2.850999726361262 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:14] ax.early_stopping.strategies.percentile: Early stoppinging trial 51: Trial objective value 2.8283041960614215 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:08:14] ax.early_stopping.strategies.percentile: Early stoppinging trial 52: Trial objective value 2.331033526494839 is worse than 50.0-th percentile (2.0370365755842927) across comparable trials..
[INFO 05-23 05:08:14] ax.early_stopping.strategies.percentile: Early stoppinging trial 53: Trial objective value 2.602768107264358 is worse than 50.0-th percentile (2.478042324009382) across comparable trials..
[INFO 05-23 05:08:19] ax.early_stopping.strategies.percentile: Early stoppinging trial 54: Trial objective value 2.851243416406168 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:20] ax.early_stopping.strategies.percentile: Early stoppinging trial 55: Trial objective value 2.5057777300159993 is worse than 50.0-th percentile (2.4814228909399647) across comparable trials..
[INFO 05-23 05:08:20] ax.early_stopping.strategies.percentile: Early stoppinging trial 56: Trial objective value 2.8174615420084512 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:25] ax.early_stopping.strategies.percentile: Early stoppinging trial 57: Trial objective value 2.8391733279941547 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:08:25] ax.early_stopping.strategies.percentile: Early stoppinging trial 58: Trial objective value 2.494046418115243 is worse than 50.0-th percentile (2.4841772637738035) across comparable trials..
[INFO 05-23 05:08:26] ax.early_stopping.strategies.percentile: Early stoppinging trial 59: Trial objective value 2.62118433813093 is worse than 50.0-th percentile (2.4869316366076424) across comparable trials..
[INFO 05-23 05:08:31] ax.early_stopping.strategies.percentile: Early stoppinging trial 60: Trial objective value 2.8421190811109174 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:31] ax.early_stopping.strategies.percentile: Early stoppinging trial 61: Trial objective value 2.5476939827967664 is worse than 50.0-th percentile (2.487454232993063) across comparable trials..
[INFO 05-23 05:08:32] ax.early_stopping.strategies.percentile: Early stoppinging trial 62: Trial objective value 2.8117128653164305 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:37] ax.early_stopping.strategies.percentile: Early stoppinging trial 63: Trial objective value 2.8443323663306277 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:08:37] ax.early_stopping.strategies.percentile: Early stoppinging trial 64: Trial objective value 2.357455939716329 is worse than 50.0-th percentile (2.184035051039566) across comparable trials..
[INFO 05-23 05:08:38] ax.early_stopping.strategies.percentile: Early stoppinging trial 65: Trial objective value 2.638888097282676 is worse than 50.0-th percentile (2.487454232993063) across comparable trials..
[INFO 05-23 05:08:43] ax.early_stopping.strategies.percentile: Early stoppinging trial 66: Trial objective value 2.84847702003935 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:43] ax.early_stopping.strategies.percentile: Early stoppinging trial 67: Trial objective value 2.4955653140383216 is worse than 50.0-th percentile (2.487976829378484) across comparable trials..
[INFO 05-23 05:08:44] ax.early_stopping.strategies.percentile: Early stoppinging trial 68: Trial objective value 2.792154301252884 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:49] ax.early_stopping.strategies.percentile: Early stoppinging trial 69: Trial objective value 2.847014971224 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:08:49] ax.early_stopping.strategies.percentile: Early stoppinging trial 70: Trial objective value 2.338635788137748 is worse than 50.0-th percentile (2.331033526494839) across comparable trials..
[INFO 05-23 05:08:50] ax.early_stopping.strategies.percentile: Early stoppinging trial 71: Trial objective value 2.637628016207716 is worse than 50.0-th percentile (2.487976829378484) across comparable trials..
[INFO 05-23 05:08:56] ax.early_stopping.strategies.percentile: Early stoppinging trial 72: Trial objective value 2.853625995808975 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:08:57] ax.early_stopping.strategies.percentile: Early stoppinging trial 73: Trial objective value 2.201979116899227 is worse than 50.0-th percentile (1.823319670395968) across comparable trials..
[INFO 05-23 05:08:57] ax.early_stopping.strategies.percentile: Early stoppinging trial 74: Trial objective value 2.7874378273076417 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:09:02] ax.early_stopping.strategies.percentile: Early stoppinging trial 75: Trial objective value 2.8390152369475987 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:09:03] ax.early_stopping.strategies.percentile: Early stoppinging trial 76: Trial objective value 2.3335647631761014 is worse than 50.0-th percentile (2.331033526494839) across comparable trials..
[INFO 05-23 05:09:03] ax.early_stopping.strategies.percentile: Early stoppinging trial 77: Trial objective value 2.6117122590141495 is worse than 50.0-th percentile (2.487454232993063) across comparable trials..
[INFO 05-23 05:09:09] ax.early_stopping.strategies.percentile: Early stoppinging trial 78: Trial objective value 2.840779466999408 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:09:10] ax.early_stopping.strategies.percentile: Early stoppinging trial 79: Trial objective value 2.359461688958223 is worse than 50.0-th percentile (2.33229914483547) across comparable trials..
[INFO 05-23 05:09:10] ax.early_stopping.strategies.percentile: Early stoppinging trial 80: Trial objective value 2.7935866192638743 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:09:15] ax.early_stopping.strategies.percentile: Early stoppinging trial 81: Trial objective value 2.840872121035296 is worse than 50.0-th percentile (2.780839322757454) across comparable trials..
[INFO 05-23 05:09:15] ax.early_stopping.strategies.percentile: Early stoppinging trial 82: Trial objective value 2.4939192612303356 is worse than 50.0-th percentile (2.487454232993063) across comparable trials..
[INFO 05-23 05:09:16] ax.early_stopping.strategies.percentile: Early stoppinging trial 83: Trial objective value 2.6342657477999984 is worse than 50.0-th percentile (2.487976829378484) across comparable trials..
[INFO 05-23 05:09:21] ax.early_stopping.strategies.percentile: Early stoppinging trial 84: Trial objective value 2.8502663156355417 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
[INFO 05-23 05:09:22] ax.early_stopping.strategies.percentile: Early stoppinging trial 85: Trial objective value 2.4954021620586673 is worse than 50.0-th percentile (2.4885225241205937) across comparable trials..
[INFO 05-23 05:09:22] ax.early_stopping.strategies.percentile: Early stoppinging trial 86: Trial objective value 2.8189543966007804 is worse than 50.0-th percentile (2.7798485780644233) across comparable trials..
Step 6: Analyze Results
After running trials, you can analyze the results. Most commonly this means extracting the parameterization from the best performing trial you conducted.
best_parameters, prediction, index, name = client.get_best_parameterization()
print("Best Parameters:", best_parameters)
print("Prediction (mean, variance):", prediction)
Best Parameters: {'x1': 0.21862565420755356, 'x2': 0.21515374023453923, 'x3': 0.5777739372136484, 'x4': 0.2142151814718712, 'x5': 0.2990427880129937, 'x6': 0.4820732220850142}
Prediction (mean, variance): {'hartmann6': (np.float64(-2.3867584837501417), np.float64(0.002736964619028704))}
Step 7: Compute Analyses
Ax can also produce a number of analyses to help interpret the results of the experiment via client.compute_analyses. Users can manually select which analyses to run, or can allow Ax to select which would be most relevant. In this case Ax selects the following:
- Parallel Coordinates Plot shows which parameterizations were evaluated and what metric values were observed -- this is useful for getting a high-level overview of how thoroughly the search space was explored and which regions tend to produce which outcomes
- Progression Plot shows each partial observation observed by Ax for each trial as a timeseries
- Sensitivity Analysis Plot shows which parameters have the largest effect on the objective using Sobol indices
- Slice Plot shows how the model predicts a single parameter affects the objective, along with a confidence interval
- Contour Plot shows how the model predicts a pair of parameters affects the objective as a 2D surface
- Summary lists all trials generated along with their parameterizations, observations, and miscellaneous metadata
- Cross Validation helps to visualize how well the surrogate model is able to predict out-of-sample points
# display=True instructs Ax to sort then render the resulting analyses
cards = client.compute_analyses(display=True)
Parallel Coordinates for hartmann6
The parallel coordinates plot displays multi-dimensional data by representing each parameter as a parallel axis. This plot helps in assessing how thoroughly the search space has been explored and in identifying patterns or clusterings associated with high-performing (good) or low-performing (bad) arms. By tracing lines across the axes, one can observe correlations and interactions between parameters, gaining insights into the relationships that contribute to the success or failure of different configurations within the experiment.
Summary for Experiment
High-level summary of the Trials in this Experiment
| | trial_index | arm_name | trial_status | generation_node | hartmann6 | x1 | x2 | x3 | x4 | x5 | x6 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0_0 | COMPLETED | CenterOfSearchSpace | -0.505315 | 0.500000 | 0.500000 | 0.500000 | 0.500000 | 0.500000 | 0.500000 |
| 1 | 1 | 1_0 | COMPLETED | Sobol | -0.448461 | 0.278974 | 0.011999 | 0.128210 | 0.041009 | 0.122414 | 0.392427 |
| 2 | 2 | 2_0 | COMPLETED | Sobol | -0.016236 | 0.787707 | 0.856947 | 0.793565 | 0.681461 | 0.641045 | 0.890075 |
| 3 | 3 | 3_0 | COMPLETED | Sobol | -0.023335 | 0.715233 | 0.275096 | 0.464606 | 0.320532 | 0.896350 | 0.603352 |
| 4 | 4 | 4_0 | COMPLETED | Sobol | -0.219026 | 0.225899 | 0.617646 | 0.613360 | 0.961027 | 0.368076 | 0.116588 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 82 | 82 | 82_0 | EARLY_STOPPED | MBM | 2.493919 | 0.000000 | 0.109147 | 0.643146 | 0.219348 | 0.371157 | 0.255735 |
| 83 | 83 | 83_0 | EARLY_STOPPED | MBM | 2.634266 | 0.000000 | 0.187930 | 0.757017 | 0.227383 | 0.182994 | 0.288963 |
| 84 | 84 | 84_0 | EARLY_STOPPED | MBM | 2.850266 | 0.316604 | 0.175660 | 0.836886 | 0.212248 | 0.297437 | 0.236349 |
| 85 | 85 | 85_0 | EARLY_STOPPED | MBM | 2.495402 | 0.000000 | 0.108758 | 0.642354 | 0.219348 | 0.370623 | 0.255110 |
| 86 | 86 | 86_0 | EARLY_STOPPED | MBM | 2.818954 | 0.000000 | 0.188398 | 0.772137 | 0.227498 | 0.181037 | 0.280846 |
Sensitivity Analysis for hartmann6
Understand how each parameter affects hartmann6 according to a second-order sensitivity analysis.
x4 vs. hartmann6
The slice plot provides a one-dimensional view of predicted outcomes for hartmann6 as a function of a single parameter, while keeping all other parameters fixed at their status_quo value (or mean value if status_quo is unavailable). This visualization helps in understanding the sensitivity and impact of changes in the selected parameter on the predicted metric outcomes.
x5 vs. hartmann6
The slice plot provides a one-dimensional view of predicted outcomes for hartmann6 as a function of a single parameter, while keeping all other parameters fixed at their status_quo value (or mean value if status_quo is unavailable). This visualization helps in understanding the sensitivity and impact of changes in the selected parameter on the predicted metric outcomes.
x4, x5 vs. hartmann6
The contour plot visualizes the predicted outcomes for hartmann6 across a two-dimensional parameter space, with other parameters held fixed at their status_quo value (or mean value if status_quo is unavailable). This plot helps in identifying regions of optimal performance and understanding how changes in the selected parameters influence the predicted outcomes. Contour lines represent levels of constant predicted values, providing insights into the gradient and potential optima within the parameter space.
x2 vs. hartmann6
The slice plot provides a one-dimensional view of predicted outcomes for hartmann6 as a function of a single parameter, while keeping all other parameters fixed at their status_quo value (or mean value if status_quo is unavailable). This visualization helps in understanding the sensitivity and impact of changes in the selected parameter on the predicted metric outcomes.
x2, x4 vs. hartmann6
The contour plot visualizes the predicted outcomes for hartmann6 across a two-dimensional parameter space, with other parameters held fixed at their status_quo value (or mean value if status_quo is unavailable). This plot helps in identifying regions of optimal performance and understanding how changes in the selected parameters influence the predicted outcomes. Contour lines represent levels of constant predicted values, providing insights into the gradient and potential optima within the parameter space.
hartmann6 by progression
The progression plot tracks the evolution of each metric over the course of the experiment. This visualization is typically used to monitor the improvement of metrics over Trial iterations, but can also be useful in informing decisions about early stopping for Trials.
Cross Validation for hartmann6
The cross-validation plot displays the model fit for each metric in the experiment. It employs a leave-one-out approach, where the model is trained on all data except one sample, which is used for validation. The plot shows the predicted outcome for the validation set on the y-axis against its actual value on the x-axis. Points that align closely with the dotted diagonal line indicate a strong model fit, signifying accurate predictions. Additionally, the plot includes 95% confidence intervals that provide insight into the noise in observations and the uncertainty in model predictions. A horizontal, flat line of predictions indicates that the model has not picked up on sufficient signal in the data, and instead is just predicting the mean.
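The leave-one-out idea behind this plot can be sketched in a few lines. Here a simple mean predictor stands in for the surrogate model; this is purely illustrative and not how Ax computes its cross-validation:

```python
# Sketch of leave-one-out cross validation: for each observation, "train"
# on all the others and predict the held-out point. The stand-in "model"
# here is just the mean of the training observations.
def leave_one_out_predictions(observations):
    predictions = []
    for i in range(len(observations)):
        training = observations[:i] + observations[i + 1:]
        predictions.append(sum(training) / len(training))
    return predictions

# Each prediction pairs with its held-out actual value, as in the plot.
print(leave_one_out_predictions([1.0, 2.0, 3.0]))  # [2.5, 2.0, 1.5]
```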
Conclusion
This tutorial demonstrates Ax's early stopping capabilities, which utilize timeseries-like data to monitor the results of expensive evaluations and terminate those that are unlikely to produce promising results, freeing up resources to explore more configurations. This can be used in a number of applications, and is especially useful in machine learning contexts.