
Automating Orchestration

Previously, we've demonstrated using Ax for ask-tell optimization, a paradigm in which we "ask" Ax for candidate configurations and "tell" Ax our observations. This can be effective in many scenarios, and it can be automated with ordinary flow-control statements like for and while loops (see the sketch after the list below). However, there are situations where it is beneficial to let Ax orchestrate the entire optimization: deploying trials to external systems, polling their status, and reading their results. This pattern is common in a number of real-world engineering tasks, including:

  • Large scale machine learning experiments running workloads on high-performance computing clusters
  • A/B tests conducted using an external experimentation platform
  • Materials science optimizations utilizing a self-driving laboratory
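
For contrast, a hand-rolled ask-tell loop looks roughly like the following. This is a sketch, not part of this tutorial's code: it assumes a configured Client and the hartmann6 function, both of which are defined later in this tutorial.

# A minimal ask-tell loop (sketch). Here the user's code, not Ax, drives
# deployment and data collection; `client` and `hartmann6` are assumed to
# be configured/defined as later in this tutorial.
for _ in range(30):
    trials = client.get_next_trials(max_trials=1)  # "ask"
    for trial_index, parameters in trials.items():
        result = hartmann6(**parameters)  # evaluate the configuration ourselves
        client.complete_trial(  # "tell"
            trial_index=trial_index, raw_data={"hartmann6": result}
        )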

Ax's Client can orchestrate automated adaptive experiments like this via its run_trials method. Users create custom classes implementing Ax's IMetric and IRunner protocols to handle data fetching and trial deployment, respectively. Then, users configure their Client as they normally would and call run_trials; Ax will deploy trials, fetch data, generate candidates, and repeat as necessary. Ax can manage complex orchestration tasks, including launching multiple trials in parallel while respecting a user-defined concurrency limit, and handling trial failures gracefully so that the experiment can continue even if some trials do not complete successfully or data fetching fails.

In this tutorial we will optimize the Hartmann6 function as before, but we will configure custom Runners and Metrics to mimic an external execution system. The Runner will calculate Hartmann6 with the appropriate parameters, write the result to a file, and tell Ax the trial is ready after 5 seconds. The Metric will find the appropriate file and report the results back to Ax.

Learning Objectives

  • Learn when it can be appropriate and/or advantageous to run Ax in a closed loop
  • Configure custom Runners and Metrics, allowing Ax to deploy trials and fetch data automatically
  • Understand tradeoffs between parallelism and optimization performance

Prerequisites

Step 1: Import Necessary Modules

First, ensure you have all the necessary imports:

import os
import time
from typing import Any, Mapping

import numpy as np
from ax.api.client import Client
from ax.api.configs import RangeParameterConfig
from ax.api.protocols.metric import IMetric
from ax.api.protocols.runner import IRunner, TrialStatus
from ax.api.types import TParameterization

Step 2: Defining our custom Runner and Metric

As stated before, we will be creating custom Runner and Metric classes to mimic an external system. Let's start by defining our Hartmann6 function as before.

# Hartmann6 function
def hartmann6(x1, x2, x3, x4, x5, x6):
    alpha = np.array([1.0, 1.2, 3.0, 3.2])
    A = np.array([
        [10, 3, 17, 3.5, 1.7, 8],
        [0.05, 10, 17, 0.1, 8, 14],
        [3, 3.5, 1.7, 10, 17, 8],
        [17, 8, 0.05, 10, 0.1, 14],
    ])
    P = 10**-4 * np.array([
        [1312, 1696, 5569, 124, 8283, 5886],
        [2329, 4135, 8307, 3736, 1004, 9991],
        [2348, 1451, 3522, 2883, 3047, 6650],
        [4047, 8828, 8732, 5743, 1091, 381],
    ])

    outer = 0.0
    for i in range(4):
        inner = 0.0
        for j, x in enumerate([x1, x2, x3, x4, x5, x6]):
            inner += A[i, j] * (x - P[i, j]) ** 2
        outer += alpha[i] * np.exp(-inner)
    return -outer

hartmann6(0.1, 0.45, 0.8, 0.25, 0.552, 1.0)
Output:
np.float64(-0.4878737485613134)

Next, we will define the MockRunner. The MockRunner requires two methods: run_trial and poll_trial.

run_trial deploys a trial to the external system with the given parameters. In this case, we will simply save a file containing the result of a call to the Hartmann6 function.

poll_trial queries the external system to see whether the trial has completed, failed, or is still running. In this mock example, we will check how many seconds have elapsed since run_trial was called and only report the trial as completed once 5 seconds have elapsed.

Runners may also optionally implement a stop_trial method to terminate a trial's execution before it has completed. This is necessary for early stopping in closed-loop experimentation; we will skip it in this tutorial, but a sketch follows.
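
For illustration only, a stop_trial for our mock system might look like the following. This is a sketch, not part of the tutorial's MockRunner; we assume the method receives the same trial metadata as poll_trial, and in a real system it would call the external system's cancel API.

# Sketch of an optional stop_trial (assumed to receive the same metadata
# as poll_trial). Deleting the result file "terminates" the mock trial.
def stop_trial(
    self, trial_index: int, trial_metadata: Mapping[str, Any]
) -> dict[str, Any]:
    file_name = trial_metadata["file_name"]
    if os.path.exists(file_name):
        os.remove(file_name)  # discard the mock trial's pending result
    return {"stopped_at": time.time()}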

class MockRunner(IRunner):
    def run_trial(
        self, trial_index: int, parameterization: TParameterization
    ) -> dict[str, Any]:
        file_name = f"{int(time.time())}.txt"

        x1 = parameterization["x1"]
        x2 = parameterization["x2"]
        x3 = parameterization["x3"]
        x4 = parameterization["x4"]
        x5 = parameterization["x5"]
        x6 = parameterization["x6"]

        result = hartmann6(x1, x2, x3, x4, x5, x6)

        with open(file_name, "w") as f:
            f.write(f"{result}")

        return {"file_name": file_name}

    def poll_trial(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> TrialStatus:
        file_name = trial_metadata["file_name"]
        # The file name is the Unix timestamp at deployment time; strip the
        # ".txt" suffix to recover it.
        time_elapsed = time.time() - int(file_name[:-4])

        if time_elapsed < 5:
            return TrialStatus.RUNNING

        return TrialStatus.COMPLETED

It's worthwhile to instantiate your Runner and test that it behaves as expected. Let's deploy a mock trial by manually calling run_trial and ensuring it creates a file.

runner = MockRunner()

trial_metadata = runner.run_trial(
    trial_index=-1,
    parameterization={
        "x1": 0.1,
        "x2": 0.45,
        "x3": 0.8,
        "x4": 0.25,
        "x5": 0.552,
        "x6": 1.0,
    },
)

os.path.exists(trial_metadata["file_name"])
Output:
True

Now, we will implement the Metric. Metrics only need to implement a fetch method, which returns a progression value (i.e. a step in a timeseries) and an observation value. Note that the observation can either be a simple float or a (mean, SEM) pair if the external system can report observed noise.

In this case, we have neither a relevant progression value nor observed noise, so we will simply read the file and report (0, value).

class MockMetric(IMetric):
    def fetch(
        self,
        trial_index: int,
        trial_metadata: Mapping[str, Any],
    ) -> tuple[int, float | tuple[float, float]]:
        file_name = trial_metadata["file_name"]

        with open(file_name, "r") as file:
            value = float(file.readline())

        return (0, value)

Again, let's validate the Metric created above by instantiating it and fetching the value from the file generated while testing the Runner.

# Note: all Metrics must have a name. This will become relevant when attaching metrics to the Client
hartmann6_metric = MockMetric(name="hartmann6")

hartmann6_metric.fetch(trial_index=-1, trial_metadata=trial_metadata)
Output:
(0, -0.4878737485613134)
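
As noted above, fetch may instead return a (mean, SEM) pair when the external system reports observed noise. A hypothetical variant of our Metric might look like the following sketch; the fixed SEM value is invented for illustration.

# Hypothetical noisy variant: return (progression, (mean, SEM)) so Ax can
# account for observed noise. The fixed SEM below is invented for the sketch.
class NoisyMockMetric(IMetric):
    def fetch(
        self,
        trial_index: int,
        trial_metadata: Mapping[str, Any],
    ) -> tuple[int, float | tuple[float, float]]:
        with open(trial_metadata["file_name"], "r") as file:
            mean = float(file.readline())
        sem = 0.1  # assumed noise level reported by the external system
        return (0, (mean, sem))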

Step 3: Initialize the Client and Configure the Experiment

Finally, we can initialize the Client and configure the experiment as before. This will be familiar to readers of the Getting Started with Ax tutorial -- the only difference is that we attach the previously defined Runner and Metric by calling configure_runner and configure_metrics, respectively.

Note that when initializing hartmann6_metric we set name="hartmann6", matching the objective we set in configure_optimization. The configure_metrics method uses this name to ensure that data fetched by this Metric is used correctly during the experiment. Be careful to set each Metric's name to reflect its use as an objective or outcome constraint.

client = Client()

# Define six float parameters for the Hartmann6 function
parameters = [
    RangeParameterConfig(name=f"x{i + 1}", parameter_type="float", bounds=(0, 1))
    for i in range(6)
]

client.configure_experiment(
    parameters=parameters,
    # The following arguments are only necessary when saving to the DB
    name="hartmann6_experiment",
    description="Optimization of the Hartmann6 function",
    owner="developer",
)
client.configure_optimization(objective="-hartmann6")
client.configure_runner(runner=runner)
client.configure_metrics(metrics=[hartmann6_metric])
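
The same name-matching rule applies to outcome constraints. Purely as a hypothetical illustration (the latency metric and its threshold are invented, and this would be configured on a Client that actually tracks such an outcome):

# Hypothetical: a constraint's metric name must exactly match the name of
# a Metric attached via configure_metrics.
latency_metric = MockMetric(name="latency")  # invented for illustration
client.configure_optimization(
    objective="-hartmann6",
    outcome_constraints=["latency <= 1.0"],
)
client.configure_metrics(metrics=[hartmann6_metric, latency_metric])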

Step 4: Run Trials

Once the Client has been configured, we can begin running trials.

Internally, Ax uses a class named Orchestrator (formerly Scheduler) to manage trial deployment, polling, data fetching, and candidate generation.

[Figure: Orchestrator state machine]

The run_trials method provides users with control over various orchestration settings, as well as the total maximum number of trials to evaluate:

  • parallelism defines the maximum number of trials that may run at once. If your external system supports parallel evaluation, increasing this number can significantly decrease experimentation time. However, as parallelism increases, optimization performance often decreases: adaptive experimentation methods rely on previously observed data for candidate generation, so the more trials that have been observed before a new candidate is generated, the more accurate Ax's model will be when generating that candidate.
  • tolerated_trial_failure_rate sets the proportion of trials that are allowed to fail before Ax raises an exception. Depending on how expensive a single trial is to evaluate, or how unreliable trials are expected to be, the experimenter may want to be notified as soon as a single trial fails, or may not care until more than half of all trials are failing. Set this value as appropriate for your context.
  • initial_seconds_between_polls sets how frequently trial status is checked and results are fetched. Set this low for trials that are expected to complete quickly, or high for trials that are expected to take a long time.
client.run_trials(
    max_trials=30,
    parallelism=3,
    tolerated_trial_failure_rate=0.1,
    initial_seconds_between_polls=1,
)
Output:
[INFO 01-22 17:46:00] ax.api.client: GenerationStrategy(name='Center+Sobol+MBM:fast', nodes=[CenterGenerationNode(next_node_name='Sobol'), GenerationNode(name='Sobol', generator_specs=[GeneratorSpec(generator_enum=Sobol, generator_key_override=None)], transition_criteria=[MinTrials(transition_to='MBM'), MinTrials(transition_to='MBM')]), GenerationNode(name='MBM', generator_specs=[GeneratorSpec(generator_enum=BoTorch, generator_key_override=None)], transition_criteria=[])]) chosen based on user input and problem structure.
[INFO 01-22 17:46:00] Orchestrator: Orchestrator requires experiment to have immutable search space and optimization config. Setting property immutable_search_space_and_opt_config to True on experiment.
[INFO 01-22 17:46:00] Orchestrator: Running trials [0]...
[INFO 01-22 17:46:01] Orchestrator: Running trials [1]...
[INFO 01-22 17:46:02] Orchestrator: Running trials [2]...
[INFO 01-22 17:46:03] Orchestrator: Retrieved COMPLETED trials: 0 - 2.
[INFO 01-22 17:46:03] Orchestrator: Running trials [3]...
[INFO 01-22 17:46:04] Orchestrator: Running trials [4]...
[INFO 01-22 17:46:06] Orchestrator: Running trials [5]...
[INFO 01-22 17:46:07] Orchestrator: Retrieved COMPLETED trials: 3 - 5.
[INFO 01-22 17:46:07] Orchestrator: Running trials [6]...
[INFO 01-22 17:46:09] Orchestrator: Running trials [7]...
[INFO 01-22 17:46:10] Orchestrator: Running trials [8]...
[INFO 01-22 17:46:11] Orchestrator: Retrieved COMPLETED trials: 6 - 8.
[INFO 01-22 17:46:11] Orchestrator: Running trials [9]...
[INFO 01-22 17:46:13] Orchestrator: Running trials [10]...
[INFO 01-22 17:46:14] Orchestrator: Running trials [11]...
[INFO 01-22 17:46:15] Orchestrator: Retrieved COMPLETED trials: 9 - 11.
[INFO 01-22 17:46:16] Orchestrator: Running trials [12]...
[INFO 01-22 17:46:17] Orchestrator: Running trials [13]...
[INFO 01-22 17:46:19] Orchestrator: Running trials [14]...
[INFO 01-22 17:46:20] Orchestrator: Retrieved COMPLETED trials: 12 - 14.
[INFO 01-22 17:46:20] Orchestrator: Running trials [15]...
[INFO 01-22 17:46:22] Orchestrator: Running trials [16]...
[INFO 01-22 17:46:23] Orchestrator: Running trials [17]...
[INFO 01-22 17:46:24] Orchestrator: Retrieved COMPLETED trials: 15 - 17.
[INFO 01-22 17:46:25] Orchestrator: Running trials [18]...
[INFO 01-22 17:46:26] Orchestrator: Running trials [19]...
[INFO 01-22 17:46:27] Orchestrator: Running trials [20]...
[INFO 01-22 17:46:28] Orchestrator: Retrieved COMPLETED trials: 18 - 20.
[INFO 01-22 17:46:29] Orchestrator: Running trials [21]...
[INFO 01-22 17:46:30] Orchestrator: Running trials [22]...
[INFO 01-22 17:46:32] Orchestrator: Running trials [23]...
[INFO 01-22 17:46:33] Orchestrator: Retrieved COMPLETED trials: 21 - 23.
[INFO 01-22 17:46:33] Orchestrator: Running trials [24]...
[INFO 01-22 17:46:35] Orchestrator: Running trials [25]...
[INFO 01-22 17:46:36] Orchestrator: Running trials [26]...
[INFO 01-22 17:46:37] Orchestrator: Retrieved COMPLETED trials: 24 - 26.
[INFO 01-22 17:46:38] Orchestrator: Running trials [27]...
[INFO 01-22 17:46:39] Orchestrator: Running trials [28]...
[INFO 01-22 17:46:40] Orchestrator: Running trials [29]...
[INFO 01-22 17:46:41] Orchestrator: Retrieved COMPLETED trials: 27 - 29.

Step 5: Analyze Results

As before, Ax can compute the best parameterization observed and produce a number of analyses to help interpret the results of the experiment.

It is also worth noting that the experiment can be resumed at any time using Ax's storage functionality. When configured to use a SQL database, the Client saves a snapshot of itself at various points throughout the call to run_trials, making it easy to continue optimization after an unexpected failure. You can learn more about storage in Ax here.
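
As a loose sketch of the resume workflow (the JSON snapshot helpers below are assumed from Ax's storage documentation; consult that page for the exact API and for the SQL-backed equivalent):

# Sketch only: snapshot the Client to disk, then restore it later, e.g.
# after a crash. Helper names are assumed from Ax's storage docs.
client.save_to_json_file("hartmann6_snapshot.json")

restored_client = Client.load_from_json_file("hartmann6_snapshot.json")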

best_parameters, prediction, index, name = client.get_best_parameterization()
print("Best Parameters:", best_parameters)
print("Prediction (mean, variance):", prediction)
Output:
Best Parameters: {'x1': 0.40642611409459417, 'x2': 0.9014106023130555, 'x3': 1.0, 'x4': 0.5823863995843687, 'x5': 0.0, 'x6': 0.0}
Prediction (mean, variance): {'hartmann6': (np.float64(-3.059961884952793), np.float64(0.0033639705085552938))}
# display=True instructs Ax to sort then render the resulting analyses
cards = client.compute_analyses(display=True)
Output:
[ERROR 01-22 17:46:43] ax.analysis.analysis: Failed to compute CrossValidationPlot
[ERROR 01-22 17:46:43] ax.analysis.analysis: Traceback (most recent call last):
File "/home/runner/work/Ax/Ax/ax/analysis/analysis.py", line 107, in compute_result
card = self.compute(
^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/analysis/plotly/cross_validation.py", line 134, in compute
cv_results = cross_validate(
^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/cross_validation.py", line 155, in cross_validate
return _fold_cross_validate(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/cross_validation.py", line 440, in _fold_cross_validate
cv_test_observations = t.transform_observations(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/transforms/base.py", line 147, in transform_observations
obs_data = self._transform_observation_data(observation_data=obs_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/transforms/winsorize.py", line 147, in _transform_observation_data
obsd.means[idx] = max(
~~~~~~~~~~^^^^^
ValueError: assignment destination is read-only
[ERROR 01-22 17:46:43] ax.analysis.analysis: Failed to compute PredictableMetricsAnalysis
[ERROR 01-22 17:46:43] ax.analysis.analysis: Traceback (most recent call last):
File "/home/runner/work/Ax/Ax/ax/analysis/analysis.py", line 107, in compute_result
card = self.compute(
^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/analysis/healthcheck/predictable_metrics.py", line 151, in compute
warning_message = warn_if_unpredictable_metrics(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/service/utils/report_utils.py", line 1467, in warn_if_unpredictable_metrics
model_fit_dict = compute_model_fit_metrics_from_adapter(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/cross_validation.py", line 780, in compute_model_fit_metrics_from_adapter
y_obs, y_pred, se_pred = predict_func(adapter=adapter, untransform=untransform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/cross_validation.py", line 903, in _predict_on_cross_validation_data
cv = cross_validate(adapter=adapter, untransform=untransform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/cross_validation.py", line 155, in cross_validate
return _fold_cross_validate(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/cross_validation.py", line 440, in _fold_cross_validate
cv_test_observations = t.transform_observations(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/transforms/base.py", line 147, in transform_observations
obs_data = self._transform_observation_data(observation_data=obs_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/Ax/Ax/ax/adapter/transforms/winsorize.py", line 147, in _transform_observation_data
obsd.means[idx] = max(
~~~~~~~~~~^^^^^
ValueError: assignment destination is read-only

Modeled Arm Effects on hartmann6

Modeled effects on hartmann6. This plot visualizes predictions of the true metric changes for each arm based on Ax's model: the delta you would expect if you (re-)ran that arm. This plot helps in anticipating the outcomes and performance of arms based on the model's predictions. Note that flat predictions across arms indicate that the model predicts no effect, meaning that if you were to re-run the experiment, the delta you would see would be small and fall within the confidence interval indicated in the plot.


Observed Arm Effects on hartmann6

Observed effects on hartmann6. This plot visualizes the effects from previously-run arms on a specific metric, providing insights into their performance. This plot allows one to compare and contrast the effectiveness of different arms, highlighting which configurations have yielded the most favorable outcomes.


Utility Progression

Shows the best hartmann6 value achieved so far across completed trials (objective is to minimize). The x-axis shows trace index, which counts completed or early-stopped trials sequentially (1, 2, 3, ...). This differs from trial index, which may have gaps if some trials failed or were abandoned. For example, if trials 0, 2, and 5 completed while trials 1, 3, and 4 failed, the trace indices would be 1, 2, 3 corresponding to trial indices 0, 2, 5. The y-axis shows cumulative best utility. Only improvements are plotted, so flat segments indicate trials that didn't surpass the previous best. Infeasible trials (violating outcome constraints) don't contribute to the improvements.


hartmann6 by progression

The progression plot tracks the evolution of each metric over the course of the experiment. This visualization is typically used to monitor the improvement of metrics over Trial iterations, but can also be useful in informing decisions about early stopping for Trials.


hartmann6 by wallclock time

The progression plot tracks the evolution of each metric over the course of the experiment. This visualization is typically used to monitor the improvement of metrics over Trial iterations, but can also be useful in informing decisions about early stopping for Trials.


Best Trial for Experiment

Displays the trial with the best objective value based on raw observations. This reflects actual measured performance during execution. This trial achieved the optimal objective value and represents the recommended configuration for your optimization goal. Only considering COMPLETED trials.

| trial_index | arm_name | trial_status | generation_node | hartmann6 | x1 | x2 | x3 | x4 | x5 | x6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 23 | 23_0 | COMPLETED | MBM | -3.119 | 0.406426 | 0.901411 | 1 | 0.582386 | 0 | 0 |

Summary for hartmann6_experiment

High-level summary of the Trials in this Experiment

| trial_index | arm_name | trial_status | generation_node | hartmann6 | x1 | x2 | x3 | x4 | x5 | x6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0_0 | COMPLETED | CenterOfSearchSpace | -0.505315 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| 1 | 1_0 | COMPLETED | Sobol | -0.007216 | 0.916166 | 0.134356 | 0.55681 | 0.868467 | 0.41926 | 0.310295 |
| 2 | 2_0 | COMPLETED | Sobol | -0.057388 | 0.000178 | 0.578484 | 0.004127 | 0.022738 | 0.594148 | 0.983505 |
| 3 | 3_0 | COMPLETED | Sobol | -0.034661 | 0.487133 | 0.454583 | 0.912753 | 0.507046 | 0.790582 | 0.571452 |
| 4 | 4_0 | COMPLETED | Sobol | -0.871425 | 0.586774 | 0.758502 | 0.39916 | 0.352725 | 0.223344 | 0.151898 |
| 5 | 5_0 | COMPLETED | MBM | -0.426613 | 0.412944 | 0.470323 | 0.269985 | 0.649872 | 0.416071 | 0.656547 |
| 6 | 6_0 | COMPLETED | MBM | -0.14333 | 0.604923 | 0.665316 | 0.270066 | 0.087778 | 0.245606 | 0.14411 |
| 7 | 7_0 | COMPLETED | MBM | -2.25051 | 0.466393 | 0.817335 | 0.516333 | 0.531544 | 0.112707 | 0.168812 |
| 8 | 8_0 | COMPLETED | MBM | -0.445463 | 0.676632 | 0.877414 | 0.285457 | 0.444281 | 0.413764 | 0.247696 |
| 9 | 9_0 | COMPLETED | MBM | -2.65178 | 0.43668 | 0.871051 | 0.527853 | 0.63328 | 0.010081 | 0.134213 |
| 10 | 10_0 | COMPLETED | MBM | -2.04704 | 0.417125 | 0.903861 | 0.545576 | 0.467452 | 0.173347 | 0.193675 |
| 11 | 11_0 | COMPLETED | MBM | -2.67529 | 0.435015 | 0.742365 | 0.729559 | 0.594269 | 0 | 0.044744 |
| 12 | 12_0 | COMPLETED | MBM | -2.30355 | 0.3571 | 0.949435 | 0.572057 | 0.72613 | 0 | 0.001593 |
| 13 | 13_0 | COMPLETED | MBM | -0.11966 | 0.340068 | 0.26637 | 0.027216 | 0.697516 | 0 | 0.009969 |
| 14 | 14_0 | COMPLETED | MBM | -2.10914 | 0.410971 | 1 | 0.739426 | 0 | 0 | 0.085723 |
| 15 | 15_0 | COMPLETED | MBM | -0.08899 | 0.318472 | 1 | 0.726188 | 0.57517 | 0 | 0.546891 |
| 16 | 16_0 | COMPLETED | MBM | -0.701025 | 0.690277 | 1 | 0.746546 | 0.577812 | 0 | 0 |
| 17 | 17_0 | COMPLETED | MBM | -0.173732 | 0 | 1 | 0.708595 | 0.560971 | 0 | 0 |
| 18 | 18_0 | COMPLETED | MBM | -1.70759 | 0.422496 | 0.795133 | 0.32194 | 0.807399 | 0 | 0.026956 |
| 19 | 19_0 | COMPLETED | MBM | -1.50923 | 0.40131 | 0.768067 | 1 | 0.821716 | 0 | 0.085993 |
| 20 | 20_0 | COMPLETED | MBM | -2.74662 | 0.421733 | 0.822256 | 0 | 0.651207 | 0 | 0 |
| 21 | 21_0 | COMPLETED | MBM | -3.0602 | 0.394255 | 0.884738 | 0.218508 | 0.578854 | 0 | 0 |
| 22 | 22_0 | COMPLETED | MBM | -3.0543 | 0.393369 | 0.878124 | 0.359105 | 0.595539 | 0.389181 | 0 |
| 23 | 23_0 | COMPLETED | MBM | -3.119 | 0.406426 | 0.901411 | 1 | 0.582386 | 0 | 0 |
| 24 | 24_0 | COMPLETED | MBM | -2.82661 | 0.391115 | 0.970407 | 1 | 0.557319 | 0.705635 | 0 |
| 25 | 25_0 | COMPLETED | MBM | -2.99527 | 0.360013 | 0.897694 | 1 | 0.545513 | 0 | 0 |
| 26 | 26_0 | COMPLETED | MBM | -2.74501 | 0.403777 | 1 | 0.279914 | 0.552794 | 0 | 0 |
| 27 | 27_0 | COMPLETED | MBM | -3.07187 | 0.380433 | 0.850049 | 1 | 0.573984 | 0.255275 | 0 |
| 28 | 28_0 | COMPLETED | MBM | -2.43718 | 0.340762 | 0.803716 | 0 | 0.535607 | 1 | 0 |
| 29 | 29_0 | COMPLETED | MBM | -0.353995 | 0.197639 | 0.552815 | 0 | 0.336651 | 0 | 0 |

Sensitivity Analysis for hartmann6

Understand how each parameter affects hartmann6 according to a second-order sensitivity analysis.


hartmann6 vs. x6

The slice plot provides a one-dimensional view of predicted outcomes for hartmann6 as a function of a single parameter, while keeping all other parameters fixed at their status_quo value (or mean value if status_quo is unavailable). This visualization helps in understanding the sensitivity and impact of changes in the selected parameter on the predicted metric outcomes.


hartmann6 vs. x4

The slice plot provides a one-dimensional view of predicted outcomes for hartmann6 as a function of a single parameter, while keeping all other parameters fixed at their status_quo value (or mean value if status_quo is unavailable). This visualization helps in understanding the sensitivity and impact of changes in the selected parameter on the predicted metric outcomes.


hartmann6 (Mean) vs. x2, x6

The contour plot visualizes the predicted outcomes for hartmann6 across a two-dimensional parameter space, with other parameters held fixed at their status_quo value (or mean value if status_quo is unavailable). This plot helps in identifying regions of optimal performance and understanding how changes in the selected parameters influence the predicted outcomes. Contour lines represent levels of constant predicted values, providing insights into the gradient and potential optima within the parameter space.


CrossValidationPlot Error

ValueError encountered while computing CrossValidationPlot.

Generation Strategy Graph

GenerationStrategy: Center+Sobol+MBM:fast

Visualize the structure of a GenerationStrategy as a directed graph. Each node represents a GenerationNode in the strategy, and edges represent transitions between nodes based on TransitionCriterion. Edge labels show the criterion class names that trigger the transition.

| node_name | generators | transitions | is_current |
| --- | --- | --- | --- |
| CenterOfSearchSpace | nan | -> Sobol: AutoTransitionAfterGen | False |
| Sobol | Sobol | -> MBM: MinTrials(5), MinTrials(2) | False |
| MBM | BoTorch | nan | True |

PredictableMetricsAnalysis Error

ValueError encountered while computing PredictableMetricsAnalysis.

Conclusion

This tutorial demonstrated how to use Ax's Client for closed-loop optimization, using the Hartmann6 function as an example. This style of optimization is useful when trials are evaluated on an external system, when experimenters wish to take advantage of parallel evaluation and trial-failure handling, or simply to manage long-running optimization tasks without human intervention. You can define your own Runner and Metric classes to communicate with whatever external systems you wish to interface with, and control orchestration behavior via the settings passed to run_trials.