The Ax Service API is designed to let the user control scheduling of trials and data computation while retaining an easy-to-use interface with Ax. The user iteratively queries the client for new trials, schedules their evaluation, and reports the resulting data back to the client.
from ax.service.ax_client import AxClient
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 12-26 23:30:05] ipy_plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
Create a client object to interface with Ax APIs. By default this runs locally without storage.
ax_client = AxClient()
[INFO 12-26 23:30:05] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 2 decimal points.
An experiment consists of a search space (parameters and parameter constraints) and an optimization configuration (objective name, minimization setting, and outcome constraints). Note that:
- The `name`, `parameters`, and `objective_name` arguments are required.
- `parameters` have the following required keys: "name" - parameter name; "type" - parameter type ("range", "choice" or "fixed"); "bounds" for range parameters; "values" for choice parameters; and "value" for fixed parameters.
- `parameters` can optionally include "value_type" ("int", "float", "bool" or "str"), a "log_scale" flag for range parameters, and an "is_ordered" flag for choice parameters.
- `parameter_constraints` should be a list of strings of the form "p1 >= p2" or "p1 + p2 <= some_bound".
- `outcome_constraints` should be a list of strings of the form "constrained_metric <= some_bound".

ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objective_name="hartmann6",
    minimize=True,  # Optional, defaults to False.
    parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    outcome_constraints=["l2norm <= 1.25"],  # Optional.
)
[INFO 12-26 23:30:05] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 6 arms, GPEI for subsequent arms], generated 0 arm(s) so far). Iterations after 6 will take longer to generate due to model-fitting.
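The parameter constraint passed above, "x1 + x2 <= 2.0", is a plain linear inequality over parameter values. As a rough sketch of what such a string expresses, here is an illustrative checker (not Ax's actual constraint parser, which also supports forms like "p1 >= p2"):

```python
def satisfies_linear_constraint(parameters, constraint):
    """Illustrative check of a constraint string like 'x1 + x2 <= 2.0'
    against a parameterization dict. Not Ax's actual parser."""
    lhs, bound = constraint.split("<=")
    total = sum(parameters[name.strip()] for name in lhs.split("+"))
    return total <= float(bound)

satisfies_linear_constraint({"x1": 0.5, "x2": 0.7}, "x1 + x2 <= 2.0")  # True
```

Ax itself enforces such constraints during candidate generation; this helper only shows how a single parameterization can be checked against one.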
When using Ax as a service, evaluation of parameterizations suggested by Ax is done either locally or, more commonly, using an external scheduler. Below is a dummy evaluation function that outputs data for two metrics, "hartmann6" and "l2norm". Note that all returned metrics must correspond to either the `objective_name` set on experiment creation or the metric names mentioned in `outcome_constraints`.
import numpy as np

def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}
The result of the evaluation should generally be a mapping of the format {metric_name -> (mean, SEM)}. If there is only one metric in the experiment – the objective – then the evaluation function can return a single tuple of mean and SEM, in which case Ax will assume that the evaluation corresponds to the objective. It can also return only the mean as a float, in which case Ax will treat the SEM as unknown and use a model that can infer it. For more details on the evaluation function, refer to the "Trial Evaluation" section in the Ax docs at ax.dev.
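The three accepted return formats described above can be sketched with a small normalizer (an illustrative helper, not part of the Ax API):

```python
def normalize_evaluation(result, objective_name):
    """Normalize the three accepted evaluation formats into
    {metric_name: (mean, sem)}; sem is None when unknown."""
    if isinstance(result, dict):
        return result  # Already {metric_name: (mean, SEM)}.
    if isinstance(result, tuple):
        mean, sem = result  # A bare (mean, SEM) is taken to be the objective.
        return {objective_name: (mean, sem)}
    # A bare float is the mean alone; the SEM is unknown and left to the model to infer.
    return {objective_name: (float(result), None)}

normalize_evaluation(-3.1, "hartmann6")  # {'hartmann6': (-3.1, None)}
```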
With the experiment set up, we can start the optimization loop. At each step, the user queries the client for a new trial and then submits the evaluation of that trial back to the client.
Note that Ax auto-selects an appropriate optimization algorithm based on the search space. For more advanced use cases that require a specific optimization algorithm, pass a `generation_strategy` argument into the `AxClient` constructor. Note that when Bayesian Optimization is used, generating new trials may take a few minutes.
for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
[INFO 12-26 23:30:05] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.31, 'x2': 0.41, 'x3': 0.58, 'x4': 0.62, 'x5': 0.83, 'x6': 0.97}. [INFO 12-26 23:30:05] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.06, 0.0), 'l2norm': (1.62, 0.0)}. [INFO 12-26 23:30:05] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.78, 'x2': 0.19, 'x3': 0.89, 'x4': 0.86, 'x5': 0.58, 'x6': 0.4}. [INFO 12-26 23:30:05] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.01, 0.0), 'l2norm': (1.63, 0.0)}. [INFO 12-26 23:30:05] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.51, 'x2': 0.66, 'x3': 0.42, 'x4': 0.95, 'x5': 0.44, 'x6': 0.7}. [INFO 12-26 23:30:05] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-0.02, 0.0), 'l2norm': (1.57, 0.0)}. [INFO 12-26 23:30:05] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.39, 'x2': 0.32, 'x3': 0.42, 'x4': 0.44, 'x5': 0.0, 'x6': 0.13}. [INFO 12-26 23:30:05] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.23, 0.0), 'l2norm': (0.8, 0.0)}. [INFO 12-26 23:30:05] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.89, 'x2': 0.36, 'x3': 0.61, 'x4': 0.56, 'x5': 0.6, 'x6': 0.72}. [INFO 12-26 23:30:05] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.09, 0.0), 'l2norm': (1.57, 0.0)}. [INFO 12-26 23:30:05] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.12, 'x2': 0.69, 'x3': 0.53, 'x4': 0.07, 'x5': 0.65, 'x6': 0.94}. [INFO 12-26 23:30:05] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.21, 0.0), 'l2norm': (1.44, 0.0)}. [INFO 12-26 23:30:11] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.35, 'x2': 0.32, 'x3': 0.38, 'x4': 0.25, 'x5': 0.0, 'x6': 0.09}. [INFO 12-26 23:30:11] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-0.13, 0.0), 'l2norm': (0.67, 0.0)}. 
[INFO 12-26 23:30:16] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.38, 'x2': 0.29, 'x3': 0.42, 'x4': 0.57, 'x5': 0.0, 'x6': 0.09}. [INFO 12-26 23:30:16] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-0.2, 0.0), 'l2norm': (0.86, 0.0)}. [INFO 12-26 23:30:21] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.43, 'x2': 0.37, 'x3': 0.47, 'x4': 0.47, 'x5': 0.0, 'x6': 0.27}. [INFO 12-26 23:30:21] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.28, 0.0), 'l2norm': (0.92, 0.0)}. [INFO 12-26 23:30:26] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.57, 'x2': 0.38, 'x3': 0.51, 'x4': 0.46, 'x5': 0.0, 'x6': 0.28}. [INFO 12-26 23:30:26] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.19, 0.0), 'l2norm': (1.0, 0.0)}. [INFO 12-26 23:30:30] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.35, 'x2': 0.39, 'x3': 0.49, 'x4': 0.48, 'x5': 0.0, 'x6': 0.34}. [INFO 12-26 23:30:30] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.3, 0.0), 'l2norm': (0.93, 0.0)}. [INFO 12-26 23:30:33] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.35, 'x2': 0.43, 'x3': 0.37, 'x4': 0.49, 'x5': 0.0, 'x6': 0.35}. [INFO 12-26 23:30:33] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (-0.3, 0.0), 'l2norm': (0.9, 0.0)}. [INFO 12-26 23:30:37] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.35, 'x2': 0.53, 'x3': 0.46, 'x4': 0.49, 'x5': 0.0, 'x6': 0.28}. [INFO 12-26 23:30:37] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-0.54, 0.0), 'l2norm': (0.96, 0.0)}. [INFO 12-26 23:30:41] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.32, 'x2': 0.58, 'x3': 0.5, 'x4': 0.5, 'x5': 0.0, 'x6': 0.22}. [INFO 12-26 23:30:41] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-0.84, 0.0), 'l2norm': (0.99, 0.0)}. 
[INFO 12-26 23:30:45] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.29, 'x2': 0.61, 'x3': 0.54, 'x4': 0.52, 'x5': 0.0, 'x6': 0.16}. [INFO 12-26 23:30:45] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-1.14, 0.0), 'l2norm': (1.02, 0.0)}. [INFO 12-26 23:30:49] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.23, 'x2': 0.67, 'x3': 0.59, 'x4': 0.55, 'x5': 0.0, 'x6': 0.08}. [INFO 12-26 23:30:49] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-1.31, 0.0), 'l2norm': (1.08, 0.0)}. [INFO 12-26 23:30:52] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.32, 'x2': 0.71, 'x3': 0.54, 'x4': 0.55, 'x5': 0.0, 'x6': 0.04}. [INFO 12-26 23:30:52] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-2.17, 0.0), 'l2norm': (1.09, 0.0)}. [INFO 12-26 23:30:57] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.37, 'x2': 0.76, 'x3': 0.51, 'x4': 0.56, 'x5': 0.0, 'x6': 0.0}. [INFO 12-26 23:30:57] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-2.71, 0.0), 'l2norm': (1.14, 0.0)}. [INFO 12-26 23:31:02] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.43, 'x2': 0.82, 'x3': 0.49, 'x4': 0.57, 'x5': 0.0, 'x6': 0.0}. [INFO 12-26 23:31:02] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-3.0, 0.0), 'l2norm': (1.19, 0.0)}. [INFO 12-26 23:31:07] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.44, 'x2': 0.89, 'x3': 0.42, 'x4': 0.5, 'x5': 0.0, 'x6': 0.0}. [INFO 12-26 23:31:07] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-2.87, 0.0), 'l2norm': (1.18, 0.0)}. [INFO 12-26 23:31:12] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 0.49, 'x2': 0.83, 'x3': 0.5, 'x4': 0.57, 'x5': 0.0, 'x6': 0.0}. [INFO 12-26 23:31:12] ax.service.ax_client: Completed trial 20 with data: {'hartmann6': (-2.67, 0.0), 'l2norm': (1.23, 0.0)}. 
[INFO 12-26 23:31:17] ax.service.ax_client: Generated new trial 21 with parameters {'x1': 0.4, 'x2': 0.87, 'x3': 0.46, 'x4': 0.63, 'x5': 0.0, 'x6': 0.0}. [INFO 12-26 23:31:17] ax.service.ax_client: Completed trial 21 with data: {'hartmann6': (-3.0, 0.0), 'l2norm': (1.23, 0.0)}. [INFO 12-26 23:31:22] ax.service.ax_client: Generated new trial 22 with parameters {'x1': 0.41, 'x2': 0.86, 'x3': 0.52, 'x4': 0.57, 'x5': 0.0, 'x6': 0.0}. [INFO 12-26 23:31:22] ax.service.ax_client: Completed trial 22 with data: {'hartmann6': (-3.1, 0.0), 'l2norm': (1.23, 0.0)}. [INFO 12-26 23:31:24] ax.service.ax_client: Generated new trial 23 with parameters {'x1': 0.41, 'x2': 0.87, 'x3': 0.54, 'x4': 0.55, 'x5': 0.06, 'x6': 0.0}. [INFO 12-26 23:31:24] ax.service.ax_client: Completed trial 23 with data: {'hartmann6': (-3.09, 0.0), 'l2norm': (1.23, 0.0)}. [INFO 12-26 23:31:26] ax.service.ax_client: Generated new trial 24 with parameters {'x1': 0.4, 'x2': 0.87, 'x3': 0.55, 'x4': 0.55, 'x5': 0.01, 'x6': 0.05}. [INFO 12-26 23:31:26] ax.service.ax_client: Completed trial 24 with data: {'hartmann6': (-3.15, 0.0), 'l2norm': (1.23, 0.0)}.
To view all trials in a data frame at any point during optimization:
ax_client.get_trials_data_frame().sort_values('trial_index')
  | arm_name | hartmann6 | l2norm | trial_index | x1 | x2 | x3 | x4 | x5 | x6
---|---|---|---|---|---|---|---|---|---|---
0 | 0_0 | -0.0606299 | 1.61759 | 0 | 0.310906 | 0.407444 | 0.582223 | 0.617649 | 8.295004e-01 | 9.723035e-01 |
3 | 1_0 | -0.00572623 | 1.63184 | 1 | 0.781468 | 0.187781 | 0.888098 | 0.855807 | 5.820131e-01 | 3.963242e-01 |
15 | 2_0 | -0.0163962 | 1.56872 | 2 | 0.507982 | 0.656542 | 0.423899 | 0.954383 | 4.363919e-01 | 7.005734e-01 |
18 | 3_0 | -0.233885 | 0.80015 | 3 | 0.385936 | 0.316529 | 0.423110 | 0.442906 | 1.759466e-03 | 1.261393e-01 |
19 | 4_0 | -0.0918833 | 1.57429 | 4 | 0.892442 | 0.355338 | 0.605893 | 0.555304 | 5.992248e-01 | 7.218899e-01 |
20 | 5_0 | -0.214311 | 1.43647 | 5 | 0.117173 | 0.687110 | 0.528716 | 0.071017 | 6.469737e-01 | 9.351199e-01 |
21 | 6_0 | -0.132802 | 0.666084 | 6 | 0.349495 | 0.322772 | 0.381869 | 0.250479 | 1.292763e-04 | 9.368020e-02 |
22 | 7_0 | -0.200459 | 0.859587 | 7 | 0.377313 | 0.288288 | 0.420522 | 0.573758 | 2.045889e-16 | 8.588981e-02 |
23 | 8_0 | -0.279728 | 0.915493 | 8 | 0.429584 | 0.373276 | 0.465343 | 0.474229 | 8.915081e-18 | 2.698401e-01 |
24 | 9_0 | -0.190884 | 1.00393 | 9 | 0.565577 | 0.375114 | 0.507565 | 0.459624 | 1.071711e-16 | 2.800346e-01 |
1 | 10_0 | -0.295634 | 0.933772 | 10 | 0.348957 | 0.394733 | 0.494081 | 0.481202 | 7.790745e-18 | 3.444897e-01 |
2 | 11_0 | -0.300362 | 0.900709 | 11 | 0.352322 | 0.434823 | 0.366225 | 0.489821 | 0.000000e+00 | 3.521782e-01 |
4 | 12_0 | -0.541619 | 0.962302 | 12 | 0.350319 | 0.527707 | 0.455618 | 0.488006 | 0.000000e+00 | 2.812296e-01 |
5 | 13_0 | -0.840437 | 0.991107 | 13 | 0.319639 | 0.575055 | 0.498960 | 0.501695 | 0.000000e+00 | 2.208544e-01 |
6 | 14_0 | -1.1364 | 1.0215 | 14 | 0.286135 | 0.612375 | 0.539528 | 0.519865 | 3.278844e-17 | 1.588770e-01 |
7 | 15_0 | -1.31173 | 1.07973 | 15 | 0.232671 | 0.670582 | 0.594063 | 0.549815 | 2.331421e-14 | 8.236678e-02 |
8 | 16_0 | -2.16881 | 1.09274 | 16 | 0.317195 | 0.706251 | 0.539044 | 0.550333 | 3.693922e-17 | 3.525614e-02 |
9 | 17_0 | -2.70805 | 1.13871 | 17 | 0.369063 | 0.761882 | 0.514576 | 0.561436 | 0.000000e+00 | 0.000000e+00 |
10 | 18_0 | -2.99695 | 1.19499 | 18 | 0.427990 | 0.823595 | 0.486812 | 0.574039 | 1.968971e-13 | 5.488203e-13 |
11 | 19_0 | -2.87123 | 1.18322 | 19 | 0.436866 | 0.887870 | 0.416639 | 0.497246 | 3.693059e-09 | 6.385638e-09 |
12 | 20_0 | -2.67388 | 1.22702 | 20 | 0.491638 | 0.829262 | 0.504835 | 0.566858 | 4.156778e-14 | 3.156853e-14 |
13 | 21_0 | -3.00049 | 1.23231 | 21 | 0.396128 | 0.868620 | 0.457594 | 0.630693 | 0.000000e+00 | 3.061767e-14 |
14 | 22_0 | -3.10161 | 1.22629 | 22 | 0.407784 | 0.862140 | 0.522871 | 0.566412 | 2.218278e-16 | 6.731504e-16 |
16 | 23_0 | -3.0915 | 1.22998 | 23 | 0.405211 | 0.866550 | 0.542770 | 0.547067 | 6.213968e-02 | 1.770796e-13 |
17 | 24_0 | -3.14594 | 1.23316 | 24 | 0.404800 | 0.868307 | 0.546848 | 0.548325 | 1.473532e-02 | 5.418825e-02 |
Once the optimization loop is complete, we can access the best parameters found, as well as the corresponding metric values.
best_parameters, values = ax_client.get_best_parameters()
best_parameters
{'x1': 0.4077836769973997, 'x2': 0.8621399321206178, 'x3': 0.5228708230436103, 'x4': 0.5664115601602521, 'x5': 2.218277644905342e-16, 'x6': 6.731504173344668e-16}
means, covariances = values
means
{'hartmann6': -3.101606599293558, 'l2norm': 1.2262903371818101}
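Note that the returned parameterization matches trial 22 (raw mean -3.10) rather than trial 24, whose raw mean (-3.15) in the table above is slightly lower; both satisfy the l2norm constraint. A plausible explanation, inferred from this output rather than stated in this tutorial, is that best-point selection can rely on model predictions of the metrics rather than on raw means alone. Picking the lowest raw mean from the table would give a different answer:

```python
# Raw objective means for the last few trials, copied from the table above.
raw_means = {20: -2.67388, 21: -3.00049, 22: -3.10161, 23: -3.09150, 24: -3.14594}
best_by_raw_mean = min(raw_means, key=raw_means.get)  # trial 24
```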
For comparison, the known Hartmann6 global minimum:
hartmann6.fmin
-3.32237
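For readers who want to check this value without Ax installed, the six-dimensional Hartmann function can be written directly from its published constants. This is a standalone sketch using the widely cited benchmark constants, not Ax's implementation:

```python
import math

# Standard Hartmann6 constants from the optimization-benchmark literature.
ALPHA = [1.0, 1.2, 3.0, 3.2]
A = [
    [10, 3, 17, 3.5, 1.7, 8],
    [0.05, 10, 17, 0.1, 8, 14],
    [3, 3.5, 1.7, 10, 17, 8],
    [17, 8, 0.05, 10, 0.1, 14],
]
P = [
    [0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886],
    [0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991],
    [0.2348, 0.1451, 0.3522, 0.2883, 0.3047, 0.6650],
    [0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381],
]

def hartmann6_standalone(x):
    """Six-dimensional Hartmann function; global minimum is about -3.32237."""
    return -sum(
        ALPHA[i] * math.exp(-sum(A[i][j] * (x[j] - P[i][j]) ** 2 for j in range(6)))
        for i in range(4)
    )

# Known minimizer reported in the benchmark literature:
x_star = [0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573]
```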
Here we plot the default contour: parameters "x1" and "x2" against the objective metric, "hartmann6".
render(ax_client.get_contour_plot())
[INFO 12-26 23:31:26] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
We can also retrieve a contour plot for the other metric, "l2norm" – say we are interested in seeing the response surface for parameters "x3" and "x4" for this one.
render(ax_client.get_contour_plot(param_x="x3", param_y="x4", metric_name="l2norm"))
[INFO 12-26 23:31:28] ax.service.ax_client: Retrieving contour plot with parameter 'x3' on X-axis and 'x4' on Y-axis, for metric 'l2norm'. Remaining parameters are affixed to the middle of their range.
Here we plot the optimization trace, showing the progression of finding the point with the optimal objective:
render(ax_client.get_optimization_trace(objective_optimum=hartmann6.fmin)) # Objective_optimum is optional.
We can serialize the state of optimization to JSON and save it to a .json file, or save it to the SQL backend. For the former:
ax_client.save_to_json_file() # For custom filepath, pass `filepath` argument.
[INFO 12-26 23:31:30] ax.service.ax_client: Saved JSON-serialized state of optimization to `ax_client_snapshot.json`.
restored_ax_client = AxClient.load_from_json_file() # For custom filepath, pass `filepath` argument.
[INFO 12-26 23:31:31] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 2 decimal points.
To store the state of optimization to an SQL backend, first follow the setup instructions on the Ax website. Having set up the SQL backend, pass `DBSettings` to `AxClient` on instantiation (note that the `SQLAlchemy` dependency will have to be installed – for installation, refer to optional dependencies on the Ax website):
from ax.storage.sqa_store.structs import DBSettings

# URL is of the form "dialect+driver://username:password@host:port/database".
db_settings = DBSettings(url="postgresql+psycopg2://sarah:c82i94d@localhost:5432/foobar")
# Instead of a URL, a `creator` function can be provided; custom encoders/decoders can be specified if necessary.
new_ax = AxClient(db_settings=db_settings)
[INFO 12-26 23:31:31] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 2 decimal points.
When valid `DBSettings` are passed into `AxClient`, a unique experiment name is a required argument (`name`) to `ax_client.create_experiment`. The state of the optimization is auto-saved any time it changes (i.e. a new trial is added or completed, etc.).
To reload an optimization state later, instantiate `AxClient` with the same `DBSettings` and use `ax_client.load_experiment_from_database(experiment_name="my_experiment")`.
Evaluation failure: should any optimization iterations fail during evaluation, `log_trial_failure` will ensure that the same trial is not proposed again.
_, trial_index = ax_client.get_next_trial()
ax_client.log_trial_failure(trial_index=trial_index)
[INFO 12-26 23:31:32] ax.service.ax_client: Generated new trial 25 with parameters {'x1': 0.39, 'x2': 0.71, 'x3': 0.5, 'x4': 0.77, 'x5': 0.79, 'x6': 0.03}. [INFO 12-26 23:31:32] ax.service.ax_client: Registered failure of trial 25.
Adding custom trials: should there be a need to evaluate a specific parameterization, `attach_trial` will add it to the experiment.
ax_client.attach_trial(parameters={"x1": 9.0, "x2": 9.0, "x3": 9.0, "x4": 9.0, "x5": 9.0, "x6": 9.0})
[INFO 12-26 23:31:32] ax.service.ax_client: Attached custom parameterization {'x1': 9.0, 'x2': 9.0, 'x3': 9.0, 'x4': 9.0, 'x5': 9.0, 'x6': 9.0} as trial 26.
({'x1': 9.0, 'x2': 9.0, 'x3': 9.0, 'x4': 9.0, 'x5': 9.0, 'x6': 9.0}, 26)
Need to run many trials in parallel: for optimal results and optimization efficiency, we strongly recommend sequential optimization (generating a few trials, then waiting for them to be completed with evaluation data). However, if your use case needs to dispatch many trials in parallel before they are updated with data and you are running into the "All trials for current model have been generated, but not enough data has been observed to fit next model" error, instantiate `AxClient` as `AxClient(enforce_sequential_optimization=False)`.
Total runtime of script: 1 minutes, 29.79 seconds.