Scheduler
We recommend reading through the "Developer API" tutorial before getting started with the Scheduler, as using it in this tutorial will require an Ax Experiment and an understanding of the experiment's subcomponents like the search space and the runner.
Scheduler and external systems for trial evaluation

Scheduler is a closed-loop manager class in Ax that continuously deploys trial runs to an arbitrary external system in an asynchronous fashion, polls their status from that system, and leverages known trial results to generate more trials.
Key features of the Scheduler:

- leverages an Ax Experiment for optimization setup (an optimization config with metrics, a search space, a runner for trial evaluations),
- uses an Ax GenerationStrategy for flexible specification of an optimization algorithm used to generate new trials to run.

This scheme summarizes how the scheduler interacts with any external system used to run trial evaluations (shown below as a rough text sketch):
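GenerationStrategy --(generate new trials)--> Scheduler
Scheduler --(Runner.run(trial): deploy)--> external system
Scheduler <--(Runner.poll_trial_status: poll statuses)-- external system
Scheduler <--(Metric.fetch_trial_data: results of completed trials)-- external system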
An example of an 'external system' running trial evaluations could be a remote server executing scheduled jobs, a subprocess conducting ML training runs, an engine running physics simulations, etc. For the sake of example here, let us assume a dummy external system with the following client:
from random import randint
from time import time
from typing import Any, Dict, NamedTuple, Union
from ax.core.base_trial import TrialStatus
from ax.utils.measurement.synthetic_functions import branin
class MockJob(NamedTuple):
"""Dummy class to represent a job scheduled on `MockJobQueue`."""
id: int
parameters: Dict[str, Union[str, float, int, bool]]
class MockJobQueueClient:
"""Dummy class to represent a job queue where the Ax `Scheduler` will
deploy trial evaluation runs during optimization.
"""
jobs: Dict[int, MockJob] = {}
def schedule_job_with_parameters(
self, parameters: Dict[str, Union[str, float, int, bool]]
) -> int:
"""Schedules an evaluation job with given parameters and returns job ID."""
# Code to actually schedule the job and produce an ID would go here;
# using timestamp in microseconds as dummy ID for this example.
job_id = int(time() * 1e6)
self.jobs[job_id] = MockJob(job_id, parameters)
return job_id
def get_job_status(self, job_id: int) -> TrialStatus:
""" "Get status of the job by a given ID. For simplicity of the example,
return an Ax `TrialStatus`.
"""
job = self.jobs[job_id]
# Instead of randomizing trial status, code to check actual job status
# would go here.
if randint(0, 3) > 0:
return TrialStatus.COMPLETED
return TrialStatus.RUNNING
def get_outcome_value_for_completed_job(self, job_id: int) -> Dict[str, float]:
"""Get evaluation results for a given completed job."""
job = self.jobs[job_id]
# In a real external system, this would retrieve real relevant outcomes and
# not a synthetic function value.
return {"branin": branin(job.parameters.get("x1"), job.parameters.get("x2"))}
MOCK_JOB_QUEUE_CLIENT = MockJobQueueClient()
def get_mock_job_queue_client() -> MockJobQueueClient:
"""Obtain the singleton job queue instance."""
return MOCK_JOB_QUEUE_CLIENT
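As a quick, purely illustrative sanity check (not part of the original tutorial flow; the parameter values are arbitrary), the mock client can be exercised directly:
# Illustrative only: schedule a dummy job and inspect it via the client.
demo_client = get_mock_job_queue_client()
demo_job_id = demo_client.schedule_job_with_parameters({"x1": 1.0, "x2": 2.0})
print(demo_client.get_job_status(demo_job_id))  # randomly RUNNING or COMPLETED
print(demo_client.get_outcome_value_for_completed_job(demo_job_id))  # {"branin": ...}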
As mentioned above, using a Scheduler requires a fully set up experiment with metrics and a runner. Refer to the "Building Blocks of Ax" tutorial to learn more about those components, as here we assume familiarity with them.
The following runner and metric set up interactions between the Scheduler and the mock external system we assume:
from collections import defaultdict
from typing import Iterable, Set
from ax.core.base_trial import BaseTrial
from ax.core.runner import Runner
from ax.core.trial import Trial
class MockJobRunner(Runner): # Deploys trials to external system.
def run(self, trial: BaseTrial) -> Dict[str, Any]:
"""Deploys a trial based on custom runner subclass implementation.
Args:
trial: The trial to deploy.
Returns:
Dict of run metadata from the deployment process.
"""
if not isinstance(trial, Trial):
raise ValueError("This runner only handles `Trial`.")
mock_job_queue = get_mock_job_queue_client()
job_id = mock_job_queue.schedule_job_with_parameters(
parameters=trial.arm.parameters
)
# This run metadata will be attached to trial as `trial.run_metadata`
# by the base `Scheduler`.
return {"job_id": job_id}
def poll_trial_status(
self, trials: Iterable[BaseTrial]
) -> Dict[TrialStatus, Set[int]]:
"""Checks the status of any non-terminal trials and returns their
indices as a mapping from TrialStatus to a list of indices. Required
for runners used with Ax ``Scheduler``.
NOTE: Does not need to handle waiting between polling calls while trials
are running; this function should just perform a single poll.
Args:
trials: Trials to poll.
Returns:
A dictionary mapping TrialStatus to a set of trial indices that have
the respective status at the time of the polling. This does not need to
include trials that at the time of polling already have a terminal
(ABANDONED, FAILED, COMPLETED) status (but it may).
"""
status_dict = defaultdict(set)
for trial in trials:
mock_job_queue = get_mock_job_queue_client()
status = mock_job_queue.get_job_status(
job_id=trial.run_metadata.get("job_id")
)
status_dict[status].add(trial.index)
return status_dict
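For example, if trials 0 and 2 have completed while trial 1 is still running, this method would return {TrialStatus.COMPLETED: {0, 2}, TrialStatus.RUNNING: {1}} (illustrative values).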
import pandas as pd
from ax.core.metric import Metric
from ax.core.base_trial import BaseTrial
from ax.core.data import Data
class BraninForMockJobMetric(Metric): # Pulls data for trial from external system.
def fetch_trial_data(self, trial: BaseTrial) -> Data:
"""Obtains data via fetching it from ` for a given trial."""
if not isinstance(trial, Trial):
raise ValueError("This metric only handles `Trial`.")
mock_job_queue = get_mock_job_queue_client()
# Here we leverage the "job_id" metadata created by `MockJobRunner.run`.
branin_data = mock_job_queue.get_outcome_value_for_completed_job(
job_id=trial.run_metadata.get("job_id")
)
df_dict = {
"trial_index": trial.index,
"metric_name": "branin",
"arm_name": trial.arm.name,
"mean": branin_data.get("branin"),
# Can be set to 0.0 if function is known to be noiseless
# or to an actual value when SEM is known. Setting SEM to
# `None` results in Ax assuming unknown noise and inferring
# noise level from data.
"sem": None,
}
return Data(df=pd.DataFrame.from_records([df_dict]))
Now we can set up the experiment using the runner and metric we defined. This experiment will have a single-objective optimization config, minimizing the Branin function, and the search space that corresponds to that function.
from ax import *
def make_branin_experiment_with_runner_and_metric() -> Experiment:
parameters = [
RangeParameter(
name="x1",
parameter_type=ParameterType.FLOAT,
lower=-5,
upper=10,
),
RangeParameter(
name="x2",
parameter_type=ParameterType.FLOAT,
lower=0,
upper=15,
),
]
objective = Objective(metric=BraninForMockJobMetric(name="branin"), minimize=True)
return Experiment(
name="branin_test_experiment",
search_space=SearchSpace(parameters=parameters),
optimization_config=OptimizationConfig(objective=objective),
runner=MockJobRunner(),
is_test=True, # Marking this experiment as a test experiment.
)
experiment = make_branin_experiment_with_runner_and_metric()
[INFO 09-15 17:24:05] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False
A Scheduler also requires an Ax GenerationStrategy specifying the algorithm to use for the optimization. Here we use the choose_generation_strategy utility that auto-picks a generation strategy based on the search space properties. To construct a custom generation strategy instead, refer to the "Generation Strategy" tutorial.
Importantly, a generation strategy in Ax limits allowed parallelism levels for each generation step it contains. If you would like the Scheduler to enforce those parallelism limits, set max_parallelism on each generation step in your generation strategy, as in the sketch below.
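For example, a custom strategy with explicit per-step parallelism caps might look like the following sketch (a hypothetical configuration, not the one used in the rest of this tutorial):
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

# Hypothetical: 5 quasi-random Sobol trials (all 5 may run in parallel),
# then Bayesian optimization (GPEI) with at most 3 trials running at once.
parallelism_limited_gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=5, max_parallelism=5),
        GenerationStep(model=Models.GPEI, num_trials=-1, max_parallelism=3),
    ]
)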
from ax.modelbridge.dispatch_utils import choose_generation_strategy
generation_strategy = choose_generation_strategy(
search_space=experiment.search_space,
max_parallelism_cap=3,
)
[INFO 09-15 17:24:05] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters. [INFO 09-15 17:24:05] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 5 trials, GPEI for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting.
Now we have all the components needed to start the scheduler:
from ax.service.scheduler import Scheduler, SchedulerOptions
scheduler = Scheduler(
experiment=experiment,
generation_strategy=generation_strategy,
options=SchedulerOptions(),
)
[INFO 09-15 17:24:05] Scheduler: `Scheduler` requires experiment to have immutable search space and optimization config. Setting property immutable_search_space_and_opt_config to `True` on experiment.
from ax.service.utils.report_utils import get_figure_and_callback, get_standard_plots
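# `get_figure_and_callback` wraps a plot-producing function into a plotly
# figure plus a callback; the callback is meant to be passed to `run_n_trials`
# as `idle_callback`, so that the figure refreshes with the latest results
# while the scheduler waits on running trials.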
def make_plot(scheduler: Scheduler):
standard_plots = get_standard_plots(scheduler.experiment, None)
if len(standard_plots) == 0:
raise RuntimeError("Nothing to plot.")
return standard_plots[0]
fig, callback = get_figure_and_callback(make_plot)
Once the Scheduler instance is set up, the user can execute run_n_trials as many times as needed, and each execution will add up to the specified max_trials trials to the experiment. The number of trials actually run might be less than max_trials if the optimization is concluded early (e.g. there are no more points left in the search space).
display(fig)
scheduler.run_n_trials(max_trials=3, idle_callback=callback)
[INFO 09-15 17:24:06] Scheduler: Running trials [0]... [INFO 09-15 17:24:06] Scheduler: Running trials [1]... [INFO 09-15 17:24:07] Scheduler: Running trials [2]... [INFO 09-15 17:24:08] Scheduler: Retrieved COMPLETED trials: [0, 2]. [INFO 09-15 17:24:08] Scheduler: Fetching data for trials: [0, 2]. [INFO 09-15 17:24:08] Scheduler: Done submitting trials, waiting for remaining 1 running trials... [INFO 09-15 17:24:08] Scheduler: Retrieved COMPLETED trials: [1]. [INFO 09-15 17:24:08] Scheduler: Fetching data for trials: [1].
OptimizationResult()
We can examine experiment to see that it now has three trials:
from ax.service.utils.report_utils import exp_to_df
exp_to_df(experiment)
| | branin | trial_index | arm_name | x1 | x2 | trial_status | generation_method |
|---|---|---|---|---|---|---|---|
| 0 | 30.288225 | 0 | 0_0 | 0.878688 | 8.464314 | COMPLETED | Sobol |
| 1 | 86.043780 | 1 | 1_0 | 6.312591 | 9.252478 | COMPLETED | Sobol |
| 2 | 55.042141 | 2 | 2_0 | 3.090692 | 9.706364 | COMPLETED | Sobol |
Now we can run run_n_trials again to add three more trials to the experiment:
# create a new figure to prevent overwriting the first one
# this isn't necessary if you just care about the final results
second_fig, callback = get_figure_and_callback(make_plot)
display(second_fig)
scheduler.run_n_trials(max_trials=3, idle_callback=callback)
[INFO 09-15 17:24:09] Scheduler: Running trials [3]... [INFO 09-15 17:24:10] Scheduler: Running trials [4]... [INFO 09-15 17:24:11] Scheduler: Running trials [5]... [INFO 09-15 17:24:12] Scheduler: Retrieved COMPLETED trials: 4 - 5. [INFO 09-15 17:24:12] Scheduler: Fetching data for trials: 4 - 5. [INFO 09-15 17:24:12] Scheduler: Done submitting trials, waiting for remaining 1 running trials... [INFO 09-15 17:24:12] Scheduler: Retrieved COMPLETED trials: [3]. [INFO 09-15 17:24:12] Scheduler: Fetching data for trials: [3].
OptimizationResult()
Examining the experiment, we now see six trials, one of which was produced by Bayesian optimization (GPEI):
exp_to_df(experiment)
| | branin | trial_index | arm_name | x1 | x2 | trial_status | generation_method |
|---|---|---|---|---|---|---|---|
| 0 | 30.288225 | 0 | 0_0 | 0.878688 | 8.464314 | COMPLETED | Sobol |
| 1 | 86.043780 | 1 | 1_0 | 6.312591 | 9.252478 | COMPLETED | Sobol |
| 2 | 55.042141 | 2 | 2_0 | 3.090692 | 9.706364 | COMPLETED | Sobol |
| 3 | 91.114965 | 3 | 3_0 | 5.340370 | 9.872376 | COMPLETED | Sobol |
| 4 | 78.819018 | 4 | 4_0 | 8.922089 | 10.871861 | COMPLETED | Sobol |
| 5 | 32.170012 | 5 | 5_0 | -3.870197 | 8.678486 | COMPLETED | GPEI |
For each call to run_n_trials, one can specify a timeout; if run_n_trials has been running for too long without finishing its max_trials, the operation will exit gracefully:
third_fig, callback = get_figure_and_callback(make_plot)
display(third_fig)
scheduler.run_n_trials(max_trials=3, timeout_hours=0.00001, idle_callback=callback)
[INFO 09-15 17:24:13] Scheduler: Running trials [6]... [ERROR 09-15 17:24:13] Scheduler: Optimization timed out (timeout hours: 1e-05)! [INFO 09-15 17:24:13] Scheduler: `should_abort_optimization` is `True`, not running more trials. [INFO 09-15 17:24:13] Scheduler: Retrieved COMPLETED trials: [6]. [INFO 09-15 17:24:13] Scheduler: Fetching data for trials: [6]. [ERROR 09-15 17:24:13] Scheduler: Optimization timed out (timeout hours: 1e-05)!
OptimizationResult()
When a scheduler is SQL-enabled, it will automatically save all updates it makes to the experiment in the course of the optimization, so the experiment can be resumed in the event of a crash or after a pause.
To store the state of the optimization in an SQL backend, first follow the setup instructions on the Ax website. Having set up the SQL backend, pass DBSettings to the Scheduler on instantiation (note that the SQLAlchemy dependency will have to be installed – for installation, refer to the optional dependencies on the Ax website):
from ax.storage.registry_bundle import RegistryBundle
from ax.storage.sqa_store.db import create_all_tables, get_engine, init_engine_and_session_factory
from ax.storage.sqa_store.decoder import Decoder
from ax.storage.sqa_store.encoder import Encoder
from ax.storage.sqa_store.sqa_config import SQAConfig
from ax.storage.sqa_store.structs import DBSettings
bundle = RegistryBundle(
metric_clss={BraninForMockJobMetric: None},
runner_clss={MockJobRunner: None}
)
# URL is of the form "dialect+driver://username:password@host:port/database".
# Instead of URL, can provide a `creator function`; can specify custom encoders/decoders if necessary.
db_settings = DBSettings(
url="sqlite:///foo.db",
encoder=bundle.encoder,
decoder=bundle.decoder,
)
# The following lines are only necessary because this is the first time we are
# using this database; in practice, you will not need to run them every time
# you initialize your scheduler.
init_engine_and_session_factory(url=db_settings.url)
engine = get_engine()
create_all_tables(engine)
stored_experiment = make_branin_experiment_with_runner_and_metric()
generation_strategy = choose_generation_strategy(search_space=stored_experiment.search_space)
scheduler_with_storage = Scheduler(
experiment=stored_experiment,
generation_strategy=generation_strategy,
options=SchedulerOptions(),
db_settings=db_settings,
)
[INFO 09-15 17:24:13] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False [INFO 09-15 17:24:13] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters. [INFO 09-15 17:24:13] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 5 trials, GPEI for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting. [INFO 09-15 17:24:13] Scheduler: `Scheduler` requires experiment to have immutable search space and optimization config. Setting property immutable_search_space_and_opt_config to `True` on experiment. [INFO 09-15 17:24:13] ax.service.utils.with_db_settings_base: Experiment branin_test_experiment is not yet in DB, storing it. [INFO 09-15 17:24:13] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False [INFO 09-15 17:24:13] ax.service.utils.with_db_settings_base: Generation strategy Sobol+GPEI is not yet in DB, storing it.
To resume a stored experiment:
reloaded_experiment_scheduler = Scheduler.from_stored_experiment(
experiment_name="branin_test_experiment",
options=SchedulerOptions(),
# `DBSettings` are also required here so scheduler has access to the
# database, from which it needs to load the experiment.
db_settings=db_settings,
)
[INFO 09-15 17:24:13] ax.service.utils.with_db_settings_base: Loading experiment and generation strategy (with reduced state: True)... [INFO 09-15 17:24:13] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False [INFO 09-15 17:24:13] ax.service.utils.with_db_settings_base: Loaded experiment branin_test_experiment in 0.02 seconds, loading trials in mini-batches of 10000. [INFO 09-15 17:24:13] ax.service.utils.with_db_settings_base: Loaded generation strategy for experiment branin_test_experiment in 0.01 seconds. /home/runner/work/Ax/Ax/ax/storage/sqa_store/save.py:398: SAWarning: TypeDecorator JSONEncodedText() will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
With the newly reloaded experiment, the Scheduler can continue the optimization:
reloaded_experiment_scheduler.run_n_trials(max_trials=3)
[INFO 09-15 17:24:13] Scheduler: Running trials [0]... [INFO 09-15 17:24:13] Scheduler: Running trials [1]... [INFO 09-15 17:24:13] Scheduler: Running trials [2]... [INFO 09-15 17:24:13] Scheduler: Retrieved COMPLETED trials: 0 - 2. [INFO 09-15 17:24:13] Scheduler: Fetching data for trials: 0 - 2. /home/runner/work/Ax/Ax/ax/storage/sqa_store/save.py:398: SAWarning: TypeDecorator JSONEncodedText() will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
OptimizationResult()
Scheduler exposes many options to configure the exact settings of the closed-loop optimization it performs. A few notable ones are:

- trial_type –– currently only Trial and not BatchTrial is supported, but support for BatchTrial-s will follow,
- tolerated_trial_failure_rate and min_failed_trials_for_failure_rate_check –– together these two settings control how the scheduler monitors the failure rate among the trial runs it deploys. Once min_failed_trials_for_failure_rate_check trials have failed, the scheduler will start checking whether the ratio of failed to total trials is greater than tolerated_trial_failure_rate; if it is, the scheduler will exit the optimization with a FailureRateExceededError,
- ttl_seconds_for_trials –– sometimes a failure in a trial run means that it will be difficult to query its status (e.g. due to a crash). If this setting is specified, the Ax Experiment will automatically mark trials that have been running for too long (more than their 'time-to-live' (TTL) seconds) as failed,
- run_trials_in_batches –– if True, the scheduler will attempt to run trials not by calling Scheduler.run_trial in a loop, but by calling Scheduler.run_trials on all ready-to-deploy trials at once. This can save compute in cases where the deployment operation has a large overhead and deploying many trials at once is more efficient. Note that using this option successfully will require your scheduler subclass to implement MySchedulerSubclass.run_trials and MySchedulerSubclass.poll_available_capacity.

The rest of the options are described in the docstring below:
print(SchedulerOptions.__doc__)
Settings for a scheduler instance.

Attributes:
    max_pending_trials: Maximum number of pending trials the scheduler
        can have ``STAGED`` or ``RUNNING`` at once, required. If looking
        to use ``Runner.poll_available_capacity`` as a primary guide for
        how many trials should be pending at a given time, set this limit
        to a high number, as an upper bound on number of trials that
        should not be exceeded.
    trial_type: Type of trials (1-arm ``Trial`` or multi-arm ``BatchTrial``)
        that will be deployed using the scheduler. Defaults to 1-arm ``Trial``.
        NOTE: use ``BatchTrial`` only if need to evaluate multiple arms
        *together*, e.g. in an A/B-test influenced by data nonstationarity.
        For cases where just deploying multiple arms at once is beneficial
        but the trials are evaluated *independently*, implement ``run_trials``
        method in scheduler subclass, to deploy multiple 1-arm trials at
        the same time.
    batch_size: If using BatchTrial the number of arms to be generated and
        deployed per trial.
    total_trials: Limit on number of trials a given ``Scheduler`` should run.
        If no stopping criteria are implemented on a given scheduler,
        exhaustion of this number of trials will be used as default stopping
        criterion in ``Scheduler.run_all_trials``. Required to be non-null
        if using ``Scheduler.run_all_trials`` (not required for
        ``Scheduler.run_n_trials``).
    tolerated_trial_failure_rate: Fraction of trials in this optimization
        that are allowed to fail without the whole optimization ending.
        Expects value between 0 and 1. NOTE: Failure rate checks begin once
        min_failed_trials_for_failure_rate_check trials have failed; after
        that point if the ratio of failed trials to total trials ran so far
        exceeds the failure rate, the optimization will halt.
    min_failed_trials_for_failure_rate_check: The minimum number of trials
        that must fail in ``Scheduler`` in order to start checking failure
        rate.
    log_filepath: File, to which to write optimization logs.
    logging_level: Minimum level of logging statements to log, defaults to
        ``logging.INFO``.
    ttl_seconds_for_trials: Optional TTL for all trials created within this
        ``Scheduler``, in seconds. Trials that remain ``RUNNING`` for more
        than their TTL seconds will be marked ``FAILED`` once the TTL elapses
        and may be re-suggested by the Ax optimization models.
    init_seconds_between_polls: Initial wait between rounds of polling, in
        seconds. Relevant if using the default wait-for-completed-runs
        functionality of the base ``Scheduler`` (if
        ``wait_for_completed_trials_and_report_results`` is not overridden).
        With the default waiting, every time a poll returns that no trial
        evaluations completed, wait time will increase; once some completed
        trial evaluations are found, it will reset back to this value.
        Specify 0 to not introduce any wait between polls.
    min_seconds_before_poll: Minimum number of seconds between beginning to
        run a trial and the first poll to check trial status.
    timeout_hours: Number of hours after which the optimization will abort.
    seconds_between_polls_backoff_factor: The rate at which the poll interval
        increases.
    run_trials_in_batches: If True and ``poll_available_capacity`` is
        implemented to return non-null results, trials will be dispatched in
        groups via ``run_trials`` instead of one-by-one via ``run_trial``.
        This allows to save time, IO calls or computation in cases where
        dispatching trials in groups is more efficient than sequential
        deployment. The size of the groups will be determined as the minimum
        of ``self.poll_available_capacity()`` and the number of generator
        runs that the generation strategy is able to produce without more
        data or reaching its allowed max parallelism limit.
    debug_log_run_metadata: Whether to log run_metadata for debugging
        purposes.
    early_stopping_strategy: A ``BaseEarlyStoppingStrategy`` that determines
        whether a trial should be stopped given the current state of the
        experiment. Used in ``should_stop_trials_early``.
    global_stopping_strategy: A ``BaseGlobalStoppingStrategy`` that
        determines whether the full optimization should be stopped or not.
    suppress_storage_errors_after_retries: Whether to fully suppress SQL
        storage-related errors if encountered, after retrying the call
        multiple times. Only use if SQL storage is not important for the
        given use case, since this will only log, but not raise, an
        exception if it's encountered while saving to DB or loading from it.
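As a sketch, a few of these options can be combined like so (the specific values here are illustrative, not recommendations):
# Illustrative configuration of several of the options described above.
custom_options = SchedulerOptions(
    tolerated_trial_failure_rate=0.2,  # halt if >20% of trials fail...
    min_failed_trials_for_failure_rate_check=5,  # ...once at least 5 have failed
    ttl_seconds_for_trials=60 * 60,  # mark trials FAILED after 1 hour of RUNNING
    init_seconds_between_polls=1,  # initial wait between rounds of status polling
)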
The Scheduler can report the optimization result to an external system each time there are new completed trials, if a user-implemented subclass implements MySchedulerSubclass.report_results to do so. For example, the following method:
class MySchedulerSubclass(Scheduler):
...
def report_results(self):
write_to_external_database(len(self.experiment.trials))
return (True, {}) # Returns optimization success status and optional dict of outputs.
could be used to record the number of trials in the experiment so far in an external database.
Since report_results is an instance method, it has access to self.experiment and self.generation_strategy, which contain all the information about the state of the optimization thus far.
Using Scheduler.run_trials_and_yield_results to run the optimization via a generator method

In some systems it's beneficial to have greater control over Scheduler.run_n_trials than just starting it and having to wait for it to run all the way to completion before getting access to its output. For this purpose, the Scheduler implements a generator method, run_trials_and_yield_results, which yields the output of Scheduler.report_results each time there are new completed trials and can be used like so:
class ResultReportingScheduler(Scheduler):
def report_results(self):
return True, {
"trials so far": len(self.experiment.trials),
"currently producing trials from generation step": self.generation_strategy._curr.model_name,
"running trials": [t.index for t in self.running_trials],
}
experiment = make_branin_experiment_with_runner_and_metric()
scheduler = ResultReportingScheduler(
experiment=experiment,
generation_strategy=choose_generation_strategy(
search_space=experiment.search_space,
max_parallelism_cap=3,
),
options=SchedulerOptions(),
)
for reported_result in scheduler.run_trials_and_yield_results(max_trials=6):
print("Reported result: ", reported_result)
[INFO 09-15 17:24:14] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False [INFO 09-15 17:24:14] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters. [INFO 09-15 17:24:14] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 5 trials, GPEI for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting. [INFO 09-15 17:24:14] ResultReportingScheduler: `Scheduler` requires experiment to have immutable search space and optimization config. Setting property immutable_search_space_and_opt_config to `True` on experiment. [INFO 09-15 17:24:14] ResultReportingScheduler: Running trials [0]... [INFO 09-15 17:24:15] ResultReportingScheduler: Running trials [1]... [INFO 09-15 17:24:16] ResultReportingScheduler: Running trials [2]... [INFO 09-15 17:24:17] ResultReportingScheduler: Generated all trials that can be generated currently. Max parallelism currently reached. [INFO 09-15 17:24:17] ResultReportingScheduler: Retrieved COMPLETED trials: [1]. [INFO 09-15 17:24:17] ResultReportingScheduler: Fetching data for trials: [1]. [INFO 09-15 17:24:17] ResultReportingScheduler: Running trials [3]...
Reported result: (True, {'trials so far': 3, 'currently producing trials from generation step': 'Sobol', 'running trials': [0, 2]})
[INFO 09-15 17:24:18] ResultReportingScheduler: Generated all trials that can be generated currently. Max parallelism currently reached. [INFO 09-15 17:24:18] ResultReportingScheduler: Retrieved COMPLETED trials: [0, 2, 3]. [INFO 09-15 17:24:18] ResultReportingScheduler: Fetching data for trials: [0, 2, 3]. [INFO 09-15 17:24:18] ResultReportingScheduler: Running trials [4]...
Reported result: (True, {'trials so far': 4, 'currently producing trials from generation step': 'Sobol', 'running trials': []})
[INFO 09-15 17:24:19] ResultReportingScheduler: Running trials [5]... [INFO 09-15 17:24:20] ResultReportingScheduler: Retrieved COMPLETED trials: 4 - 5. [INFO 09-15 17:24:20] ResultReportingScheduler: Fetching data for trials: 4 - 5.
Reported result: (True, {'trials so far': 6, 'currently producing trials from generation step': 'GPEI', 'running trials': []}) Reported result: (True, {'trials so far': 6, 'currently producing trials from generation step': 'GPEI', 'running trials': []})
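Because run_trials_and_yield_results is a generator, the consuming code is also free to stop iterating early. The following sketch (not executed in this tutorial) breaks out of the loop once a condition on the reported results is met; no further trials are deployed afterwards, since the generator is never resumed:
for success, report in scheduler.run_trials_and_yield_results(max_trials=20):
    # Stop consuming results once 10 trials have been created; `success` and
    # `report` are the two elements returned by `report_results` above.
    if report["trials so far"] >= 10:
        break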
# Clean up to enable running the tutorial repeatedly with
# the same results. You wouldn't do this if you wanted to
# keep adding data to the same experiment.
from ax.storage.sqa_store.delete import delete_experiment
delete_experiment('branin_test_experiment')
Total runtime of script: 20 seconds.