ax.global_stopping

Strategies

Base Strategies

class ax.global_stopping.strategies.base.BaseGlobalStoppingStrategy(min_trials: int, inactive_when_pending_trials: bool = True)[source]

Bases: ABC, Base

Interface for strategies used to stop the optimization.

Note that this is different from BaseEarlyStoppingStrategy, whose role is to decide whether a trial with partial results available during evaluation should be stopped before fully completing. With global stopping, the decision is whether to stop the overall optimization altogether (e.g., because the expected marginal gains of running additional evaluations no longer justify the cost of running those trials).

estimate_global_stopping_savings(experiment: Experiment, num_remaining_requested_trials: int) float[source]

Estimate global stopping savings by considering the number of requested trials versus the number of trials run before the decision to stop was made.

This is formulated as 1 - (actual_num_trials / total_requested_trials); e.g., an estimated savings of 0.11 indicates we would expect the experiment to have used roughly 11% more resources had global stopping not been present.

Returns:

The estimated resource savings as a fraction of total resource usage.
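The savings formula above is simple enough to sketch directly. This is an illustrative stand-alone function mirroring the documented computation, not the actual method (which takes an Experiment and a number of remaining requested trials):

```python
def estimated_savings(actual_num_trials: int, total_requested_trials: int) -> float:
    """Savings fraction as documented: 1 - (actual / requested)."""
    return 1 - actual_num_trials / total_requested_trials

# Stopping after 89 of 100 requested trials yields an estimated savings of ~0.11,
# i.e. the experiment avoided roughly 11% of its requested trial budget.
savings = round(estimated_savings(89, 100), 2)
```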

should_stop_optimization(experiment: Experiment, **kwargs: Any) tuple[bool, str][source]

Decide whether to stop optimization.

Parameters:

experiment – Experiment that contains the trials and other contextual data.

Returns:

A tuple containing a boolean indicating whether the optimization should stop, and a string stating the reason for stopping.
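The contract of should_stop_optimization can be sketched with a minimal stand-in strategy that stops once a fixed trial budget is exhausted. This is a hypothetical class for illustration only: a real subclass would extend BaseGlobalStoppingStrategy and receive an Experiment rather than a raw trial count:

```python
from typing import Any


class TrialCountStoppingStrategy:
    """Illustrative stand-in mirroring the should_stop_optimization contract:
    return (should_stop, reason). Not an actual Ax class."""

    def __init__(self, min_trials: int, max_trials: int) -> None:
        self.min_trials = min_trials
        self.max_trials = max_trials

    def should_stop_optimization(
        self, num_completed_trials: int, **kwargs: Any
    ) -> tuple[bool, str]:
        # Never recommend stopping before the minimum number of trials has run.
        if num_completed_trials < self.min_trials:
            return False, (
                f"Only {num_completed_trials} trials completed; "
                f"minimum is {self.min_trials}."
            )
        # Stop once the trial budget is exhausted.
        if num_completed_trials >= self.max_trials:
            return True, f"Reached trial budget of {self.max_trials}."
        return False, "Trial budget not yet exhausted."
```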

ImprovementGlobalStoppingStrategy

class ax.global_stopping.strategies.improvement.ImprovementGlobalStoppingStrategy(min_trials: int, window_size: int = 5, improvement_bar: float = 0.1, inactive_when_pending_trials: bool = True)[source]

Bases: BaseGlobalStoppingStrategy

A Global Stopping Strategy which recommends stopping optimization if there is no significant improvement over recent iterations.

This stopping strategy recommends stopping if there is no significant improvement over the past window_size trials, among those that are feasible (i.e., that satisfy the outcome constraints). The meaning of a “significant” improvement differs between single-objective and multi-objective optimizations. For single-objective optimizations, improvement is measured as a fraction of the interquartile range (IQR) of the objective values seen so far. For multi-objective optimizations (MOO), improvement is measured as a fraction of the hypervolume obtained window_size iterations ago.
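The single-objective criterion can be sketched as follows. This is an illustrative approximation of the documented idea, assuming maximization over a plain list of objective values; the actual strategy operates on an Experiment and also handles the multi-objective hypervolume case:

```python
import statistics


def no_significant_improvement(
    values: list[float], window_size: int = 5, improvement_bar: float = 0.1
) -> bool:
    """Return True if the running best (maximization assumed) improved by
    less than improvement_bar * IQR over the last window_size trials."""
    if len(values) <= window_size:
        return False  # Not enough data to judge yet.
    # IQR of all objective values seen so far.
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    # Improvement of the running best over the last window_size trials.
    best_now = max(values)
    best_before = max(values[:-window_size])
    return (best_now - best_before) < improvement_bar * iqr
```

With defaults matching the constructor signature above (window_size=5, improvement_bar=0.1), a sequence that is still improving does not trigger stopping, while a plateau does.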

ax.global_stopping.strategies.improvement.constraint_satisfaction(trial: BaseTrial) bool[source]

Checks whether the outcome constraints in the optimization config of an experiment are satisfied by the given trial.

Parameters:

trial – A single-arm Trial at which we want to check the constraints.

Returns:

A boolean which is True iff all outcome constraints are satisfied.
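The feasibility check amounts to verifying every outcome constraint against the trial's observed metric values. A minimal sketch, using a hypothetical (metric_name, op, bound) tuple shape rather than Ax's OutcomeConstraint objects:

```python
def constraints_satisfied(
    metric_values: dict[str, float],
    outcome_constraints: list[tuple[str, str, float]],
) -> bool:
    """Return True iff every constraint holds. Each constraint is a
    (metric_name, op, bound) tuple with op in {"<=", ">="} -- an
    illustrative shape, not Ax's actual OutcomeConstraint API."""
    for name, op, bound in outcome_constraints:
        value = metric_values[name]
        if op == "<=" and value > bound:
            return False
        if op == ">=" and value < bound:
            return False
    return True
```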