ax.global_stopping

Strategies

Base Strategies

class ax.global_stopping.strategies.base.BaseGlobalStoppingStrategy(min_trials: int, inactive_when_pending_trials: bool = True)[source]

Bases: ABC, Base

Interface for strategies used to stop the optimization.

Note that this is different from BaseEarlyStoppingStrategy, whose purpose is to decide whether a trial with partial results available during evaluation should be stopped before fully completing. In global stopping, the decision is whether to stop the overall optimization altogether (e.g. because the expected marginal gains of running additional evaluations do not justify the cost of running those trials).

abstract should_stop_optimization(experiment: Experiment, **kwargs: Any) → Tuple[bool, str][source]

Decide whether to stop optimization.

Typical examples include stopping the optimization loop when the objective no longer appears to be improving.

Parameters:

experiment – Experiment that contains the trials and other contextual data.

Returns:

A Tuple with a boolean determining whether the optimization should stop, and a str declaring the reason for stopping.
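
As a minimal sketch of how a concrete strategy might implement this interface, consider the hypothetical subclass below; the class name, the max_trials parameter, and the budget-based stopping criterion are illustrative assumptions, not part of Ax.

from typing import Any, Tuple

from ax.core.experiment import Experiment
from ax.global_stopping.strategies.base import BaseGlobalStoppingStrategy


class TrialBudgetStoppingStrategy(BaseGlobalStoppingStrategy):
    """Hypothetical strategy that stops once a fixed trial budget is used up."""

    def __init__(self, min_trials: int, max_trials: int) -> None:
        super().__init__(min_trials=min_trials)
        self.max_trials = max_trials

    def should_stop_optimization(
        self, experiment: Experiment, **kwargs: Any
    ) -> Tuple[bool, str]:
        # experiment.trials maps trial index to trial; its length is the
        # total number of trials attached to the experiment so far.
        if len(experiment.trials) >= self.max_trials:
            return True, f"Reached the trial budget of {self.max_trials} trials."
        return False, ""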

ImprovementGlobalStoppingStrategy

class ax.global_stopping.strategies.improvement.ImprovementGlobalStoppingStrategy(min_trials: int, window_size: int = 5, improvement_bar: float = 0.1, inactive_when_pending_trials: bool = True)[source]

Bases: BaseGlobalStoppingStrategy

A stopping strategy that stops the optimization when there has been no significant improvement over recent iterations. For single-objective optimizations, it stops the loop if the feasible (mean) objective has not improved over the past “window_size” iterations. For MOO loops, it stops the optimization if the hypervolume of the Pareto front has not improved over the past “window_size” iterations.
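
A minimal single-objective usage sketch, assuming experiment is an existing, already-populated ax.core.experiment.Experiment; the parameter values are illustrative only.

from ax.global_stopping.strategies.improvement import ImprovementGlobalStoppingStrategy

# Consider stopping only once at least 10 trials exist; stop if there has been
# no sufficiently large improvement over the last 5 iterations.
stopping_strategy = ImprovementGlobalStoppingStrategy(
    min_trials=10,
    window_size=5,
    improvement_bar=0.05,  # threshold for what counts as significant improvement
)

stop, reason = stopping_strategy.should_stop_optimization(experiment=experiment)
if stop:
    print(f"Stopping optimization: {reason}")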

should_stop_optimization(experiment: Experiment, trial_to_check: Optional[int] = None, objective_thresholds: Optional[List[ObjectiveThreshold]] = None, **kwargs: Dict[str, Any]) → Tuple[bool, str][source]

Check whether the optimization has improved in the past “window_size” iterations. For single-objective experiments, this calls _should_stop_single_objective(); for MOO experiments, it calls _should_stop_moo(). Before making either call, it performs sanity checks to handle obvious or invalid cases.

Parameters:
  • experiment – The experiment to apply the strategy on.

  • trial_to_check – The trial in the experiment at which we want to check for stopping. If None, we check at the latest trial.

  • objective_thresholds – Custom objective thresholds to use as the reference point when computing the hypervolume of the Pareto front. This is used only in the MOO setting. If not specified, the objective thresholds on the experiment’s optimization config are used (see the usage sketch below).

Returns:

A Tuple with a boolean determining whether the optimization should stop, and a str declaring the reason for stopping.
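
A sketch of the MOO call with a custom reference point; the metric names "m1" and "m2", the threshold bounds, the trial index, and the experiment object are assumptions for illustration.

from ax.core.metric import Metric
from ax.core.outcome_constraint import ObjectiveThreshold
from ax.core.types import ComparisonOp
from ax.global_stopping.strategies.improvement import ImprovementGlobalStoppingStrategy

strategy = ImprovementGlobalStoppingStrategy(min_trials=10, window_size=5)

# Hypothetical reference point for the hypervolume computation.
custom_thresholds = [
    ObjectiveThreshold(
        metric=Metric(name="m1"), bound=0.0, relative=False, op=ComparisonOp.GEQ
    ),
    ObjectiveThreshold(
        metric=Metric(name="m2"), bound=0.0, relative=False, op=ComparisonOp.GEQ
    ),
]

stop, reason = strategy.should_stop_optimization(
    experiment=experiment,  # assumed existing multi-objective Experiment
    trial_to_check=12,  # evaluate stopping as of trial index 12 (None = latest)
    objective_thresholds=custom_thresholds,
)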

ax.global_stopping.strategies.improvement.constraint_satisfaction(trial: Trial) → bool[source]

Checks whether the outcome constraints in the experiment’s optimization config are satisfied by the given trial.

Parameters:

trial – A single-arm Trial at which we want to check the constraints.

Returns:

A boolean which is True iff all outcome constraints are satisfied.
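
A brief usage sketch, assuming experiment is an existing Experiment whose optimization config defines outcome constraints and whose most recent trial is single-arm.

from ax.global_stopping.strategies.improvement import constraint_satisfaction

# experiment.trials maps trial index to trial; take the most recently created one.
latest_trial = experiment.trials[max(experiment.trials)]
if constraint_satisfaction(latest_trial):
    print(f"Trial {latest_trial.index} satisfies all outcome constraints.")
else:
    print(f"Trial {latest_trial.index} violates at least one outcome constraint.")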