ax.service¶
Ax Client¶
Managed Loop¶
- class ax.service.managed_loop.OptimizationLoop(experiment: ax.core.experiment.Experiment, evaluation_function: Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, wait_time: int = 0, run_async: bool = False, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None)[source]¶
Bases: object
Managed optimization loop, in which Ax oversees trial deployment and data gathering.
- full_run() → ax.service.managed_loop.OptimizationLoop[source]¶
Runs the full optimization loop as defined in the provided optimization plan.
- get_best_point() → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]][source]¶
Obtains the best point encountered in the course of this optimization.
- get_current_model() → Optional[ax.modelbridge.base.ModelBridge][source]¶
Obtain the most recently used model in optimization.
- static with_evaluation_function(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], evaluation_function: Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None) → OptimizationLoop[source]¶
Constructs a synchronous OptimizationLoop using an evaluation function.
- classmethod with_runners_and_metrics(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], path_to_runner: str, paths_to_metrics: List[str], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, wait_time: int = 0, random_seed: Optional[int] = None) → OptimizationLoop[source]¶
Constructs an asynchronous OptimizationLoop using Ax runners and metrics.
- ax.service.managed_loop.optimize(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], evaluation_function: Callable[[Dict[str, Optional[Union[str, bool, float, int]]], Optional[float]], Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], experiment_name: Optional[str] = None, objective_name: Optional[str] = None, minimize: bool = False, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, total_trials: int = 20, arms_per_trial: int = 1, random_seed: Optional[int] = None, generation_strategy: Optional[ax.modelbridge.generation_strategy.GenerationStrategy] = None) → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]], ax.core.experiment.Experiment, Optional[ax.modelbridge.base.ModelBridge]][source]¶
Construct and run a full optimization loop.
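A minimal synchronous usage sketch; the parameter names, metric name "loss", and quadratic objective are illustrative, not part of the API:

```python
from ax.service.managed_loop import optimize

# Hypothetical two-parameter minimization problem.
best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 15.0]},
    ],
    evaluation_function=lambda p: (p["x1"] - 2.0) ** 2 + (p["x2"] - 7.0) ** 2,
    objective_name="loss",
    minimize=True,
    total_trials=20,
)
if values is not None:
    means, covariances = values  # predictions for the returned parameterization
```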
Best Point Identification¶
- ax.service.utils.best_point.get_best_from_model_predictions(experiment: ax.core.experiment.Experiment) → Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶
- ax.service.utils.best_point.get_best_from_model_predictions_with_trial_index(experiment: ax.core.experiment.Experiment) → Optional[Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶
Given an experiment, returns the best predicted parameterization and corresponding prediction, based on the most recent Trial with predictions. If no trials have predictions, returns None.
Only some models return predictions; for instance, GPEI does, while Sobol does not.
TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters
experiment – Experiment on which to identify the best arm.
- Returns
Tuple of trial index, parameterization, and model predictions for it.
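A usage sketch, assuming `experiment` is an Ax Experiment whose recent trials were generated by a prediction-capable model (e.g. GPEI):

```python
from ax.service.utils.best_point import (
    get_best_from_model_predictions_with_trial_index,
)

result = get_best_from_model_predictions_with_trial_index(experiment)
if result is not None:
    trial_index, parameterization, predictions = result
    if predictions is not None:
        means, covariances = predictions  # ({metric: mean}, {m1: {m2: cov_1_2}})
```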
- ax.service.utils.best_point.get_best_parameters(experiment: ax.core.experiment.Experiment, use_model_predictions: bool = True) → Optional[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶
- ax.service.utils.best_point.get_best_parameters_with_trial_index(experiment: ax.core.experiment.Experiment, use_model_predictions: bool = True) → Optional[Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Optional[Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶
Given an experiment, identifies the best arm.
First attempts to do so with the model used in optimization and its corresponding predictions, if available; falls back to the best raw objective based on the data fetched from the experiment.
TModelPredictArm is of the form:
({metric_name: mean}, {metric_name_1: {metric_name_2: cov_1_2}})
- Parameters
experiment – Experiment on which to identify the best arm.
use_model_predictions – Whether to extract the best point using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
- Returns
Tuple of trial index, parameterization, and model predictions for it.
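A sketch of the simpler get_best_parameters variant, assuming `experiment` is a completed single-objective Ax Experiment:

```python
from ax.service.utils.best_point import get_best_parameters

best = get_best_parameters(experiment, use_model_predictions=True)
if best is not None:
    parameterization, predictions = best  # predictions may be None
```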
- ax.service.utils.best_point.get_best_raw_objective_point(experiment: ax.core.experiment.Experiment, optimization_config: Optional[ax.core.optimization_config.OptimizationConfig] = None) → Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Tuple[float, float]]][source]¶
- ax.service.utils.best_point.get_best_raw_objective_point_with_trial_index(experiment: ax.core.experiment.Experiment, optimization_config: Optional[ax.core.optimization_config.OptimizationConfig] = None) → Tuple[int, Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Tuple[float, float]]][source]¶
Given an experiment, identifies the arm that had the best raw objective, based on the data fetched from the experiment.
- Parameters
experiment – Experiment on which to identify the best raw objective arm.
optimization_config – Optimization config to use in absence or in place of the one stored on the experiment.
- Returns
Tuple of trial index, parameterization, and a mapping from metric name to a tuple of the corresponding objective mean and SEM.
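A usage sketch; the metric name "objective" is illustrative:

```python
from ax.service.utils.best_point import (
    get_best_raw_objective_point_with_trial_index,
)

trial_index, parameterization, metric_values = (
    get_best_raw_objective_point_with_trial_index(experiment)
)
mean, sem = metric_values["objective"]  # hypothetical metric name
```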
- ax.service.utils.best_point.get_pareto_optimal_parameters(experiment: ax.core.experiment.Experiment, generation_strategy: ax.modelbridge.generation_strategy.GenerationStrategy, use_model_predictions: bool = True) → Optional[Dict[int, Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Tuple[Dict[str, float], Optional[Dict[str, Dict[str, float]]]]]]][source]¶
Identifies the best parameterizations tried in the experiment so far, using model predictions if use_model_predictions is True and using observed values from the experiment otherwise. By default, uses model predictions to account for observation noise.
NOTE: The format of this method's output is as follows: {trial_index -> (parameterization, (means, covariances))}, where means are a dictionary of form {metric_name -> metric_mean} and covariances are a nested dictionary of form {one_metric_name -> {another_metric_name: covariance}}.
- Parameters
experiment – Experiment, from which to find Pareto-optimal arms.
generation_strategy – Generation strategy containing the modelbridge.
use_model_predictions – Whether to extract the Pareto frontier using model predictions or directly observed values. If True, the metric means and covariances in this method's output will also be based on model predictions and may differ from the observed values.
- Returns
None if it was not possible to extract the Pareto frontier; otherwise, a mapping from trial index to a tuple of: the parameterization of the arm in that trial, and a two-item tuple of the metric means dictionary and covariance matrix (model-predicted if use_model_predictions=True and observed otherwise).
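A sketch of consuming the output, assuming a multi-objective `experiment` and its `generation_strategy`:

```python
from ax.service.utils.best_point import get_pareto_optimal_parameters

frontier = get_pareto_optimal_parameters(experiment, generation_strategy)
if frontier is not None:
    for trial_index, (parameterization, (means, covariances)) in frontier.items():
        print(trial_index, parameterization, means)
```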
Instantiation¶
- class ax.service.utils.instantiation.MetricObjective(value)[source]¶
Bases: enum.Enum
An enumeration.
- MAXIMIZE = 2¶
- MINIMIZE = 1¶
- class ax.service.utils.instantiation.ObjectiveProperties(minimize: bool, threshold: Optional[float] = None)[source]¶
Bases: object
- ax.service.utils.instantiation.build_objective_threshold(objective: str, objective_properties: ax.service.utils.instantiation.ObjectiveProperties) → str[source]¶
Constructs a constraint string for an objective threshold, interpretable by make_experiment().
- Parameters
objective – Name of the objective.
objective_properties – Object containing: minimize – whether this experiment represents a minimization problem; threshold – the bound in the objective's threshold constraint.
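A sketch of the expected behavior; the exact output string is an assumption, inferred from the outcome-constraint syntax used elsewhere in this module:

```python
from ax.service.utils.instantiation import (
    ObjectiveProperties,
    build_objective_threshold,
)

# For a maximized objective, the threshold becomes a lower bound.
build_objective_threshold(
    "accuracy", ObjectiveProperties(minimize=False, threshold=0.8)
)
# expected to yield something like: "accuracy >= 0.8"
```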
- ax.service.utils.instantiation.constraint_from_str(representation: str, parameters: Dict[str, ax.core.parameter.Parameter]) → ax.core.parameter_constraint.ParameterConstraint[source]¶
Parse string representation of a parameter constraint.
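A sketch, assuming two float-valued RangeParameters; the parameter names are illustrative:

```python
from ax.core.parameter import ParameterType, RangeParameter
from ax.service.utils.instantiation import constraint_from_str

parameters = {
    "x1": RangeParameter("x1", ParameterType.FLOAT, lower=0.0, upper=1.0),
    "x2": RangeParameter("x2", ParameterType.FLOAT, lower=0.0, upper=1.0),
}
constraint = constraint_from_str("x1 + x2 <= 1.5", parameters)
```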
- ax.service.utils.instantiation.data_and_evaluations_from_raw_data(raw_data: Dict[str, Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], metric_names: List[str], trial_index: int, sample_sizes: Dict[str, int], start_time: Optional[int] = None, end_time: Optional[int] = None) → Tuple[Dict[str, Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]]], ax.core.data.Data][source]¶
Transforms evaluations into Ax Data.
Each evaluation is either a trial evaluation: {metric_name -> (mean, SEM)} or a fidelity trial evaluation for multi-fidelity optimizations: [(fidelities, {metric_name -> (mean, SEM)})].
- Parameters
raw_data – Mapping from arm name to raw_data.
metric_names – Names of metrics used to transform raw data to evaluations.
trial_index – Index of the trial to which the evaluations belong.
sample_sizes – Number of samples collected for each arm, may be empty if unavailable.
start_time – Optional start time of run of the trial that produced this data, in milliseconds.
end_time – Optional end time of run of the trial that produced this data, in milliseconds.
- ax.service.utils.instantiation.logger = <Logger ax.service.utils.instantiation (DEBUG)>¶
Utilities for RESTful-like instantiation of Ax classes needed in AxClient.
- ax.service.utils.instantiation.make_experiment(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], name: Optional[str] = None, parameter_constraints: Optional[List[str]] = None, outcome_constraints: Optional[List[str]] = None, status_quo: Optional[Dict[str, Optional[Union[str, bool, float, int]]]] = None, experiment_type: Optional[str] = None, tracking_metric_names: Optional[List[str]] = None, objective_name: Optional[str] = None, minimize: bool = False, objectives: Optional[Dict[str, str]] = None, objective_thresholds: Optional[List[str]] = None, support_intermediate_data: bool = False, immutable_search_space_and_opt_config: bool = True, is_test: bool = False) → ax.core.experiment.Experiment[source]¶
Instantiation wrapper that allows for Ax Experiment creation without importing or instantiating any Ax classes.
- Parameters
parameters – List of dictionaries representing parameters in the experiment search space. Required elements in each dictionary are:
1. "name" (name of parameter, string),
2. "type" (type of parameter: "range", "fixed", or "choice", string),
and one of the following:
3a. "bounds" for range parameters (list of two values, lower bound first),
3b. "values" for choice parameters (list of values), or
3c. "value" for fixed parameters (single value).
Optional elements are:
1. "log_scale" (for float-valued range parameters, bool),
2. "value_type" (to specify the type that values of this parameter should take; expects "float", "int", "bool", or "str"),
3. "is_fidelity" (bool) and "target_value" (float) for fidelity parameters,
4. "is_ordered" (bool) for choice parameters,
5. "is_task" (bool) for task parameters, and
6. "digits" (int) for float-valued range parameters.
name – Name of the experiment to be created.
parameter_constraints – List of string representation of parameter constraints, such as “x3 >= x4” or “-x3 + 2*x4 - 3.5*x5 >= 2”. For the latter constraints, any number of arguments is accepted, and acceptable operators are “<=” and “>=”.
outcome_constraints – List of string representation of outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”
status_quo – Parameterization of the current state of the system. If set, this will be added to each trial to be evaluated alongside test configurations.
experiment_type – String indicating type of the experiment (e.g. name of a product in which it is used), if any.
tracking_metric_names – Names of additional tracking metrics not used for optimization.
objective_name – Name of the metric used as objective in this experiment, if experiment is single-objective optimization.
minimize – Whether this experiment represents a minimization problem, if experiment is a single-objective optimization.
objectives – Mapping from an objective name to “minimize” or “maximize” representing the direction for that objective. Used only for multi-objective optimization experiments.
objective_thresholds – A list of objective threshold constraints for multi-objective optimization, in the same string format as the outcome_constraints argument.
support_intermediate_data – Whether trials may report metrics results for incomplete runs.
immutable_search_space_and_opt_config – Whether the search space and optimization config on this experiment are immutable, i.e. cannot be updated after creation. Defaults to True. If set to True, we won't store or load copies of the search space and optimization config on each generator run, which will improve storage performance.
is_test – Whether this experiment will be a test experiment (useful for marking test experiments in storage etc). Defaults to False.
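A minimal sketch of single-objective experiment creation; the experiment, parameter, and metric names here are illustrative:

```python
from ax.service.utils.instantiation import make_experiment

experiment = make_experiment(
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-4, 1e-1], "log_scale": True},
        {"name": "batch_size", "type": "choice", "values": [16, 32, 64]},
    ],
    name="tuning_sketch",  # hypothetical experiment name
    objective_name="val_accuracy",  # hypothetical metric name
    minimize=False,
)
```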
- ax.service.utils.instantiation.make_objective_thresholds(objective_thresholds: List[str], status_quo_defined: bool) → List[ax.core.outcome_constraint.ObjectiveThreshold][source]¶
- ax.service.utils.instantiation.make_objectives(objectives: Dict[str, str]) → List[ax.core.objective.Objective][source]¶
- ax.service.utils.instantiation.make_optimization_config(objectives: Dict[str, str], objective_thresholds: List[str], outcome_constraints: List[str], status_quo_defined: bool) → ax.core.optimization_config.OptimizationConfig[source]¶
- ax.service.utils.instantiation.make_outcome_constraints(outcome_constraints: List[str], status_quo_defined: bool) → List[ax.core.outcome_constraint.OutcomeConstraint][source]¶
- ax.service.utils.instantiation.make_search_space(parameters: List[Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]], parameter_constraints: List[str]) → ax.core.search_space.SearchSpace[source]¶
- ax.service.utils.instantiation.objective_threshold_constraint_from_str(representation: str) → ax.core.outcome_constraint.ObjectiveThreshold[source]¶
- ax.service.utils.instantiation.optimization_config_from_objectives(objectives: List[ax.core.objective.Objective], objective_thresholds: List[ax.core.outcome_constraint.ObjectiveThreshold], outcome_constraints: List[ax.core.outcome_constraint.OutcomeConstraint]) → ax.core.optimization_config.OptimizationConfig[source]¶
Parse objectives and constraints to define an optimization config.
The resulting optimization config will be a regular single-objective config if objectives is a list of one element, and a multi-objective config otherwise.
NOTE: If passing in multiple objectives, objective_thresholds must be a non-empty list defining constraints for each objective.
- ax.service.utils.instantiation.outcome_constraint_from_str(representation: str) → ax.core.outcome_constraint.OutcomeConstraint[source]¶
Parse string representation of an outcome constraint.
- ax.service.utils.instantiation.parameter_from_json(representation: Dict[str, Union[str, bool, float, int, None, List[Optional[Union[str, bool, float, int]]], Dict[str, List[str]]]]) → ax.core.parameter.Parameter[source]¶
Instantiate a parameter from JSON representation.
- ax.service.utils.instantiation.raw_data_to_evaluation(raw_data: Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]], metric_names: List[str], start_time: Optional[int] = None, end_time: Optional[int] = None) → Union[Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]], float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]], List[Tuple[Dict[str, Optional[Union[str, bool, float, int]]], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]], List[Tuple[Dict[str, Hashable], Dict[str, Union[float, numpy.floating, numpy.integer, Tuple[Union[float, numpy.floating, numpy.integer], Optional[Union[float, numpy.floating, numpy.integer]]]]]]]][source]¶
Format the trial evaluation data to a standard TTrialEvaluation (mapping from metric names to a tuple of mean and SEM) representation, or to a TMapTrialEvaluation.
Note: this function expects raw_data to be data for a Trial, not a BatchTrial.
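A sketch of the accepted single-metric input formats; the metric name "objective" is illustrative:

```python
from ax.service.utils.instantiation import raw_data_to_evaluation

# All three forms normalize to a {"objective": (mean, SEM)} trial evaluation;
# for the bare-float form, the SEM is treated as unknown.
raw_data_to_evaluation(3.7, metric_names=["objective"])
raw_data_to_evaluation((3.7, 0.2), metric_names=["objective"])
raw_data_to_evaluation({"objective": (3.7, 0.2)}, metric_names=["objective"])
```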
Reporting¶
- ax.service.utils.report_utils.exp_to_df(exp: ax.core.experiment.Experiment, metrics: Optional[List[ax.core.metric.Metric]] = None, run_metadata_fields: Optional[List[str]] = None, trial_properties_fields: Optional[List[str]] = None, **kwargs: Any) → pandas.DataFrame[source]¶
Transforms an experiment to a DataFrame with rows keyed by trial_index and arm_name, metrics pivoted into one row. If the pivot results in more than one row per arm (or one row per arm * map_keys combination, if map_keys are present), results are omitted and a warning is produced. Only supports Experiment. Transforms an Experiment into a pd.DataFrame.
- Parameters
exp – An Experiment that may have pending trials.
metrics – Override list of metrics to return. Return all metrics if None.
run_metadata_fields – Fields to extract from trial.run_metadata for each trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial.
trial_properties_fields – Fields to extract from trial._properties for each trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial. Output column names will be prepended with "trial_properties_".
**kwargs – Custom named arguments, useful for passing complex objects from call-site to the fetch_data callback.
- Returns
A dataframe of inputs, metadata, and metrics by trial and arm (and map_keys, if present). If no trials are available, returns an empty dataframe. If no metric outputs are available, returns a dataframe of inputs and metadata.
- Return type
DataFrame
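A usage sketch; the run_metadata field name "job_id" is hypothetical:

```python
from ax.service.utils.report_utils import exp_to_df

df = exp_to_df(exp=experiment, run_metadata_fields=["job_id"])
print(df[["trial_index", "arm_name"]].head())  # rows keyed by trial and arm
```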
- ax.service.utils.report_utils.get_best_trial(exp: ax.core.experiment.Experiment, additional_metrics: Optional[List[ax.core.metric.Metric]] = None, run_metadata_fields: Optional[List[str]] = None, true_objective_metric_name: Optional[str] = None, true_objective_minimize: Optional[bool] = None, **kwargs: Any) → Optional[pandas.DataFrame][source]¶
Finds the optimal trial given an experiment, based on raw objective value.
Returns a 1-row dataframe. Should match the row of exp_to_df with the best raw objective value, given the same arguments.
- Parameters
exp – An Experiment that may have pending trials.
additional_metrics – List of metrics to return in addition to the objective metric. Return all metrics if None.
run_metadata_fields – Fields to extract from trial.run_metadata for each trial in experiment.trials. If there are multiple arms per trial, these fields will be replicated across the arms of a trial.
true_objective_metric_name – Objective by which to choose the best point (if the objective attached to the experiment is being used as a proxy).
true_objective_minimize – Whether the true objective should be minimized instead of maximized. If not present, defaults to the direction of the experiment's objective.
**kwargs – Custom named arguments, useful for passing complex objects from call-site to the fetch_data callback.
- Returns
A dataframe of inputs and metrics of the optimal trial.
- Return type
DataFrame
- ax.service.utils.report_utils.get_standard_plots(experiment: ax.core.experiment.Experiment, model: Optional[ax.modelbridge.base.ModelBridge], data: Optional[ax.core.data.Data] = None, model_transitions: Optional[List[int]] = None, true_objective_metric_name: Optional[str] = None) → List[plotly.graph_objs._figure.Figure][source]¶
Extract standard plots for single-objective optimization.
Extracts a list of plots from an Experiment and ModelBridge of general interest to an Ax user. Currently not supported are:
- TODO: multi-objective optimization
- TODO: ChoiceParameter plots
- Parameters
experiment – The Experiment from which to obtain standard plots.
model – The ModelBridge used to suggest trial parameters.
data – If specified, data to which to fit the model before generating plots.
model_transitions – The arm numbers at which shifts in generation_strategy occur.
- Returns
a plot of objective value vs. trial index, to show experiment progression;
a plot of objective value vs. range parameter values, only included if the model associated with generation_strategy can create predictions. This consists of:
a plot_slice plot if the search space contains one range parameter, or
an interact_contour plot if the search space contains multiple range parameters.
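A usage sketch, assuming an `experiment` and a fitted `generation_strategy`; each returned object is a plotly Figure:

```python
from ax.service.utils.report_utils import get_standard_plots

plots = get_standard_plots(
    experiment=experiment,
    model=generation_strategy.model,  # most recently fit ModelBridge
)
for figure in plots:
    figure.show()
```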
WithDBSettingsBase¶
EarlyStopping¶
- ax.service.utils.early_stopping.should_stop_trials_early(early_stopping_strategy: Optional[ax.early_stopping.strategies.BaseEarlyStoppingStrategy], trial_indices: Set[int], experiment: ax.core.experiment.Experiment) → Dict[int, Optional[str]][source]¶
Evaluate whether to early-stop running trials.
- Parameters
early_stopping_strategy – A BaseEarlyStoppingStrategy that determines whether a trial should be stopped given the state of an experiment.
trial_indices – Indices of trials to consider for early stopping.
experiment – The experiment containing the trials.
- Returns
A dictionary mapping trial indices that should be early stopped to (optional) messages with the associated reason.
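A sketch of calling this from a custom scheduling loop. PercentileEarlyStoppingStrategy is one concrete strategy in ax.early_stopping.strategies; its constructor arguments and the trial indices below are assumptions for illustration:

```python
from ax.early_stopping.strategies import PercentileEarlyStoppingStrategy
from ax.service.utils.early_stopping import should_stop_trials_early

# Assumed strategy configuration; consult the strategy's own docs for options.
strategy = PercentileEarlyStoppingStrategy(percentile_threshold=50.0)
stop_reasons = should_stop_trials_early(
    early_stopping_strategy=strategy,
    trial_indices={0, 1, 2},  # illustrative indices of running trials
    experiment=experiment,
)
for trial_index, reason in stop_reasons.items():
    print(f"stop trial {trial_index}: {reason}")
```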