ax.plot

Rendering

ax.plot.render.plot_config_to_html(plot_config: ax.plot.base.AxPlotConfig, plot_module_name: str = 'ax.plot', plot_resources: Dict[enum.Enum, str] = {<AxPlotTypes.GENERIC: 1>: 'generic_plotly.js'}, inject_helpers: bool = False) → str[source]

Generate the HTML and JS corresponding to a plot config.
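For example, a config produced by any of the plotting functions documented below can be rendered to a standalone HTML file. A minimal sketch, assuming model is an already-fitted ModelBridge and that "x1", "x2", and "accuracy" are placeholder parameter and metric names in the experiment:

    from ax.plot.contour import plot_contour
    from ax.plot.render import plot_config_to_html

    # `model` is assumed to be a fitted ModelBridge; "x1", "x2", and "accuracy"
    # are placeholder parameter / metric names.
    config = plot_contour(model=model, param_x="x1", param_y="x2", metric_name="accuracy")
    html = plot_config_to_html(config, inject_helpers=True)

    with open("contour_plot.html", "w") as f:
        f.write(html)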

Plots

Base

class ax.plot.base.AxPlotConfig[source]

Bases: ax.plot.base._AxPlotConfigBase

Config for plots

class ax.plot.base.AxPlotTypes[source]

Bases: enum.Enum

Enum of Ax plot types.

BANDIT_ROLLOUT = 4
CONTOUR = 0
GENERIC = 1
HTML = 6
INTERACT_CONTOUR = 3
INTERACT_SLICE = 5
SLICE = 2
class ax.plot.base.PlotData[source]

Bases: tuple

Struct for plot data, including both in-sample and out-of-sample arms

property in_sample

Alias for field number 1

property metrics

Alias for field number 0

property out_of_sample

Alias for field number 2

property status_quo_name

Alias for field number 3

class ax.plot.base.PlotInSampleArm[source]

Bases: tuple

Struct for in-sample arms (both observed and predicted data)

property context_stratum

Alias for field number 6

property name

Alias for field number 0

property parameters

Alias for field number 1

property se

Alias for field number 4

property se_hat

Alias for field number 5

property y

Alias for field number 2

property y_hat

Alias for field number 3

class ax.plot.base.PlotMetric[source]

Bases: tuple

Struct for metric

property metric

Alias for field number 0

property pred

Alias for field number 1

property rel

Alias for field number 2

class ax.plot.base.PlotOutOfSampleArm[source]

Bases: tuple

Struct for out-of-sample arms (only predicted data)

property context_stratum

Alias for field number 4

property name

Alias for field number 0

property parameters

Alias for field number 1

property se_hat

Alias for field number 3

property y_hat

Alias for field number 2

Bandit Rollout

ax.plot.bandit_rollout.plot_bandit_rollout(experiment: ax.core.experiment.Experiment) → ax.plot.base.AxPlotConfig[source]

Plot the bandit rollout from an experiment.
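A minimal usage sketch, assuming experiment is an Experiment with batch trials whose arm weights were re-allocated over time (a bandit rollout), rendered in a notebook via the helpers in ax.utils.notebook.plotting:

    from ax.plot.bandit_rollout import plot_bandit_rollout
    from ax.utils.notebook.plotting import init_notebook_plotting, render

    init_notebook_plotting()
    # `experiment` is assumed to be an Experiment with bandit batch trials.
    render(plot_bandit_rollout(experiment))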

Contour Plot

ax.plot.contour.interact_contour(model: ax.modelbridge.base.ModelBridge, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, lower_is_better: bool = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]

Create an interactive plot with predictions for a 2-d slice of the parameter space.

Parameters
  • model – ModelBridge that contains model for predictions

  • metric_name – Name of metric to plot

  • generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.

  • relative – Predictions relative to status quo

  • density – Number of points along slice to evaluate predictions.

  • slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters.

  • lower_is_better – Lower values for metric are better.

  • fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
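A minimal usage sketch, assuming model is a fitted ModelBridge over a search space with at least two tunable parameters and a metric named "accuracy" (a placeholder name):

    from ax.plot.contour import interact_contour
    from ax.utils.notebook.plotting import render

    # Interactive contour; the plotted parameter pair and metric settings are
    # selectable in the rendered plot.
    render(interact_contour(model=model, metric_name="accuracy"))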

ax.plot.contour.plot_contour(model: ax.modelbridge.base.ModelBridge, param_x: str, param_y: str, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, lower_is_better: bool = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]

Plot predictions for a 2-d slice of the parameter space.

Parameters
  • model – ModelBridge that contains model for predictions

  • param_x – Name of parameter that will be sliced on x-axis

  • param_y – Name of parameter that will be sliced on y-axis

  • metric_name – Name of metric to plot

  • generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.

  • relative – Predictions relative to status quo

  • density – Number of points along slice to evaluate predictions.

  • slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters.

  • lower_is_better – Lower values for metric are better.

  • fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
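A minimal sketch of fixing the remaining parameters via slice_values; the parameter and metric names are placeholders, and model is assumed to be a fitted ModelBridge:

    from ax.plot.contour import plot_contour
    from ax.utils.notebook.plotting import render

    # Parameters other than the two being plotted are fixed via `slice_values`.
    render(
        plot_contour(
            model=model,
            param_x="learning_rate",
            param_y="dropout",
            metric_name="accuracy",
            slice_values={"batch_size": 64},
        )
    )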

ax.plot.contour.short_name(param_name: str) → str[source]

Feature Importances

ax.plot.feature_importances.plot_feature_importance(df: pandas.DataFrame, title: str) → ax.plot.base.AxPlotConfig[source]
ax.plot.feature_importances.plot_feature_importance_by_feature(model: ax.modelbridge.base.ModelBridge, relative: bool = True) → ax.plot.base.AxPlotConfig[source]

One plot per metric, showing importances by feature.

ax.plot.feature_importances.plot_feature_importance_by_metric(model: ax.modelbridge.base.ModelBridge) → ax.plot.base.AxPlotConfig[source]

One plot per feature, showing importances by metric.

ax.plot.feature_importances.plot_relative_feature_importance(model: ax.modelbridge.base.ModelBridge) → ax.plot.base.AxPlotConfig[source]

Create a stacked bar chart of feature importances per metric
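A minimal usage sketch, assuming model is a fitted ModelBridge whose underlying model exposes feature importances (e.g. a GP with ARD lengthscales):

    from ax.plot.feature_importances import (
        plot_feature_importance_by_feature,
        plot_relative_feature_importance,
    )
    from ax.utils.notebook.plotting import render

    # One subplot per metric, importances broken down by feature.
    render(plot_feature_importance_by_feature(model))
    # Stacked bar chart of relative importances per metric.
    render(plot_relative_feature_importance(model))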

Marginal Effects

ax.plot.marginal_effects.plot_marginal_effects(model: ax.modelbridge.base.ModelBridge, metric: str) → ax.plot.base.AxPlotConfig[source]

Calculates and plots the marginal effects – the effect of changing one factor away from the randomized distribution of the experiment and fixing it at a particular level.

Parameters
  • model – Model to use for estimating effects

  • metric – The metric for which to plot marginal effects.

Returns

AxPlotConfig of the marginal effects
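A minimal usage sketch; model is assumed to be a fitted ModelBridge and "conversion_rate" is a placeholder metric name:

    from ax.plot.marginal_effects import plot_marginal_effects
    from ax.utils.notebook.plotting import render

    # Marginal effect of fixing each factor at each level, for one metric.
    render(plot_marginal_effects(model, metric="conversion_rate"))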

Model Diagnostics

ax.plot.diagnostic.interact_batch_comparison(observations: List[ax.core.observation.Observation], experiment: ax.core.experiment.Experiment, batch_x: int, batch_y: int, rel: bool = False, status_quo_name: Optional[str] = None) → ax.plot.base.AxPlotConfig[source]

Compare repeated arms from two trials; select metric via dropdown.

Parameters
  • observations – List of observations to compute comparison.

  • batch_x – Index of batch for x-axis.

  • batch_y – Index of batch for y-axis.

  • rel – Whether to relativize data against status_quo arm.

  • status_quo_name – Name of the status_quo arm.

ax.plot.diagnostic.interact_cross_validation(cv_results: List[ax.modelbridge.cross_validation.CVResult], show_context: bool = True) → ax.plot.base.AxPlotConfig[source]

Interactive cross-validation (CV) plotting; select metric via dropdown.

Note: uses the Plotly version of dropdown (which means that all data is stored within the notebook).

Parameters
  • cv_results – cross-validation results.

  • show_context – if True, show context on hover.
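A minimal usage sketch, pairing this plot with cross_validate from ax.modelbridge.cross_validation; model is assumed to be a fitted ModelBridge:

    from ax.modelbridge.cross_validation import cross_validate
    from ax.plot.diagnostic import interact_cross_validation
    from ax.utils.notebook.plotting import render

    # Leave-one-out cross-validation of the fitted model, then an interactive
    # predicted-vs-observed plot with a metric dropdown.
    cv_results = cross_validate(model)
    render(interact_cross_validation(cv_results))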

ax.plot.diagnostic.interact_empirical_model_validation(batch: ax.core.batch_trial.BatchTrial, data: ax.core.data.Data) → ax.plot.base.AxPlotConfig[source]

Compare the model predictions for the batch arms against observed data.

Relies on the model predictions stored on the generator_runs of batch.

Parameters
  • batch – Batch on which to perform analysis.

  • data – Observed data for the batch.

Returns

AxPlotConfig for the plot.

ax.plot.diagnostic.tile_cross_validation(cv_results: List[ax.modelbridge.cross_validation.CVResult], show_arm_details_on_hover: bool = True, show_context: bool = True) → ax.plot.base.AxPlotConfig[source]

Tile version of CV plots; sorted by ‘best fitting’ outcomes.

Plots are sorted in decreasing order using the p-value of a Fisher exact test statistic.

Parameters
  • cv_results – cross-validation results.

  • include_measurement_error – if True, include measurement_error metrics in plot.

  • show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is True.

  • show_context – if True (default), display context on hover.

Pareto Plots

ax.plot.pareto_frontier.interact_pareto_frontier(frontier_list: List[ax.plot.pareto_utils.ParetoFrontierResults], CI_level: float = 0.9, show_parameterization_on_hover: bool = True) → ax.plot.base.AxPlotConfig[source]

Plot a Pareto frontier from a list of ParetoFrontierResults objects.

ax.plot.pareto_frontier.plot_pareto_frontier(frontier: ax.plot.pareto_utils.ParetoFrontierResults, CI_level: float = 0.9, show_parameterization_on_hover: bool = True) → ax.plot.base.AxPlotConfig[source]

Plot a Pareto frontier from a ParetoFrontierResults object.

Parameters
  • frontier (ParetoFrontierResults) – The results of the Pareto frontier computation.

  • CI_level (float, optional) – The confidence level, e.g. 0.95 (95%).

  • show_parameterization_on_hover (bool, optional) – If True, show the parameterization of the points on the frontier on hover.

Returns

The resulting Plotly plot definition.

Return type

AxPlotConfig

class ax.plot.pareto_utils.COLORS[source]

Bases: enum.Enum

An enumeration.

CORAL = (251, 128, 114)
LIGHT_PURPLE = (190, 186, 218)
ORANGE = (253, 180, 98)
PINK = (188, 128, 189)
STEELBLUE = (128, 177, 211)
TEAL = (141, 211, 199)
class ax.plot.pareto_utils.ParetoFrontierResults[source]

Bases: tuple

Container for results from Pareto frontier computation.

property absolute_metrics

Alias for field number 5

property means

Alias for field number 1

property outcome_constraints

Alias for field number 6

property param_dicts

Alias for field number 0

property primary_metric

Alias for field number 3

property secondary_metric

Alias for field number 4

property sems

Alias for field number 2

ax.plot.pareto_utils.compute_pareto_frontier(experiment: ax.core.experiment.Experiment, primary_objective: ax.core.metric.Metric, secondary_objective: ax.core.metric.Metric, data: Optional[ax.core.data.Data] = None, outcome_constraints: Optional[List[ax.core.outcome_constraint.OutcomeConstraint]] = None, absolute_metrics: Optional[List[str]] = None, num_points: int = 10, trial_index: Optional[int] = None, chebyshev: bool = True) → ax.plot.pareto_utils.ParetoFrontierResults[source]

Compute the Pareto frontier between two objectives. For experiments with batch trials, a trial index must be provided.

Parameters
  • experiment – The experiment to compute a pareto frontier for.

  • primary_objective – The primary objective to optimize.

  • secondary_objective – The secondary objective against which to trade off the primary objective.

  • outcome_constraints – Outcome constraints to be respected by the optimization. Can only contain constraints on metrics that are not primary or secondary objectives.

  • absolute_metrics – List of outcome metrics that should NOT be relativized w.r.t. the status quo (all other outcomes will be in % relative to status_quo).

  • num_points – The number of points to compute on the Pareto frontier.

  • chebyshev – Whether to use augmented_chebyshev_scalarization when computing Pareto Frontier points.

Returns

A NamedTuple with the following fields:
  • param_dicts: The parameter dicts of the points generated on the Pareto Frontier.

  • means: The posterior mean predictions of the model for each metric (same order as the param dicts).

  • sems: The posterior SEM predictions of the model for each metric (same order as the param dicts).

  • primary_metric: The name of the primary metric.

  • secondary_metric: The name of the secondary metric.

  • absolute_metrics: List of outcome metrics that are NOT relativized w.r.t. the status quo (all other metrics are in % relative to status_quo).

Return type

ParetoFrontierResults
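A minimal sketch of computing and plotting a frontier; "revenue" and "latency" are placeholder metric names, and the lookup via Experiment.metrics assumes those metrics are registered on the experiment:

    from ax.plot.pareto_frontier import plot_pareto_frontier
    from ax.plot.pareto_utils import compute_pareto_frontier
    from ax.utils.notebook.plotting import render

    # `experiment` is assumed to already have attached data for both metrics.
    frontier = compute_pareto_frontier(
        experiment=experiment,
        primary_objective=experiment.metrics["revenue"],
        secondary_objective=experiment.metrics["latency"],
        absolute_metrics=["revenue", "latency"],
        num_points=20,
    )
    render(plot_pareto_frontier(frontier, CI_level=0.90))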

ax.plot.pareto_utils.rgba(rgb_tuple: Tuple[float], alpha: float = 1) → str[source]

Convert RGB tuple to an RGBA string.

Scatter Plots

ax.plot.scatter.interact_fitted(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, show_arm_details_on_hover: bool = True, show_CI: bool = True, arm_noun: str = 'arm', metrics: Optional[List[str]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]

Interactive fitted outcome plots for each arm used in fitting the model.

Choose the outcome to plot using a dropdown.

Parameters
  • model – model to use for predictions.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • rel – if True, use relative effects. Default is True.

  • show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is True.

  • show_CI – if True, render confidence intervals.

  • arm_noun – noun to use instead of “arm” (e.g. group)

  • metrics – List of metric names to restrict to when plotting.

  • fixed_features – Fixed features to use when making model predictions.

  • data_selector – Function for selecting observations for plotting.

ax.plot.scatter.lattice_multiple_metrics(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, show_arm_details_on_hover: bool = False, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]

Plot raw values or predictions of combinations of two metrics for arms.

Parameters
  • model – model to draw predictions from.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • rel – if True, use relative effects. Default is True.

  • show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is False.

  • data_selector – Function for selecting observations for plotting.

ax.plot.scatter.plot_fitted(model: ax.modelbridge.base.ModelBridge, metric: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, custom_arm_order: Optional[List[str]] = None, custom_arm_order_name: str = 'Custom', show_CI: bool = True, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]

Plot fitted metrics.

Parameters
  • model – model to use for predictions.

  • metric – metric to plot predictions for.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • rel – if True, use relative effects. Default is True.

  • custom_arm_order – a list of arm names in the order corresponding to how they should be plotted on the x-axis. If not None, this is the default ordering.

  • custom_arm_order_name – name for custom ordering to show in the ordering dropdown. Default is ‘Custom’.

  • show_CI – if True, render confidence intervals.

  • data_selector – Function for selecting observations for plotting.
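A minimal usage sketch; model is assumed to be a fitted ModelBridge and "accuracy" a placeholder metric name:

    from ax.plot.scatter import plot_fitted
    from ax.utils.notebook.plotting import render

    # Model-predicted effect for each arm, relative to the status quo.
    render(plot_fitted(model=model, metric="accuracy", rel=True))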

ax.plot.scatter.plot_multiple_metrics(model: ax.modelbridge.base.ModelBridge, metric_x: str, metric_y: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]

Plot raw values or predictions of two metrics for arms.

All arms used in the model are included in the plot. Additional arms can be passed through the generator_runs_dict argument.

Parameters
  • model – model to draw predictions from.

  • metric_x – metric to plot on the x-axis.

  • metric_y – metric to plot on the y-axis.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • rel – if True, use relative effects. Default is True.

  • data_selector – Function for selecting observations for plotting.

ax.plot.scatter.plot_objective_vs_constraints(model: ax.modelbridge.base.ModelBridge, objective: str, subset_metrics: Optional[List[str]] = None, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, infer_relative_constraints: Optional[bool] = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]

Plot the tradeoff between an objective and all other metrics in a model.

All arms used in the model are included in the plot. Additional arms can be passed through via the generator_runs_dict argument.

The fixed_features input can be used to override fields of the in-sample arms when making model predictions.

Parameters
  • model – model to draw predictions from.

  • objective – metric to optimize. Plotted on the x-axis.

  • subset_metrics – list of metrics to plot on the y-axes, if only a subset of all metrics in the model is needed.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • rel – if True, use relative effects. Default is True.

  • infer_relative_constraints – if True, read relative spec from model’s optimization config. Absolute constraints will not be relativized; relative ones will be. Objectives will respect the rel parameter. Metrics that are not constraints will be relativized.

  • fixed_features – Fixed features to use when making model predictions.

  • data_selector – Function for selecting observations for plotting.

ax.plot.scatter.tile_fitted(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, show_arm_details_on_hover: bool = False, show_CI: bool = True, arm_noun: str = 'arm', metrics: Optional[List[str]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]

Tile version of fitted outcome plots.

Parameters
  • model – model to use for predictions.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • rel – if True, use relative effects. Default is True.

  • show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is False.

  • show_CI – if True, render confidence intervals.

  • arm_noun – noun to use instead of “arm” (e.g. group)

  • metrics – List of metric names to restrict to when plotting.

  • fixed_features – Fixed features to use when making model predictions.

  • data_selector – Function for selecting observations for plotting.

Slice Plot

ax.plot.slice.interact_slice(model: ax.modelbridge.base.ModelBridge, param_name: str, metric_name: str = '', generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]

Create an interactive plot with predictions for a 1-d slice of the parameter space.

Parameters
  • model – ModelBridge that contains model for predictions

  • param_name – Name of parameter that will be sliced

  • metric_name – Name of metric to plot

  • generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.

  • relative – Predictions relative to status quo

  • density – Number of points along slice to evaluate predictions.

  • slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters. Ignored if fixed_features is specified.

  • fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.

ax.plot.slice.plot_slice(model: ax.modelbridge.base.ModelBridge, param_name: str, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]

Plot predictions for a 1-d slice of the parameter space.

Parameters
  • model – ModelBridge that contains model for predictions

  • param_name – Name of parameter that will be sliced

  • metric_name – Name of metric to plot

  • generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.

  • relative – Predictions relative to status quo

  • density – Number of points along slice to evaluate predictions.

  • slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters. Ignored if fixed_features is specified.

  • fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
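A minimal usage sketch; "learning_rate" and "accuracy" are placeholder names and model is assumed to be a fitted ModelBridge:

    from ax.plot.slice import plot_slice
    from ax.utils.notebook.plotting import render

    # 1-d slice over one parameter; the remaining parameters are fixed at the
    # status quo / mean / mode as described above.
    render(plot_slice(model=model, param_name="learning_rate", metric_name="accuracy"))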

Table

ax.plot.table_view.get_color(x: float, ci: float, rel: bool, reverse: bool)[source]

Determine the color of the table cell.

ax.plot.table_view.table_view_plot(experiment: ax.core.experiment.Experiment, data: ax.core.data.Data, use_empirical_bayes: bool = True, only_data_frame: bool = False, arm_noun: str = 'arm')[source]

Table of means and confidence intervals.

Table is of the form:

  arm    metric_1     metric_2
  0_0    mean ± CI    mean ± CI
  0_1    mean ± CI    mean ± CI
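A minimal usage sketch, assuming only_data_frame=True returns the underlying pandas DataFrame rather than plot configs; experiment is an Experiment with fetchable data:

    from ax.plot.table_view import table_view_plot

    # Per-arm, per-metric means and confidence intervals (shrunk via empirical
    # Bayes by default), returned as a DataFrame (assumption) rather than a
    # rendered table.
    df = table_view_plot(
        experiment=experiment,
        data=experiment.fetch_data(),
        only_data_frame=True,
    )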

Trace Plots

ax.plot.trace.get_running_trials_per_minute(experiment: ax.core.experiment.Experiment, show_until_latest_end_plus_timedelta: datetime.timedelta = datetime.timedelta(seconds=300)) → ax.plot.base.AxPlotConfig[source]
ax.plot.trace.mean_trace_scatter(y: numpy.ndarray, trace_color: Tuple[int] = (128, 177, 211), legend_label: str = 'mean', hover_labels: Optional[List[str]] = None) → plotly.graph_objs._scatter.Scatter[source]

Creates a graph object for trace of the mean of the given series across runs.

Parameters
  • y – (r x t) array with results from r runs and t trials.

  • trace_color – tuple of 3 int values representing an RGB color. Defaults to blue.

  • legend_label – label for this trace.

  • hover_labels – optional text to show on hover; a list whose i-th entry corresponds to the i-th value of the y argument.

Returns

plotly graph object

Return type

go.Scatter

ax.plot.trace.model_transitions_scatter(model_transitions: List[int], y_range: List[float], generator_change_color: Tuple[int] = (141, 211, 199)) → List[plotly.graph_objs._scatter.Scatter][source]

Creates a graph object for the line(s) representing generator changes.

Parameters
  • model_transitions – iterations, before which generators changed

  • y_range – upper and lower values of the y-range of the plot

  • generator_change_color – tuple of 3 int values representing an RGB color. Defaults to teal.

Returns

plotly graph objects for the lines representing generator

changes

Return type

go.Scatter

ax.plot.trace.optimization_times(fit_times: Dict[str, List[float]], gen_times: Dict[str, List[float]], title: str = '') → ax.plot.base.AxPlotConfig[source]

Plots wall times for each method as a bar chart.

Parameters
  • fit_times – A map from method name to a list of the model fitting times.

  • gen_times – A map from method name to a list of the gen times.

  • title – Title for this plot.

Returns: AxPlotConfig with the plot

ax.plot.trace.optimization_trace_all_methods(y_dict: Dict[str, numpy.ndarray], optimum: Optional[float] = None, title: str = '', ylabel: str = '', hover_labels: Optional[List[str]] = None, trace_colors: List[Tuple[int]] = [(128, 177, 211), (251, 128, 114), (141, 211, 199), (188, 128, 189), (190, 186, 218), (253, 180, 98)], optimum_color: Tuple[int] = (253, 180, 98)) → ax.plot.base.AxPlotConfig[source]

Plots a comparison of optimization traces with 2-SEM bands for multiple methods on the same problem.

Parameters
  • y_dict – a mapping of method names to (r x t) arrays, where r is the number of runs in the test, and t is the number of trials.

  • optimum – value of the optimal objective.

  • title – title for this plot.

  • ylabel – label for the Y-axis.

  • hover_labels – optional text to show on hover; a list whose i-th entry corresponds to the i-th value in the y_dict arrays.

  • trace_colors – tuples of 3 int values representing RGB colors to use for different methods shown in the combination plot. Defaults to Ax discrete color scale.

  • optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.

Returns

plot of the comparison of optimization traces with 2-SEM bands

Return type

AxPlotConfig

ax.plot.trace.optimization_trace_single_method(y: numpy.ndarray, optimum: Optional[float] = None, model_transitions: Optional[List[int]] = None, title: str = '', ylabel: str = '', hover_labels: Optional[List[str]] = None, trace_color: Tuple[int] = (128, 177, 211), optimum_color: Tuple[int] = (253, 180, 98), generator_change_color: Tuple[int] = (141, 211, 199)) → ax.plot.base.AxPlotConfig[source]

Plots an optimization trace with mean and 2 SEMs

Parameters
  • y – (r x t) array; result to plot, with r runs and t trials

  • optimum – value of the optimal objective

  • model_transitions – iterations, before which generators changed

  • title – title for this plot.

  • ylabel – label for the Y-axis.

  • hover_labels – optional text to show on hover; a list whose i-th entry corresponds to the i-th value of the y argument.

  • trace_color – tuple of 3 int values representing an RGB color. Defaults to blue.

  • optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.

  • generator_change_color – tuple of 3 int values representing an RGB color. Defaults to teal.

Returns

plot of the optimization trace with 2-SEM bands

Return type

AxPlotConfig
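A minimal sketch with synthetic data; the running maximum is taken along each run so the trace is monotone (assuming a maximization problem):

    import numpy as np

    from ax.plot.trace import optimization_trace_single_method
    from ax.utils.notebook.plotting import render

    # Placeholder results: 5 runs of 30 trials each, objective values in [0, 1].
    objective_values = np.random.rand(5, 30)
    # Best value found so far within each run (still an r x t array).
    best_so_far = np.maximum.accumulate(objective_values, axis=1)

    render(
        optimization_trace_single_method(
            y=best_so_far,
            optimum=1.0,
            title="Best objective found vs. trial",
            ylabel="Objective",
        )
    )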

ax.plot.trace.optimum_objective_scatter(optimum: float, num_iterations: int, optimum_color: Tuple[int] = (253, 180, 98)) → plotly.graph_objs._scatter.Scatter[source]

Creates a graph object for the line representing optimal objective.

Parameters
  • optimum – value of the optimal objective

  • num_iterations – how many trials were in the optimization (used to determine the width of the plot)

  • optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.

Returns

plotly graph objects for the optimal objective line

Return type

go.Scatter

ax.plot.trace.sem_range_scatter(y: numpy.ndarray, trace_color: Tuple[int] = (128, 177, 211), legend_label: str = '') → Tuple[plotly.graph_objs._scatter.Scatter, plotly.graph_objs._scatter.Scatter][source]

Creates a graph object for trace of mean +/- 2 SEMs for y, across runs.

Parameters
  • y – (r x t) array with results from r runs and t trials.

  • trace_color – tuple of 3 int values representing an RGB color. Defaults to blue.

  • legend_label – Label for the legend group.

Returns

plotly graph objects for lower and upper bounds

Return type

Tuple[go.Scatter, go.Scatter]

Plotting Utilities

class ax.plot.color.COLORS[source]

Bases: enum.Enum

An enumeration.

CORAL = (251, 128, 114)
LIGHT_PURPLE = (190, 186, 218)
ORANGE = (253, 180, 98)
PINK = (188, 128, 189)
STEELBLUE = (128, 177, 211)
TEAL = (141, 211, 199)
ax.plot.color.plotly_color_scale(list_of_rgb_tuples: List[Tuple[float]], reverse: bool = False, alpha: float = 1) → List[Tuple[float, str]][source]

Convert a list of RGB tuples to a list of tuples, where each tuple is a break point in [0, 1] paired with a stringified RGBA color.

ax.plot.color.rgba(rgb_tuple: Tuple[float], alpha: float = 1) → str[source]

Convert RGB tuple to an RGBA string.

ax.plot.exp_utils.exp_to_df(exp: ax.core.experiment.Experiment, metrics: Optional[List[ax.core.metric.Metric]] = None, key_components: Optional[List[str]] = None, **kwargs: Any) → pandas.DataFrame[source]

Transforms an experiment to a DataFrame. Only supports Experiment and SimpleExperiment.

Transforms an Experiment into a dataframe with rows keyed by trial_index and arm_name, with all metrics pivoted into a single row per key.

Parameters
  • exp – An Experiment that may have pending trials.

  • metrics – Override list of metrics to return. Return all metrics if None.

  • key_components – fields that combine to make a unique key corresponding to rows, similar to the list of fields passed to a GROUP BY. Defaults to [‘arm_name’, ‘trial_index’].

  • **kwargs – Custom named arguments, useful for passing complex objects from call-site to the fetch_data callback.

Returns

A dataframe of inputs and metrics by trial and arm.

Return type

DataFrame
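A minimal usage sketch; experiment is assumed to be an Experiment with attached data:

    from ax.plot.exp_utils import exp_to_df

    # One row per (trial_index, arm_name), with the arm's inputs and one
    # column per metric.
    df = exp_to_df(exp=experiment)
    print(df.head())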

ax.plot.helper.arm_name_to_tuple(arm_name: str) → Union[Tuple[int, int], Tuple[int]][source]
ax.plot.helper.axis_range(grid: List[float], is_log: bool) → List[float][source]
ax.plot.helper.build_filter_trial(keep_trial_indices: List[int]) → Callable[[ax.core.observation.Observation], bool][source]

Creates a callable that filters observations based on trial_index

ax.plot.helper.contour_config_to_trace(config)[source]
ax.plot.helper.get_fixed_values(model: ax.modelbridge.base.ModelBridge, slice_values: Optional[Dict[str, Any]] = None, trial_index: Optional[int] = None) → Dict[str, Union[str, bool, float, int, None]][source]

Get fixed values for parameters in a slice plot.

If there is an in-design status quo, those values will be used. Otherwise, the mean of RangeParameters or the mode of ChoiceParameters is used.

Any value in slice_values will override the above.

Parameters
  • model – ModelBridge being used for plotting

  • slice_values – Map from parameter name to the value at which it should be fixed.

Returns: Map from parameter name to fixed value.

ax.plot.helper.get_grid_for_parameter(parameter: ax.core.parameter.RangeParameter, density: int) → numpy.ndarray[source]

Get a grid of points along the range of the parameter.

Will be a log-scale grid if parameter is log scale.

Parameters
  • parameter – Parameter for which to generate grid.

  • density – Number of points in the grid.

ax.plot.helper.get_plot_data(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Dict[str, ax.core.generator_run.GeneratorRun], metric_names: Optional[Set[str]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → Tuple[ax.plot.base.PlotData, List[Dict[str, Union[str, float]]], Dict[str, Dict[str, Union[str, bool, float, int, None]]]][source]

Format data object with metrics for in-sample and out-of-sample arms.

Calculate both observed and predicted metrics for in-sample arms. Calculate predicted metrics for out-of-sample arms passed via the generator_runs_dict argument.

In PlotData, in-sample observations are merged using inverse-variance weighting (IVW). In RawData, they are left un-merged and given as a list of dictionaries, one for each observation, with keys ‘arm_name’, ‘mean’, and ‘sem’.

Parameters
  • model – The model.

  • generator_runs_dict – a mapping from generator run name to generator run.

  • metric_names – Restrict predictions to this set. If None, all metrics in the model will be returned.

  • fixed_features – Fixed features to use when making model predictions.

  • data_selector – Function for selecting observations for plotting.

Returns

A tuple containing

  • PlotData object with in-sample and out-of-sample predictions.

  • List of observations like:

    {'metric_name': 'likes', 'arm_name': '0_1', 'mean': 1., 'sem': 0.1}.
    
  • Mapping from arm name to parameters.

ax.plot.helper.get_range_parameter(model: ax.modelbridge.base.ModelBridge, param_name: str) → ax.core.parameter.RangeParameter[source]

Get the range parameter with the given name from the model.

Throws if parameter doesn’t exist or is not a range parameter.

Parameters
  • model – The model.

  • param_name – The name of the RangeParameter to be found.

Returns: The RangeParameter named param_name.

ax.plot.helper.get_range_parameters(model: ax.modelbridge.base.ModelBridge) → List[ax.core.parameter.RangeParameter][source]

Get a list of range parameters from a model.

Parameters

model – The model.

Returns: List of RangeParameters.

ax.plot.helper.infer_is_relative(model: ax.modelbridge.base.ModelBridge, metrics: List[str], non_constraint_rel: bool) → Dict[str, bool][source]

Determine whether or not to relativize a metric.

Metrics that are constraints will get this decision from their relative flag. Other metrics will use non_constraint_rel.

Parameters
  • model – model fit on metrics.

  • metrics – list of metric names.

  • non_constraint_rel – whether or not to relativize non-constraint metrics

Returns

Dict[str, bool] containing whether or not to relativize each input metric.

ax.plot.helper.relativize(m_t: float, sem_t: float, m_c: float, sem_c: float) → List[float][source]
ax.plot.helper.relativize_data(f: List[float], sd: List[float], rel: bool, arm_data: Dict[Any, Any], metric: str) → List[List[float]][source]
ax.plot.helper.resize_subtitles(figure: Dict[str, Any], size: int)[source]
ax.plot.helper.rgb(arr: List[int]) → str[source]
ax.plot.helper.slice_config_to_trace(arm_data, arm_name_to_parameters, f, fit_data, grid, metric, param, rel, setx, sd, is_log, visible)[source]