ax.plot¶
Rendering¶
Plots¶
Base¶
-
class
ax.plot.base.
AxPlotConfig
(data: Dict[str, Any], plot_type: enum.Enum)[source]¶ Bases:
ax.plot.base._AxPlotConfigBase
Config for plots
-
class
ax.plot.base.
AxPlotTypes
(value)[source]¶ Bases:
enum.Enum
Enum of Ax plot types.
-
BANDIT_ROLLOUT
= 4¶
-
CONTOUR
= 0¶
-
GENERIC
= 1¶
-
HTML
= 6¶
-
INTERACT_CONTOUR
= 3¶
-
INTERACT_SLICE
= 5¶
-
SLICE
= 2¶
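The member list above is alphabetized, which obscures the numeric order. As a standalone sketch (mirroring the documented values, not importing the installed class), the same enum in value order:

```python
from enum import Enum

class AxPlotTypes(Enum):
    # Values mirror the members documented above, in numeric order.
    CONTOUR = 0
    GENERIC = 1
    SLICE = 2
    INTERACT_CONTOUR = 3
    BANDIT_ROLLOUT = 4
    INTERACT_SLICE = 5
    HTML = 6

# Members can be looked up by name or by value.
assert AxPlotTypes["CONTOUR"] is AxPlotTypes(0)
assert AxPlotTypes.SLICE.value == 2
```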
-
class
ax.plot.base.
PlotData
(metrics: List[str], in_sample: Dict[str, ax.plot.base.PlotInSampleArm], out_of_sample: Optional[Dict[str, Dict[str, ax.plot.base.PlotOutOfSampleArm]]], status_quo_name: Optional[str])[source]¶ Bases:
tuple
Struct for plot data, including both in-sample and out-of-sample arms
-
property
in_sample
¶ Alias for field number 1
-
property
metrics
¶ Alias for field number 0
-
property
out_of_sample
¶ Alias for field number 2
-
property
status_quo_name
¶ Alias for field number 3
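The "Alias for field number N" notes above mean PlotData is a NamedTuple whose properties alias positional fields. A self-contained sketch of that layout (field names and order taken from the docs above; value types simplified to plain objects):

```python
from typing import Dict, List, NamedTuple, Optional

class PlotData(NamedTuple):
    # Field order matches the alias numbers: metrics=0, in_sample=1,
    # out_of_sample=2, status_quo_name=3.
    metrics: List[str]
    in_sample: Dict[str, object]  # name -> PlotInSampleArm (simplified here)
    out_of_sample: Optional[Dict[str, Dict[str, object]]]
    status_quo_name: Optional[str]

data = PlotData(metrics=["ctr"], in_sample={}, out_of_sample=None,
                status_quo_name="status_quo")
# Each property is just an alias for the positional field.
assert data[0] == data.metrics
assert data[3] == data.status_quo_name
```

The same pattern explains the alias numbering on PlotInSampleArm, PlotMetric, PlotOutOfSampleArm, and ParetoFrontierResults below.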
-
class
ax.plot.base.
PlotInSampleArm
(name: str, parameters: Dict[str, Optional[Union[str, bool, float, int]]], y: Dict[str, float], y_hat: Dict[str, float], se: Dict[str, float], se_hat: Dict[str, float], context_stratum: Optional[Dict[str, Union[str, float]]])[source]¶ Bases:
tuple
Struct for in-sample arms (both observed and predicted data)
-
property
context_stratum
¶ Alias for field number 6
-
property
name
¶ Alias for field number 0
-
property
parameters
¶ Alias for field number 1
-
property
se
¶ Alias for field number 4
-
property
se_hat
¶ Alias for field number 5
-
property
y
¶ Alias for field number 2
-
property
y_hat
¶ Alias for field number 3
-
class
ax.plot.base.
PlotMetric
(metric: str, pred: bool, rel: bool)[source]¶ Bases:
tuple
Struct for metric
-
property
metric
¶ Alias for field number 0
-
property
pred
¶ Alias for field number 1
-
property
rel
¶ Alias for field number 2
-
class
ax.plot.base.
PlotOutOfSampleArm
(name: str, parameters: Dict[str, Optional[Union[str, bool, float, int]]], y_hat: Dict[str, float], se_hat: Dict[str, float], context_stratum: Optional[Dict[str, Union[str, float]]])[source]¶ Bases:
tuple
Struct for out-of-sample arms (only predicted data)
-
property
context_stratum
¶ Alias for field number 4
-
property
name
¶ Alias for field number 0
-
property
parameters
¶ Alias for field number 1
-
property
se_hat
¶ Alias for field number 3
-
property
y_hat
¶ Alias for field number 2
Bandit Rollout¶
-
ax.plot.bandit_rollout.
plot_bandit_rollout
(experiment: ax.core.experiment.Experiment) → ax.plot.base.AxPlotConfig[source]¶ Plot bandit rollout from an experiment.
Contour Plot¶
-
ax.plot.contour.
interact_contour
(model: ax.modelbridge.base.ModelBridge, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, lower_is_better: bool = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]¶ Create interactive plot with predictions for a 2-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters.
lower_is_better – Lower values for metric are better.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
interactive plot of objective vs. parameters
- Return type
AxPlotConfig
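Per the slice_values description above, when no values are given and there is no status quo, the remaining parameters fall back to the mean of numeric parameters or the mode of choice parameters. A pure-Python sketch of that fallback rule (the helper name and its input shape are hypothetical, not part of ax):

```python
from statistics import mean, mode

def default_slice_values(observed):
    """Pick a fixed value per parameter: mean if numeric, mode otherwise.

    `observed` maps parameter name -> list of observed values (a hypothetical
    input shape, used only to illustrate the documented fallback).
    """
    defaults = {}
    for name, values in observed.items():
        if all(isinstance(v, (int, float)) and not isinstance(v, bool)
               for v in values):
            defaults[name] = mean(values)
        else:
            defaults[name] = mode(values)
    return defaults

assert default_slice_values(
    {"lr": [0.25, 0.5, 0.75], "opt": ["adam", "adam", "sgd"]}
) == {"lr": 0.5, "opt": "adam"}
```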
-
ax.plot.contour.
interact_contour_plotly
(model: ax.modelbridge.base.ModelBridge, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, lower_is_better: bool = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → plotly.graph_objs._figure.Figure[source]¶ Create interactive plot with predictions for a 2-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters.
lower_is_better – Lower values for metric are better.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
interactive plot of objective vs. parameters
- Return type
go.Figure
-
ax.plot.contour.
plot_contour
(model: ax.modelbridge.base.ModelBridge, param_x: str, param_y: str, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, lower_is_better: bool = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]¶ Plot predictions for a 2-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
param_x – Name of parameter that will be sliced on x-axis
param_y – Name of parameter that will be sliced on y-axis
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters.
lower_is_better – Lower values for metric are better.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
contour plot of objective vs. parameter values
- Return type
AxPlotConfig
-
ax.plot.contour.
plot_contour_plotly
(model: ax.modelbridge.base.ModelBridge, param_x: str, param_y: str, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, lower_is_better: bool = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → plotly.graph_objs._figure.Figure[source]¶ Plot predictions for a 2-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
param_x – Name of parameter that will be sliced on x-axis
param_y – Name of parameter that will be sliced on y-axis
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters.
lower_is_better – Lower values for metric are better.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
contour plot of objective vs. parameter values
- Return type
go.Figure
Feature Importances¶
-
ax.plot.feature_importances.
plot_feature_importance
(df: pandas.DataFrame, title: str) → ax.plot.base.AxPlotConfig[source]¶ Wrapper method to convert plot_feature_importance_plotly to AxPlotConfig
-
ax.plot.feature_importances.
plot_feature_importance_by_feature
(model: ax.modelbridge.base.ModelBridge, relative: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Wrapper method to convert plot_feature_importance_by_feature_plotly to AxPlotConfig
-
ax.plot.feature_importances.
plot_feature_importance_by_feature_plotly
(model: ax.modelbridge.base.ModelBridge, relative: bool = True) → plotly.graph_objs._figure.Figure[source]¶ One plot per metric, showing importances by feature.
-
ax.plot.feature_importances.
plot_feature_importance_by_metric
(model: ax.modelbridge.base.ModelBridge) → ax.plot.base.AxPlotConfig[source]¶ Wrapper method to convert plot_feature_importance_by_metric_plotly to AxPlotConfig
-
ax.plot.feature_importances.
plot_feature_importance_by_metric_plotly
(model: ax.modelbridge.base.ModelBridge) → plotly.graph_objs._figure.Figure[source]¶ One plot per feature, showing importances by metric.
-
ax.plot.feature_importances.
plot_feature_importance_plotly
(df: pandas.DataFrame, title: str) → plotly.graph_objs._figure.Figure[source]¶
-
ax.plot.feature_importances.
plot_relative_feature_importance
(model: ax.modelbridge.base.ModelBridge) → ax.plot.base.AxPlotConfig[source]¶ Wrapper method to convert plot_relative_feature_importance_plotly to AxPlotConfig
-
ax.plot.feature_importances.
plot_relative_feature_importance_plotly
(model: ax.modelbridge.base.ModelBridge) → plotly.graph_objs._figure.Figure[source]¶ Create a stacked bar chart of feature importances per metric
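The relative variant above stacks per-metric importances; a common convention (assumed here, not ax's exact code) is to normalize each metric's importances so they sum to one before stacking:

```python
def relativize_importances(importances):
    """Normalize feature importances per metric so each set sums to 1.

    `importances` maps metric -> {feature: raw importance}; the shape is an
    assumption about what a stacked relative-importance chart needs.
    """
    out = {}
    for metric, by_feature in importances.items():
        total = sum(by_feature.values())
        out[metric] = {f: v / total for f, v in by_feature.items()}
    return out

rel = relativize_importances({"ctr": {"lr": 3.0, "depth": 1.0}})
assert rel["ctr"] == {"lr": 0.75, "depth": 0.25}
```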
Marginal Effects¶
-
ax.plot.marginal_effects.
plot_marginal_effects
(model: ax.modelbridge.base.ModelBridge, metric: str) → ax.plot.base.AxPlotConfig[source]¶ Calculates and plots the marginal effects – the effect of changing one factor away from the randomized distribution of the experiment and fixing it at a particular level.
- Parameters
model – Model to use for estimating effects
metric – The metric for which to plot marginal effects.
- Returns
AxPlotConfig of the marginal effects
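In simplified form, the marginal effect described above is how the outcome at one factor level differs from the overall average. A toy raw-data sketch of that quantity (ax's actual estimator is model-based; this is only illustrative):

```python
from statistics import mean

def marginal_effects(rows):
    """Effect of fixing one factor at a level: level mean minus grand mean.

    `rows` is a list of (level, outcome) pairs; a simplified stand-in for
    the model-based estimate the real function computes.
    """
    grand = mean(y for _, y in rows)
    by_level = {}
    for level, y in rows:
        by_level.setdefault(level, []).append(y)
    return {level: mean(ys) - grand for level, ys in by_level.items()}

effects = marginal_effects([("a", 1.0), ("a", 3.0), ("b", 5.0), ("b", 7.0)])
assert effects == {"a": -2.0, "b": 2.0}  # grand mean is 4.0
```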
Model Diagnostics¶
-
ax.plot.diagnostic.
interact_batch_comparison
(observations: List[ax.core.observation.Observation], experiment: ax.core.experiment.Experiment, batch_x: int, batch_y: int, rel: bool = False, status_quo_name: Optional[str] = None) → ax.plot.base.AxPlotConfig[source]¶ Compare repeated arms from two trials; select metric via dropdown.
- Parameters
observations – List of observations to compute comparison.
batch_x – Index of batch for x-axis.
batch_y – Index of batch for y-axis.
rel – Whether to relativize data against status_quo arm.
status_quo_name – Name of the status_quo arm.
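The batch comparison pairs arms that were repeated in both trials. A minimal sketch of that pairing step (the input shape is an assumption; the real code works from Observation objects):

```python
def paired_arms(batch_x, batch_y):
    """Arms present in both batches, with (x, y) values for a scatter.

    Each batch maps arm name -> observed metric value (assumed shape).
    Only arms repeated in both batches can be compared.
    """
    common = batch_x.keys() & batch_y.keys()
    return {name: (batch_x[name], batch_y[name]) for name in sorted(common)}

pairs = paired_arms({"0_0": 1.2, "0_1": 3.4}, {"0_1": 3.1, "0_2": 5.0})
assert pairs == {"0_1": (3.4, 3.1)}  # only "0_1" appears in both
```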
-
ax.plot.diagnostic.
interact_cross_validation
(cv_results: List[ax.modelbridge.cross_validation.CVResult], show_context: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Interactive cross-validation (CV) plotting; select metric via dropdown.
Note: uses the Plotly version of dropdown (which means that all data is stored within the notebook).
- Parameters
cv_results – cross-validation results.
show_context – if True, show context on hover.
Returns an AxPlotConfig
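Cross-validation plots compare held-out observed values against model predictions. As a toy illustration of the observed-vs-predicted pairs a CVResult list carries, here is leave-one-out CV with a plain mean predictor (not ax's model):

```python
from statistics import mean

def loo_cv_pairs(values):
    """Leave-one-out CV with a mean predictor: (observed, predicted) pairs.

    Each value is held out in turn and "predicted" as the mean of the rest;
    a stand-in for the model refits behind real CVResult objects.
    """
    pairs = []
    for i, y in enumerate(values):
        rest = values[:i] + values[i + 1:]
        pairs.append((y, mean(rest)))
    return pairs

print(loo_cv_pairs([1.0, 2.0, 3.0]))  # [(1.0, 2.5), (2.0, 2.0), (3.0, 1.5)]
```

Points near the diagonal of an observed-vs-predicted scatter indicate a well-calibrated model; the interactive plot draws exactly that scatter per metric.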
-
ax.plot.diagnostic.
interact_cross_validation_plotly
(cv_results: List[ax.modelbridge.cross_validation.CVResult], show_context: bool = True) → plotly.graph_objs._figure.Figure[source]¶ Interactive cross-validation (CV) plotting; select metric via dropdown.
Note: uses the Plotly version of dropdown (which means that all data is stored within the notebook).
- Parameters
cv_results – cross-validation results.
show_context – if True, show context on hover.
Returns a plotly.graph_objects.Figure
-
ax.plot.diagnostic.
interact_empirical_model_validation
(batch: ax.core.batch_trial.BatchTrial, data: ax.core.data.Data) → ax.plot.base.AxPlotConfig[source]¶ Compare the model predictions for the batch arms against observed data.
Relies on the model predictions stored on the generator_runs of batch.
- Parameters
batch – Batch on which to perform analysis.
data – Observed data for the batch.
- Returns
AxPlotConfig for the plot.
-
ax.plot.diagnostic.
tile_cross_validation
(cv_results: List[ax.modelbridge.cross_validation.CVResult], show_arm_details_on_hover: bool = True, show_context: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Tile version of CV plots; sorted by ‘best fitting’ outcomes.
Plots are sorted in decreasing order using the p-value of a Fisher exact test statistic.
- Parameters
cv_results – cross-validation results.
show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is True.
show_context – if True (default), display context on hover.
Returns an AxPlotConfig
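The tiles are ordered by decreasing p-value of a Fisher exact test statistic, so the best-fitting outcomes come first. A sketch of just that ordering step (computing the test itself is out of scope here; scipy.stats.fisher_exact would be one way to obtain the p-values):

```python
def order_metrics_by_fit(p_values):
    """Sort metric names by decreasing Fisher-exact p-value.

    `p_values` maps metric name -> p-value (assumed precomputed). Higher
    p-value = less evidence of misfit, so it sorts first.
    """
    return sorted(p_values, key=p_values.get, reverse=True)

assert order_metrics_by_fit({"a": 0.03, "b": 0.9, "c": 0.4}) == ["b", "c", "a"]
```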
Pareto Plots¶
-
ax.plot.pareto_frontier.
interact_multiple_pareto_frontier
(frontier_lists: Dict[str, List[ax.plot.pareto_utils.ParetoFrontierResults]], CI_level: float = 0.9, show_parameterization_on_hover: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Plot Pareto frontiers from a dictionary of lists of ParetoFrontierResults objects to compare.
- Parameters
frontier_lists (Dict[str, List[ParetoFrontierResults]]) – A dictionary of multiple lists of Pareto frontier computation results to plot for comparison. Each list of ParetoFrontierResults contains results for the same Pareto frontier under different pairs of metrics. Different List[ParetoFrontierResults] must contain the same pairs of metrics for this function to work.
CI_level (float, optional) – The confidence level, e.g. 0.95 (95%)
show_parameterization_on_hover (bool, optional) – If True, show the parameterization of the points on the frontier on hover.
- Returns
The resulting Plotly plot definition.
- Return type
AxPlotConfig
-
ax.plot.pareto_frontier.
interact_pareto_frontier
(frontier_list: List[ax.plot.pareto_utils.ParetoFrontierResults], CI_level: float = 0.9, show_parameterization_on_hover: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Plot a Pareto frontier from a list of ParetoFrontierResults objects.
-
ax.plot.pareto_frontier.
plot_multiple_pareto_frontiers
(frontiers: Dict[str, ax.plot.pareto_utils.ParetoFrontierResults], CI_level: float = 0.9, show_parameterization_on_hover: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Plot multiple Pareto frontiers from a dictionary of ParetoFrontierResults objects.
- Parameters
frontiers (Dict[str, ParetoFrontierResults]) – The results of the Pareto frontier computation.
CI_level (float, optional) – The confidence level, e.g. 0.95 (95%)
show_parameterization_on_hover (bool, optional) – If True, show the parameterization of the points on the frontier on hover.
- Returns
The resulting Plotly plot definition.
- Return type
AxPlotConfig
-
ax.plot.pareto_frontier.
plot_pareto_frontier
(frontier: ax.plot.pareto_utils.ParetoFrontierResults, CI_level: float = 0.9, show_parameterization_on_hover: bool = True) → ax.plot.base.AxPlotConfig[source]¶ Plot a Pareto frontier from a ParetoFrontierResults object.
- Parameters
frontier (ParetoFrontierResults) – The results of the Pareto frontier computation.
CI_level (float, optional) – The confidence level, e.g. 0.95 (95%)
show_parameterization_on_hover (bool, optional) – If True, show the parameterization of the points on the frontier on hover.
- Returns
The resulting Plotly plot definition.
- Return type
AxPlotConfig
-
ax.plot.pareto_frontier.
scatter_plot_with_pareto_frontier
(Y: numpy.ndarray, Y_pareto: numpy.ndarray, metric_x: str, metric_y: str, reference_point: Tuple[float, float], minimize: bool = True) → ax.plot.base.AxPlotConfig[source]¶
-
ax.plot.pareto_frontier.
scatter_plot_with_pareto_frontier_plotly
(Y: numpy.ndarray, Y_pareto: numpy.ndarray, metric_x: str, metric_y: str, reference_point: Tuple[float, float], minimize: bool = True) → plotly.graph_objs._figure.Figure[source]¶ Plots a scatter of all points in Y for metric_x and metric_y, with a reference point and the Pareto frontier from Y_pareto.
Points in the scatter are colored in a gradient representing their trial index, with metric_x on the x-axis and metric_y on the y-axis. The reference point is represented as a star and the Pareto frontier as a line. The frontier connects to the reference point via projection lines.
NOTE: Both metrics should have the same minimization setting, passed as minimize.
- Parameters
Y – Array of outcomes, of which the first two will be plotted.
Y_pareto – Array of Pareto-optimal points, first two outcomes in which will be plotted.
metric_x – Name of first outcome in Y.
metric_y – Name of second outcome in Y.
reference_point – Reference point for metric_x and metric_y.
minimize – Whether the two metrics in the plot are being minimized or maximized.
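The Pareto frontier in a two-metric plot is the set of non-dominated points. A plain-Python sketch of that computation under the shared minimize setting noted above (the ax function operates on numpy arrays instead):

```python
def pareto_front_2d(points, minimize=True):
    """Non-dominated points of a 2-d point set.

    Both metrics share one `minimize` setting, matching the NOTE above.
    Sorting on the first coordinate lets a single sweep keep each point
    that strictly improves the best second coordinate seen so far.
    """
    sign = 1 if minimize else -1
    scaled = sorted((sign * x, sign * y) for x, y in points)
    front, best_y = [], float("inf")
    for x, y in scaled:
        if y < best_y:  # strictly better on y than all points with smaller x
            front.append((sign * x, sign * y))
            best_y = y
    return front

front = pareto_front_2d([(1, 4), (2, 2), (3, 3), (4, 1)])
assert front == [(1, 4), (2, 2), (4, 1)]  # (3, 3) is dominated by (2, 2)
```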
-
class
ax.plot.pareto_utils.
ParetoFrontierResults
(param_dicts: List[Dict[str, Optional[Union[str, bool, float, int]]]], means: Dict[str, List[float]], sems: Dict[str, List[float]], primary_metric: str, secondary_metric: str, absolute_metrics: List[str], objective_thresholds: Optional[Dict[str, float]], arm_names: Optional[List[Optional[str]]])[source]¶ Bases:
tuple
Container for results from Pareto frontier computation.
Fields are:
- param_dicts: The parameter dicts of the points generated on the Pareto frontier.
- means: The posterior mean predictions of the model for each metric (same order as the param dicts). These must be given as a percent change relative to the status quo for any metric not listed in absolute_metrics.
- sems: The posterior SEM predictions of the model for each metric (same order as the param dicts). Also must be relativized w.r.t. the status quo for any metric not listed in absolute_metrics.
- primary_metric: The name of the primary metric.
- secondary_metric: The name of the secondary metric.
- absolute_metrics: List of outcome metrics that are NOT relativized w.r.t. the status quo. All other metrics are assumed to be given as a % relative to status_quo.
- objective_thresholds: Threshold for each objective. Must be on the same scale as means: if means is relativized it should be the relative value, otherwise absolute.
- arm_names: Optional list of arm names for each parameterization.
-
property
absolute_metrics
¶ Alias for field number 5
-
property
arm_names
¶ Alias for field number 7
-
property
means
¶ Alias for field number 1
-
property
objective_thresholds
¶ Alias for field number 6
-
property
param_dicts
¶ Alias for field number 0
-
property
primary_metric
¶ Alias for field number 3
-
property
secondary_metric
¶ Alias for field number 4
-
property
sems
¶ Alias for field number 2
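Means and sems for non-absolute metrics are stored as a percent change relative to the status quo. A sketch of that relativization under the standard percent-change formula (an assumption; ax's internal relativization also adjusts the SEMs):

```python
def relativize(mean_value, status_quo_mean):
    """Percent change of a metric mean vs the status quo mean.

    This is the convention for any metric not listed in `absolute_metrics`;
    the formula is the standard percent change, assumed rather than taken
    from ax's source.
    """
    return 100.0 * (mean_value - status_quo_mean) / abs(status_quo_mean)

assert relativize(1.5, 1.0) == 50.0    # 50% above status quo
assert relativize(0.75, 1.0) == -25.0  # 25% below status quo
```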
-
ax.plot.pareto_utils.
compute_posterior_pareto_frontier
(experiment: ax.core.experiment.Experiment, primary_objective: ax.core.metric.Metric, secondary_objective: ax.core.metric.Metric, data: Optional[ax.core.data.Data] = None, outcome_constraints: Optional[List[ax.core.outcome_constraint.OutcomeConstraint]] = None, absolute_metrics: Optional[List[str]] = None, num_points: int = 10, trial_index: Optional[int] = None, chebyshev: bool = True) → ax.plot.pareto_utils.ParetoFrontierResults[source]¶ Compute the Pareto frontier between two objectives. For experiments with batch trials, a trial index or data object must be provided.
This is done by fitting a GP and finding the pareto front according to the GP posterior mean.
- Parameters
experiment – The experiment to compute a pareto frontier for.
primary_objective – The primary objective to optimize.
secondary_objective – The secondary objective against which to trade off the primary objective.
outcome_constraints – Outcome constraints to be respected by the optimization. Can only contain constraints on metrics that are not primary or secondary objectives.
absolute_metrics – List of outcome metrics that should NOT be relativized w.r.t. the status quo (all other outcomes will be in % relative to status_quo).
num_points – The number of points to compute on the Pareto frontier.
chebyshev – Whether to use augmented_chebyshev_scalarization when computing Pareto Frontier points.
- Returns
A NamedTuple with fields listed in its definition.
- Return type
ParetoFrontierResults
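With chebyshev=True, frontier points come from augmented Chebyshev scalarization. Its standard form (the textbook definition for minimization, assumed here rather than read from ax's internals) combines a worst-case term with a small weighted sum:

```python
def augmented_chebyshev(y, weights, alpha=0.05):
    """Standard augmented Chebyshev scalarization (minimization form):

        s(y) = max_i w_i * y_i + alpha * sum_i w_i * y_i

    Minimizing s for different weight vectors traces different points on
    the Pareto frontier; the alpha term breaks ties toward proper Pareto
    optimality.
    """
    weighted = [w * v for w, v in zip(weights, y)]
    return max(weighted) + alpha * sum(weighted)

val = augmented_chebyshev([2.0, 4.0], [0.5, 0.5], alpha=0.25)
assert val == 2.75  # max(1.0, 2.0) + 0.25 * (1.0 + 2.0)
```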
-
ax.plot.pareto_utils.
get_observed_pareto_frontiers
(experiment: ax.core.experiment.Experiment, data: Optional[ax.core.data.Data] = None, rel: bool = True, arm_names: Optional[List[str]] = None) → List[ax.plot.pareto_utils.ParetoFrontierResults][source]¶ Find all Pareto points from an experiment.
Uses only values as observed in the data; no modeling is involved. Makes no assumption about the search space or types of parameters. If “data” is provided will use that, otherwise will use all data attached to the experiment.
Uses all arms present in data; does not filter according to the experiment search space. If arm_names is specified, will filter to just those arms whose names are given in the list.
Assumes experiment has a multiobjective optimization config from which the objectives and outcome constraints will be extracted.
Will generate a ParetoFrontierResults for every pair of metrics in the experiment’s multiobjective optimization config.
- Parameters
experiment – The experiment.
data – Data to use for computing Pareto frontier. If not provided, will fetch data from experiment.
rel – Relativize, if status quo on experiment.
arm_names – If provided, computes Pareto frontier only from among the provided list of arm names.
Returns: ParetoFrontierResults that can be used with interact_pareto_frontier.
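One ParetoFrontierResults is produced per pair of metrics in the multi-objective config; the pairing itself is just the 2-combinations of the objective names:

```python
from itertools import combinations

def metric_pairs(objective_names):
    """Every unordered pair of objectives, one ParetoFrontierResults each.

    Only the pairing step is shown; the frontier for each pair is computed
    from the observed data as described above.
    """
    return list(combinations(objective_names, 2))

assert metric_pairs(["m1", "m2", "m3"]) == [
    ("m1", "m2"), ("m1", "m3"), ("m2", "m3"),
]
```

For n objectives this yields n * (n - 1) / 2 frontier results, which is why the function returns a list.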
-
ax.plot.pareto_utils.
get_tensor_converter_model
(experiment: ax.core.experiment.Experiment, data: ax.core.data.Data) → ax.modelbridge.torch.TorchModelBridge[source]¶ Constructs a minimal model for converting things to tensors.
Model fitting will instantiate all of the transforms but will not do any expensive (i.e. GP) fitting beyond that. The model will raise an error if it is used for predicting or generating.
Will work for any search space regardless of types of parameters.
- Parameters
experiment – Experiment.
data – Data for fitting the model.
Returns: A torch modelbridge with transforms set.
Scatter Plots¶
-
ax.plot.scatter.
interact_fitted
(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, show_arm_details_on_hover: bool = True, show_CI: bool = True, arm_noun: str = 'arm', metrics: Optional[List[str]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]¶ Interactive fitted outcome plots for each arm used in fitting the model.
Choose the outcome to plot using a dropdown.
- Parameters
model – model to use for predictions.
generator_runs_dict – a mapping from generator run name to generator run.
rel – if True, use relative effects. Default is True.
show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is True.
show_CI – if True, render confidence intervals.
arm_noun – noun to use instead of “arm” (e.g. group)
metrics – List of metric names to restrict to when plotting.
fixed_features – Fixed features to use when making model predictions.
data_selector – Function for selecting observations for plotting.
-
ax.plot.scatter.
lattice_multiple_metrics
(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, show_arm_details_on_hover: bool = False, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]¶ Plot raw values or predictions of combinations of two metrics for arms.
- Parameters
model – model to draw predictions from.
generator_runs_dict – a mapping from generator run name to generator run.
rel – if True, use relative effects. Default is True.
show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is False.
data_selector – Function for selecting observations for plotting.
-
ax.plot.scatter.
plot_fitted
(model: ax.modelbridge.base.ModelBridge, metric: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, custom_arm_order: Optional[List[str]] = None, custom_arm_order_name: str = 'Custom', show_CI: bool = True, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]¶ Plot fitted metrics.
- Parameters
model – model to use for predictions.
metric – metric to plot predictions for.
generator_runs_dict – a mapping from generator run name to generator run.
rel – if True, use relative effects. Default is True.
custom_arm_order – a list of arm names in the order corresponding to how they should be plotted on the x-axis. If not None, this is the default ordering.
custom_arm_order_name – name for custom ordering to show in the ordering dropdown. Default is ‘Custom’.
show_CI – if True, render confidence intervals.
data_selector – Function for selecting observations for plotting.
-
ax.plot.scatter.
plot_multiple_metrics
(model: ax.modelbridge.base.ModelBridge, metric_x: str, metric_y: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel_x: bool = True, rel_y: bool = True, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None, **kwargs: Any) → ax.plot.base.AxPlotConfig[source]¶ Plot raw values or predictions of two metrics for arms.
All arms used in the model are included in the plot. Additional arms can be passed through the generator_runs_dict argument.
- Parameters
model – model to draw predictions from.
metric_x – metric to plot on the x-axis.
metric_y – metric to plot on the y-axis.
generator_runs_dict – a mapping from generator run name to generator run.
rel_x – if True, use relative effects on metric_x.
rel_y – if True, use relative effects on metric_y.
data_selector – Function for selecting observations for plotting.
-
ax.plot.scatter.
plot_objective_vs_constraints
(model: ax.modelbridge.base.ModelBridge, objective: str, subset_metrics: Optional[List[str]] = None, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, infer_relative_constraints: Optional[bool] = False, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]¶ Plot the tradeoff between an objective and all other metrics in a model.
All arms used in the model are included in the plot. Additional arms can be passed through via the generator_runs_dict argument.
Fixed features input can be used to override fields of the in-sample arms when making model predictions.
- Parameters
model – model to draw predictions from.
objective – metric to optimize. Plotted on the x-axis.
subset_metrics – list of metrics to plot on the y-axes if need a subset of all metrics in the model.
generator_runs_dict – a mapping from generator run name to generator run.
rel – if True, use relative effects. Default is True.
infer_relative_constraints – if True, read relative spec from model’s optimization config. Absolute constraints will not be relativized; relative ones will be. Objectives will respect the rel parameter. Metrics that are not constraints will be relativized.
fixed_features – Fixed features to use when making model predictions.
data_selector – Function for selecting observations for plotting.
-
ax.plot.scatter.
tile_fitted
(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, rel: bool = True, show_arm_details_on_hover: bool = False, show_CI: bool = True, arm_noun: str = 'arm', metrics: Optional[List[str]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → ax.plot.base.AxPlotConfig[source]¶ Tile version of fitted outcome plots.
- Parameters
model – model to use for predictions.
generator_runs_dict – a mapping from generator run name to generator run.
rel – if True, use relative effects. Default is True.
show_arm_details_on_hover – if True, display parameterizations of arms on hover. Default is False.
show_CI – if True, render confidence intervals.
arm_noun – noun to use instead of “arm” (e.g. group)
metrics – List of metric names to restrict to when plotting.
fixed_features – Fixed features to use when making model predictions.
data_selector – Function for selecting observations for plotting.
-
ax.plot.scatter.
tile_observations
(experiment: ax.core.experiment.Experiment, data: Optional[ax.core.data.Data] = None, rel: bool = True, metrics: Optional[List[str]] = None, arm_names: Optional[List[str]] = None) → ax.plot.base.AxPlotConfig[source]¶ Tiled plot with all observed outcomes.
Will plot all observed arms. If data is provided will use that, otherwise will fetch data from experiment. Will plot all metrics in data unless a list is provided in metrics. If arm_names is provided will limit the plot to only arms in that list.
- Parameters
experiment – Experiment
data – Data to use, otherwise will fetch data from experiment.
rel – Plot relative values, if experiment has status quo.
metrics – Limit results to this set of metrics.
arm_names – Limit results to this set of arms.
Returns: Plot config for the plot.
Slice Plot¶
-
ax.plot.slice.
interact_slice
(model: ax.modelbridge.base.ModelBridge, param_name: str, metric_name: str = '', generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]¶ Create interactive plot with predictions for a 1-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
param_name – Name of parameter that will be sliced
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters. Ignored if fixed_features is specified.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
interactive plot of objective vs. parameter
- Return type
AxPlotConfig
-
ax.plot.slice.
interact_slice_plotly
(model: ax.modelbridge.base.ModelBridge, param_name: str, metric_name: str = '', generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → plotly.graph_objs._figure.Figure[source]¶ Create interactive plot with predictions for a 1-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
param_name – Name of parameter that will be sliced
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters. Ignored if fixed_features is specified.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
interactive plot of objective vs. parameter
- Return type
go.Figure
-
ax.plot.slice.
plot_slice
(model: ax.modelbridge.base.ModelBridge, param_name: str, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → ax.plot.base.AxPlotConfig[source]¶ Plot predictions for a 1-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
param_name – Name of parameter that will be sliced
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters. Ignored if fixed_features is specified.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
plot of objective vs. parameter value
- Return type
AxPlotConfig
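All of the slice plots in this section follow the same recipe: hold every parameter except param_name at a fixed value, evaluate model predictions along a 1-d grid, and plot the mean with a 2-SEM band. A minimal numpy sketch of that recipe, using a stand-in predict function (hypothetical, not the Ax ModelBridge API):

```python
import numpy as np

def slice_predictions(predict, fixed_values, param_name, lower, upper, density=50):
    """Evaluate `predict` along a 1-d slice of the parameter space.

    `predict` is a stand-in for model predictions: it takes a dict of
    parameter values and returns (mean, sem). All parameters other than
    `param_name` are held at `fixed_values`.
    """
    grid = np.linspace(lower, upper, density)
    means, sems = [], []
    for value in grid:
        mean, sem = predict({**fixed_values, param_name: value})
        means.append(mean)
        sems.append(sem)
    return grid, np.asarray(means), np.asarray(sems)

# Toy model: quadratic in x1, ignores x2; constant noise estimate.
grid, mu, sem = slice_predictions(
    predict=lambda p: (-(p["x1"] - 0.5) ** 2, 0.1),
    fixed_values={"x2": 1.0},
    param_name="x1",
    lower=0.0, upper=1.0, density=11,
)
lower_band, upper_band = mu - 2 * sem, mu + 2 * sem
```

The grid, mean, and band arrays are exactly the series a slice plot draws; in Ax the fixed values would come from get_fixed_values and the grid from get_grid_for_parameter (documented below under Plotting Utilities).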
-
ax.plot.slice.
plot_slice_plotly
(model: ax.modelbridge.base.ModelBridge, param_name: str, metric_name: str, generator_runs_dict: Optional[Dict[str, ax.core.generator_run.GeneratorRun]] = None, relative: bool = False, density: int = 50, slice_values: Optional[Dict[str, Any]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, trial_index: Optional[int] = None) → plotly.graph_objs._figure.Figure[source]¶ Plot predictions for a 1-d slice of the parameter space.
- Parameters
model – ModelBridge that contains model for predictions
param_name – Name of parameter that will be sliced
metric_name – Name of metric to plot
generator_runs_dict – A dictionary {name: generator run} of generator runs whose arms will be plotted, if they lie in the slice.
relative – Predictions relative to status quo
density – Number of points along slice to evaluate predictions.
slice_values – A dictionary {name: val} for the fixed values of the other parameters. If not provided, then the status quo values will be used if there is a status quo, otherwise the mean of numeric parameters or the mode of choice parameters. Ignored if fixed_features is specified.
fixed_features – An ObservationFeatures object containing the values of features (including non-parameter features like context) to be set in the slice.
- Returns
plot of objective vs. parameter value
- Return type
go.Figure
Table¶
-
ax.plot.table_view.
get_color
(x: float, ci: float, rel: bool, reverse: bool)[source]¶ Determine the color of the table cell.
-
ax.plot.table_view.
table_view_plot
(experiment: ax.core.experiment.Experiment, data: ax.core.data.Data, use_empirical_bayes: bool = True, only_data_frame: bool = False, arm_noun: str = 'arm')[source]¶ Table of means and confidence intervals.
Table is of the form:
arm    metric_1     metric_2
0_0    mean +- CI   …
0_1    …            …
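The "mean +- CI" cells above can be reproduced with plain string formatting. A sketch (the z-multiplier and two-decimal formatting are illustrative assumptions, not necessarily what Ax uses):

```python
def format_cell(mean, sem, z=1.96):
    """Render a 'mean +- CI' table cell from a mean and its SEM.

    z = 1.96 gives a ~95% normal confidence interval; this is an
    illustrative choice, not necessarily the Ax implementation.
    """
    return f"{mean:.2f} \u00b1 {z * sem:.2f}"

# Hypothetical per-arm (mean, sem) results for two metrics.
rows = {
    "0_0": {"metric_1": (1.00, 0.10), "metric_2": (0.50, 0.05)},
    "0_1": {"metric_1": (1.20, 0.08), "metric_2": (0.55, 0.05)},
}
table = {
    arm: {metric: format_cell(m, s) for metric, (m, s) in cells.items()}
    for arm, cells in rows.items()
}
```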
Trace Plots¶
-
ax.plot.trace.
get_running_trials_per_minute
(experiment: ax.core.experiment.Experiment, show_until_latest_end_plus_timedelta: datetime.timedelta = datetime.timedelta(seconds=300)) → ax.plot.base.AxPlotConfig[source]¶
-
ax.plot.trace.
mean_markers_scatter
(y: numpy.ndarray, marker_color: Tuple[int] = (190, 186, 218), legend_label: str = '', hover_labels: Optional[List[str]] = None) → plotly.graph_objs._scatter.Scatter[source]¶ Creates a graph object for trace of the mean of the given series across runs, with errorbars.
- Parameters
y – (r x t) array with results from r runs and t trials.
marker_color – tuple of 3 int values representing an RGB color. Defaults to light purple.
legend_label – label for this trace.
hover_labels – optional, text to show on hover; list where the i-th value corresponds to the i-th value in the value of the y argument.
- Returns
plotly graph object
- Return type
go.Scatter
-
ax.plot.trace.
mean_trace_scatter
(y: numpy.ndarray, trace_color: Tuple[int] = (128, 177, 211), legend_label: str = 'mean', hover_labels: Optional[List[str]] = None) → plotly.graph_objs._scatter.Scatter[source]¶ Creates a graph object for trace of the mean of the given series across runs.
- Parameters
y – (r x t) array with results from r runs and t trials.
trace_color – tuple of 3 int values representing an RGB color. Defaults to blue.
legend_label – label for this trace.
hover_labels – optional, text to show on hover; list where the i-th value corresponds to the i-th value in the value of the y argument.
- Returns
plotly graph object
- Return type
go.Scatter
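The y-values that mean_trace_scatter plots are just the column means of the (r x t) input; a self-contained numpy sketch of that computation:

```python
import numpy as np

# y has shape (r, t): r runs of the same method, t trials each.
y = np.array([
    [3.0, 2.0, 1.5],
    [2.5, 2.5, 1.0],
    [3.5, 1.5, 2.0],
])

mean_trace = y.mean(axis=0)  # mean objective at each trial, across runs
# Standard error of that mean, from the sample std across runs.
sem_trace = y.std(axis=0, ddof=1) / np.sqrt(y.shape[0])
```

In the real function, mean_trace becomes the y-series of the returned go.Scatter.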
-
ax.plot.trace.
model_transitions_scatter
(model_transitions: List[int], y_range: List[float], generator_change_color: Tuple[int] = (141, 211, 199)) → List[plotly.graph_objs._scatter.Scatter][source]¶ Creates a graph object for the line(s) representing generator changes.
- Parameters
model_transitions – iterations, before which generators changed
y_range – upper and lower values of the y-range of the plot
generator_change_color – tuple of 3 int values representing an RGB color. Defaults to teal.
- Returns
plotly graph objects for the lines representing generator changes
- Return type
go.Scatter
-
ax.plot.trace.
optimization_times
(fit_times: Dict[str, List[float]], gen_times: Dict[str, List[float]], title: str = '') → ax.plot.base.AxPlotConfig[source]¶ Plots wall times for each method as a bar chart.
- Parameters
fit_times – A map from method name to a list of the model fitting times.
gen_times – A map from method name to a list of the gen times.
title – Title for this plot.
Returns: AxPlotConfig with the plot
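The bar heights in this chart come from averaging each method's fit and gen times; a stdlib sketch of that aggregation (the method names and timings are made up for illustration):

```python
from statistics import fmean

# Hypothetical wall times, in seconds, per method.
fit_times = {"Sobol": [0.010, 0.020, 0.015], "GPEI": [1.2, 1.5, 1.4]}
gen_times = {"Sobol": [0.005, 0.004, 0.006], "GPEI": [0.8, 0.9, 0.7]}

# Per-method mean fit and gen time -- the two bar segments of the chart.
bars = {
    method: {"fit": fmean(fit_times[method]), "gen": fmean(gen_times[method])}
    for method in fit_times
}
totals = {method: seg["fit"] + seg["gen"] for method, seg in bars.items()}
```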
-
ax.plot.trace.
optimization_trace_all_methods
(y_dict: Dict[str, numpy.ndarray], optimum: Optional[float] = None, title: str = '', ylabel: str = '', hover_labels: Optional[List[str]] = None, trace_colors: List[Tuple[int]] = [(128, 177, 211), (251, 128, 114), (141, 211, 199), (188, 128, 189), (190, 186, 218), (253, 180, 98)], optimum_color: Tuple[int] = (253, 180, 98)) → ax.plot.base.AxPlotConfig[source]¶ Plots a comparison of optimization traces with 2-SEM bands for multiple methods on the same problem.
- Parameters
y_dict – a mapping of method names to (r x t) arrays, where r is the number of runs in the test, and t is the number of trials.
optimum – value of the optimal objective.
title – title for this plot.
ylabel – label for the Y-axis.
hover_labels – optional, text to show on hover; list where the i-th value corresponds to the i-th value in the value of the y argument.
trace_colors – tuples of 3 int values representing RGB colors to use for different methods shown in the combination plot. Defaults to Ax discrete color scale.
optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.
- Returns
plot of the comparison of optimization traces with 2-SEM bands
- Return type
AxPlotConfig
-
ax.plot.trace.
optimization_trace_single_method
(y: numpy.ndarray, optimum: Optional[float] = None, model_transitions: Optional[List[int]] = None, title: str = '', ylabel: str = '', hover_labels: Optional[List[str]] = None, trace_color: Tuple[int] = (128, 177, 211), optimum_color: Tuple[int] = (253, 180, 98), generator_change_color: Tuple[int] = (141, 211, 199), optimization_direction: Optional[str] = 'passthrough', plot_trial_points: bool = False, trial_points_color: Tuple[int] = (190, 186, 218)) → ax.plot.base.AxPlotConfig[source]¶ Plots an optimization trace with mean and 2 SEMs
- Parameters
y – (r x t) array; result to plot, with r runs and t trials
optimum – value of the optimal objective
model_transitions – iterations, before which generators changed
title – title for this plot.
ylabel – label for the Y-axis.
hover_labels – optional, text to show on hover; list where the i-th value corresponds to the i-th value in the value of the y argument.
trace_color – tuple of 3 int values representing an RGB color for plotting running optimum. Defaults to blue.
optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.
generator_change_color – tuple of 3 int values representing an RGB color. Defaults to teal.
optimization_direction – str, “minimize” will plot the running minimum, “maximize” will plot the running maximum, “passthrough” (default) will plot y as lines, and None does not plot a running optimum.
plot_trial_points – bool, whether to plot the objective for each trial, as supplied in y (default False for backward compatibility)
trial_points_color – tuple of 3 int values representing an RGB color for plotting trial points. Defaults to light purple.
- Returns
plot of the optimization trace with 2-SEM bands
- Return type
AxPlotConfig
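The optimization_direction argument boils down to a cumulative reduction along the trial axis; a numpy sketch of that transformation:

```python
import numpy as np

def running_optimum(y, optimization_direction):
    """Transform each run's trace per `optimization_direction`.

    y: (r x t) array of objective values. "minimize" -> running minimum,
    "maximize" -> running maximum, "passthrough" -> y unchanged.
    """
    if optimization_direction == "minimize":
        return np.minimum.accumulate(y, axis=1)
    if optimization_direction == "maximize":
        return np.maximum.accumulate(y, axis=1)
    if optimization_direction == "passthrough":
        return y
    raise ValueError(f"Unexpected direction: {optimization_direction}")

# One run of four trials; the running minimum never increases.
y = np.array([[3.0, 1.0, 2.0, 0.5]])
best_so_far = running_optimum(y, "minimize")
```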
-
ax.plot.trace.
optimization_trace_single_method_plotly
(y: numpy.ndarray, optimum: Optional[float] = None, model_transitions: Optional[List[int]] = None, title: str = '', ylabel: str = '', hover_labels: Optional[List[str]] = None, trace_color: Tuple[int] = (128, 177, 211), optimum_color: Tuple[int] = (253, 180, 98), generator_change_color: Tuple[int] = (141, 211, 199), optimization_direction: Optional[str] = 'passthrough', plot_trial_points: bool = False, trial_points_color: Tuple[int] = (190, 186, 218)) → plotly.graph_objs._figure.Figure[source]¶ Plots an optimization trace with mean and 2 SEMs
- Parameters
y – (r x t) array; result to plot, with r runs and t trials
optimum – value of the optimal objective
model_transitions – iterations, before which generators changed
title – title for this plot.
ylabel – label for the Y-axis.
hover_labels – optional, text to show on hover; list where the i-th value corresponds to the i-th value in the value of the y argument.
trace_color – tuple of 3 int values representing an RGB color for plotting running optimum. Defaults to blue.
optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.
generator_change_color – tuple of 3 int values representing an RGB color. Defaults to teal.
optimization_direction – str, “minimize” will plot the running minimum, “maximize” will plot the running maximum, “passthrough” (default) will plot y as lines, and None does not plot a running optimum.
plot_trial_points – bool, whether to plot the objective for each trial, as supplied in y (default False for backward compatibility)
trial_points_color – tuple of 3 int values representing an RGB color for plotting trial points. Defaults to light purple.
- Returns
plot of the optimization trace with 2-SEM bands
- Return type
go.Figure
-
ax.plot.trace.
optimum_objective_scatter
(optimum: float, num_iterations: int, optimum_color: Tuple[int] = (253, 180, 98)) → plotly.graph_objs._scatter.Scatter[source]¶ Creates a graph object for the line representing optimal objective.
- Parameters
optimum – value of the optimal objective
num_iterations – how many trials were in the optimization (used to determine the width of the plot)
optimum_color – tuple of 3 int values representing an RGB color. Defaults to orange.
- Returns
plotly graph object for the optimal objective line
- Return type
go.Scatter
-
ax.plot.trace.
sem_range_scatter
(y: numpy.ndarray, trace_color: Tuple[int] = (128, 177, 211), legend_label: str = '') → Tuple[plotly.graph_objs._scatter.Scatter, plotly.graph_objs._scatter.Scatter][source]¶ Creates a graph object for trace of mean +/- 2 SEMs for y, across runs.
- Parameters
y – (r x t) array with results from r runs and t trials.
trace_color – tuple of 3 int values representing an RGB color. Defaults to blue.
legend_label – Label for the legend group.
- Returns
plotly graph objects for lower and upper bounds
- Return type
Tuple[go.Scatter, go.Scatter]
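The two returned traces carry the lower and upper edges of the mean ± 2 SEM band; a numpy sketch of the arrays they would hold:

```python
import numpy as np

# (r x t) results: 2 runs, 3 trials.
y = np.array([
    [1.0, 2.0, 3.0],
    [3.0, 2.0, 1.0],
])
mean = y.mean(axis=0)
sem = y.std(axis=0, ddof=1) / np.sqrt(y.shape[0])
lower, upper = mean - 2 * sem, mean + 2 * sem  # the two band traces
```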
Plotting Utilities¶
-
class
ax.plot.color.
COLORS
(value)[source]¶ Bases:
enum.Enum
Enum of RGB color tuples used in Ax plots.
-
CORAL
= (251, 128, 114)¶
-
LIGHT_PURPLE
= (190, 186, 218)¶
-
ORANGE
= (253, 180, 98)¶
-
PINK
= (188, 128, 189)¶
-
STEELBLUE
= (128, 177, 211)¶
-
TEAL
= (141, 211, 199)¶
-
-
ax.plot.color.
plotly_color_scale
(list_of_rgb_tuples: List[Tuple[float]], reverse: bool = False, alpha: float = 1) → List[Tuple[float, str]][source]¶ Convert a list of RGB tuples to a list of tuples, where each tuple is a break point in [0, 1] paired with a stringified RGBA color.
-
ax.plot.color.
rgba
(rgb_tuple: Tuple[float], alpha: float = 1) → str[source]¶ Convert RGB tuple to an RGBA string.
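Both color helpers are small enough to sketch in full. The exact output strings and break spacing below are assumptions consistent with the docstrings, not a copy of the Ax implementation:

```python
from typing import List, Tuple

def rgba(rgb_tuple: Tuple[float, ...], alpha: float = 1) -> str:
    """Convert an RGB tuple to an 'rgba(r,g,b,a)' string.

    The exact string format is an assumption; the idea matches the docstring.
    """
    r, g, b = rgb_tuple
    return f"rgba({r},{g},{b},{alpha})"

def plotly_color_scale(
    list_of_rgb_tuples: List[Tuple[float, ...]],
    reverse: bool = False,
    alpha: float = 1,
) -> List[Tuple[float, str]]:
    """Pair each color with an evenly spaced break in [0, 1]."""
    if reverse:
        list_of_rgb_tuples = list_of_rgb_tuples[::-1]
    n = len(list_of_rgb_tuples)
    return [
        (i / (n - 1) if n > 1 else 0.0, rgba(color, alpha))
        for i, color in enumerate(list_of_rgb_tuples)
    ]

# Two COLORS values (STEELBLUE, ORANGE) become a two-stop colorscale.
scale = plotly_color_scale([(128, 177, 211), (253, 180, 98)])
```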
-
ax.plot.helper.
build_filter_trial
(keep_trial_indices: List[int]) → Callable[[ax.core.observation.Observation], bool][source]¶ Creates a callable that filters observations based on trial_index
-
ax.plot.helper.
extend_range
(lower: float, upper: float, percent: int = 10, log_scale: bool = False) → Tuple[float, float][source]¶ Given a range of minimum and maximum values taken by values on a given axis, extend it in both directions by a given percentage to have some margin within the plot around its meaningful part.
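A sketch of the margin computation extend_range describes; applying the margin in log10 space when log_scale is set is an assumption consistent with the docstring:

```python
import math

def extend_range(lower, upper, percent=10, log_scale=False):
    """Widen [lower, upper] by `percent` of its width on each side.

    On a log-scale axis the margin is applied in log10 space (an assumption
    about Ax's behavior, consistent with the docstring).
    """
    if log_scale:
        lower, upper = math.log10(lower), math.log10(upper)
    margin = (upper - lower) * percent / 100
    lower, upper = lower - margin, upper + margin
    if log_scale:
        lower, upper = 10 ** lower, 10 ** upper
    return lower, upper

lo, hi = extend_range(0.0, 10.0)  # 10% margin on each side
```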
-
ax.plot.helper.
get_fixed_values
(model: ax.modelbridge.base.ModelBridge, slice_values: Optional[Dict[str, Any]] = None, trial_index: Optional[int] = None) → Dict[str, Optional[Union[str, bool, float, int]]][source]¶ Get fixed values for parameters in a slice plot.
If there is an in-design status quo, those values will be used. Otherwise, the mean of RangeParameters or the mode of ChoiceParameters is used.
Any value in slice_values will override the above.
- Parameters
model – ModelBridge being used for plotting
slice_values – Map from parameter name to the value at which it should be fixed.
Returns: Map from parameter name to fixed value.
-
ax.plot.helper.
get_grid_for_parameter
(parameter: ax.core.parameter.RangeParameter, density: int) → numpy.ndarray[source]¶ Get a grid of points along the range of the parameter.
Will be a log-scale grid if parameter is log scale.
- Parameters
parameter – Parameter for which to generate grid.
density – Number of points in the grid.
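A numpy sketch of the grid construction, taking the bounds directly rather than a RangeParameter:

```python
import numpy as np

def get_grid(lower, upper, density, log_scale=False):
    """Evenly spaced grid over [lower, upper]; geometric spacing if log-scale."""
    if log_scale:
        return np.logspace(np.log10(lower), np.log10(upper), density)
    return np.linspace(lower, upper, density)

grid = get_grid(1.0, 1000.0, density=4, log_scale=True)
```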
-
ax.plot.helper.
get_plot_data
(model: ax.modelbridge.base.ModelBridge, generator_runs_dict: Dict[str, ax.core.generator_run.GeneratorRun], metric_names: Optional[Set[str]] = None, fixed_features: Optional[ax.core.observation.ObservationFeatures] = None, data_selector: Optional[Callable[[ax.core.observation.Observation], bool]] = None) → Tuple[ax.plot.base.PlotData, List[Dict[str, Union[str, float]]], Dict[str, Dict[str, Optional[Union[str, bool, float, int]]]]][source]¶ Format data object with metrics for in-sample and out-of-sample arms.
Calculate both observed and predicted metrics for in-sample arms. Calculate predicted metrics for out-of-sample arms passed via the generator_runs_dict argument.
In PlotData, in-sample observations are merged with inverse-variance weighting (IVW). In RawData, they are left un-merged and given as a list of dictionaries, one for each observation, with keys ‘arm_name’, ‘mean’, and ‘sem’.
- Parameters
model – The model.
generator_runs_dict – a mapping from generator run name to generator run.
metric_names – Restrict predictions to this set. If None, all metrics in the model will be returned.
fixed_features – Fixed features to use when making model predictions.
data_selector – Function for selecting observations for plotting.
- Returns
A tuple containing
PlotData object with in-sample and out-of-sample predictions.
List of observations like:
{'metric_name': 'likes', 'arm_name': '0_1', 'mean': 1., 'sem': 0.1}.
Mapping from arm name to parameters.
-
ax.plot.helper.
get_range_parameter
(model: ax.modelbridge.base.ModelBridge, param_name: str) → ax.core.parameter.RangeParameter[source]¶ Get the range parameter with the given name from the model.
Throws if parameter doesn’t exist or is not a range parameter.
- Parameters
model – The model.
param_name – The name of the RangeParameter to be found.
Returns: The RangeParameter named param_name.
-
ax.plot.helper.
get_range_parameters
(model: ax.modelbridge.base.ModelBridge) → List[ax.core.parameter.RangeParameter][source]¶ Get a list of range parameters from a model.
- Parameters
model – The model.
Returns: List of RangeParameters.
-
ax.plot.helper.
infer_is_relative
(model: ax.modelbridge.base.ModelBridge, metrics: List[str], non_constraint_rel: bool) → Dict[str, bool][source]¶ Determine whether or not to relativize a metric.
Metrics that are constraints will get this decision from their relative flag. Other metrics will use non_constraint_rel.
- Parameters
model – model fit on metrics.
metrics – list of metric names.
non_constraint_rel – whether or not to relativize non-constraint metrics
- Returns
Dict[str, bool] containing whether or not to relativize each input metric.
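The decision rule reduces to a per-metric lookup with a fallback. A sketch in which constraint_rel stands in for the relative flags of the model's outcome constraints (a hypothetical input, since the real function reads them off the ModelBridge):

```python
def infer_is_relative(constraint_rel, metrics, non_constraint_rel):
    """Per-metric relativization decision.

    `constraint_rel` maps constraint metric names to their relative flag;
    non-constraint metrics fall back to `non_constraint_rel`.
    """
    return {m: constraint_rel.get(m, non_constraint_rel) for m in metrics}

decisions = infer_is_relative(
    constraint_rel={"latency": True},
    metrics=["latency", "clicks"],
    non_constraint_rel=False,
)
```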
-
ax.plot.helper.
relativize
(m_t: float, sem_t: float, m_c: float, sem_c: float) → List[float][source]¶
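relativize carries no docstring here. The conventional formulation of the operation its signature suggests is a percent change of a test mean against a control mean, with a delta-method propagation of the two SEMs; the sketch below is that convention, assumed (not verified) to match Ax's internals:

```python
import math

def relativize(m_t: float, sem_t: float, m_c: float, sem_c: float):
    """Relative change of a test mean vs. a control mean.

    Delta-method SEM propagation; this is a conventional formulation,
    not necessarily the exact Ax formula.
    """
    rel_mean = (m_t - m_c) / abs(m_c)
    rel_sem = math.sqrt(sem_t ** 2 + (m_t / m_c) ** 2 * sem_c ** 2) / abs(m_c)
    return [rel_mean, rel_sem]

# 11.0 vs. a noiseless control of 10.0 is a +10% relative change.
rel = relativize(m_t=11.0, sem_t=1.0, m_c=10.0, sem_c=0.0)
```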