Glossary
Arm
Mapping from parameters to values; provides the configuration to be tested in an Ax trial. Also known as a "treatment group". [Arm]
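For illustration, a minimal sketch using the Ax developer API; the parameter names and values below are hypothetical:

```python
from ax import Arm

# A hypothetical arm: one candidate configuration, mapping parameter names to values.
arm = Arm(name="baseline_config", parameters={"learning_rate": 0.01, "batch_size": 128})
```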
Bandit optimization
Machine learning-driven version of A/B testing that dynamically allocates traffic to arms that are performing well, in search of the best arm among a given set.
Bayesian optimization
Sequential optimization strategy for finding an optimal arm in a continuous search space.
Experiment
Object that keeps track of the whole optimization process. Contains a search space, optimization config, and other metadata. [Experiment]
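A minimal sketch of constructing an experiment; the experiment name and the single range parameter are hypothetical, and only a search space is attached here (an optimization config, runner, etc. can be supplied as well):

```python
from ax import Experiment, ParameterType, RangeParameter, SearchSpace

# Hypothetical one-parameter search space.
search_space = SearchSpace(parameters=[
    RangeParameter(name="x", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0),
])
experiment = Experiment(name="demo_experiment", search_space=search_space)
```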
Generator run
Outcome of a single run of the gen method of a model bridge; contains the generated arms, and possibly best-arm predictions, other model predictions, fit times, etc. [GeneratorRun]
Metric
Interface for fetching data for a specific measurement on an experiment or trial. [Metric]
Model
Algorithm that can be used to generate new points in a search space. [Model]
Model bridge
Adapter for interactions with a model within the Ax ecosystem. [ModelBridge]
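To illustrate how a model, a model bridge, and a generator run relate, here is a minimal sketch assuming an experiment that already has data attached; Models.GPEI is one of the model setups in Ax's modelbridge registry, and the experiment variable is assumed from earlier:

```python
from ax.modelbridge.registry import Models

# Fit a model bridge wrapping a Gaussian-process-based model to the experiment's data.
model_bridge = Models.GPEI(experiment=experiment, data=experiment.fetch_data())

# gen() produces a GeneratorRun holding the newly generated arm(s).
generator_run = model_bridge.gen(n=1)
new_arms = generator_run.arms
```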
Objective
The metric to be optimized, with an optimization direction (maximize/minimize). [Objective]
Optimization config
Contains the information necessary to run an optimization, i.e., the objective and outcome constraints. [OptimizationConfig]
Outcome constraint
Constraint on metric values; can be an order constraint or a sum constraint. Arms that violate an outcome constraint are considered infeasible. [OutcomeConstraint]
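Tying together the objective, outcome constraint, and optimization config entries above, a minimal sketch; the metric names and bound are hypothetical, and exact constructor arguments may vary across Ax versions:

```python
from ax import ComparisonOp, Metric, Objective, OptimizationConfig, OutcomeConstraint

optimization_config = OptimizationConfig(
    # Maximize a hypothetical conversion_rate metric...
    objective=Objective(metric=Metric(name="conversion_rate"), minimize=False),
    # ...subject to a hypothetical latency metric staying below a bound.
    outcome_constraints=[
        OutcomeConstraint(
            metric=Metric(name="latency"),
            op=ComparisonOp.LEQ,
            bound=100.0,
            relative=False,  # relative=True would make this a relative outcome constraint
        )
    ],
)
```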
Parameter
Configurable quantity that can be assigned one of multiple possible values; can be continuous (RangeParameter), discrete (ChoiceParameter), or fixed (FixedParameter). [Parameter]
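A minimal sketch of the three parameter types; the names, bounds, and choices are hypothetical:

```python
from ax import ChoiceParameter, FixedParameter, ParameterType, RangeParameter

# Continuous parameter over a bounded range.
learning_rate = RangeParameter(
    name="learning_rate", parameter_type=ParameterType.FLOAT, lower=1e-4, upper=1e-1
)
# Discrete parameter over an explicit set of choices.
optimizer = ChoiceParameter(
    name="optimizer", parameter_type=ParameterType.STRING, values=["adam", "sgd"]
)
# Parameter pinned to a single value.
use_bias = FixedParameter(name="use_bias", parameter_type=ParameterType.BOOL, value=True)
```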
Parameter constraint
Places restrictions on the relationships between parameters. For example, buffer_size_1 < buffer_size_2 or buffer_size_1 + buffer_size_2 < 1024. [ParameterConstraint]
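A minimal sketch of the buffer-size example above, expressed as an order constraint and a sum constraint; the parameter definitions are hypothetical, and note that the sum constraint bounds the total from above:

```python
from ax import OrderConstraint, ParameterType, RangeParameter, SumConstraint

buffer_size_1 = RangeParameter(
    name="buffer_size_1", parameter_type=ParameterType.INT, lower=1, upper=1024
)
buffer_size_2 = RangeParameter(
    name="buffer_size_2", parameter_type=ParameterType.INT, lower=1, upper=1024
)

# buffer_size_1 < buffer_size_2
order = OrderConstraint(lower_parameter=buffer_size_1, upper_parameter=buffer_size_2)
# buffer_size_1 + buffer_size_2 bounded above by 1024
total = SumConstraint(
    parameters=[buffer_size_1, buffer_size_2], is_upper_bound=True, bound=1024.0
)
```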
Relative outcome constraint
Outcome constraint evaluated relative to the status quo instead of directly on the metric value. [OutcomeConstraint]
Runner
Dispatch abstraction that defines how a given trial is to be run (either locally or by dispatching to an external system). [Runner]
Search space
Continuous, discrete or mixed design space that defines the set of potential arms that can be evaluated during the optimization. [SearchSpace]
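Continuing the sketch from the parameter constraint entry above, a search space simply bundles parameters with any parameter constraints:

```python
from ax import SearchSpace

# Reuses the hypothetical parameters and constraints sketched above.
constrained_search_space = SearchSpace(
    parameters=[buffer_size_1, buffer_size_2],
    parameter_constraints=[order, total],
)
```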
SEM
Standard error of the metric's mean, 0.0 for noiseless measurements.
Simple experiment
Subclass of Experiment that assumes synchronous evaluation: it uses an evaluation function to get data for trials right after they are suggested. Abstracts away certain details and allows for faster instantiation. [SimpleExperiment]
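A minimal sketch of a simple experiment with a synchronous evaluation function; the parameter, objective name, and evaluation logic are hypothetical, the evaluation function returns a mapping from metric name to a (mean, SEM) tuple, and SimpleExperiment's exact constructor arguments may differ across Ax versions:

```python
from ax import ParameterType, RangeParameter, SearchSpace, SimpleExperiment

def evaluate(parameterization):
    # Hypothetical objective; an SEM of 0.0 marks the measurement as noiseless.
    x = parameterization["x"]
    return {"objective": (-(x - 0.3) ** 2, 0.0)}

simple_experiment = SimpleExperiment(
    name="simple_demo",
    search_space=SearchSpace(parameters=[
        RangeParameter(name="x", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0),
    ]),
    objective_name="objective",
    evaluation_function=evaluate,
)
```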
Status quo
An arm, usually the currently deployed configuration, which provides a baseline for comparing all other arms. Also known as a control arm. [StatusQuo]
Trial
Single step in the experiment; contains a single arm. A trial that contains multiple arms deployed simultaneously is referred to as a batch trial. [Trial], [BatchTrial]
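A minimal sketch of creating both kinds of trial on the experiment sketched under the Experiment entry above; arm names and parameter values are hypothetical:

```python
from ax import Arm

# Single-arm trial.
trial = experiment.new_trial()
trial.add_arm(Arm(name="candidate", parameters={"x": 0.5}))

# Batch trial: several arms deployed and evaluated together.
batch_trial = experiment.new_batch_trial()
batch_trial.add_arms_and_weights(arms=[
    Arm(name="candidate_a", parameters={"x": 0.25}),
    Arm(name="candidate_b", parameters={"x": 0.75}),
])
```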