Glossary
Arm
Mapping from parameter names to parameter values (i.e. a parameterization or parameter configuration). An arm provides the configuration to be tested in an Ax trial. Also known as a "treatment group" or "parameterization"; the name "arm" comes from the multi-armed bandit optimization problem, in which a player facing a row of "one-armed bandit" slot machines has to choose which machines to play, when, and in what order. [Arm]
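
A minimal sketch of constructing an arm (the parameter names and values here are hypothetical):

    from ax import Arm

    # An arm is a named mapping from parameter names to parameter values.
    arm = Arm(name="baseline", parameters={"lr": 0.01, "batch_size": 32})
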
Bandit optimization
Machine learning-driven version of A/B testing that dynamically allocates traffic to arms that are performing well, in order to determine the best arm among a given set.
Bayesian optimization
Sequential optimization strategy for finding an optimal arm in a continuous search space.
Evaluation function
Function that takes a parameterization and an optional weight as input and outputs a set of metric evaluations. Used in simple experiments and in the Loop API.
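
A minimal sketch of an evaluation function driving the Loop API (the Booth function and parameter names are illustrative; each metric maps to a (mean, SEM) tuple):

    from ax.service.managed_loop import optimize

    # Maps a parameterization to {metric_name: (mean, SEM)}.
    # An SEM of 0.0 declares the measurement noiseless.
    def booth(parameterization, weight=None):
        x1, x2 = parameterization["x1"], parameterization["x2"]
        return {"booth": ((x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2, 0.0)}

    best_parameters, values, experiment, model = optimize(
        parameters=[
            {"name": "x1", "type": "range", "bounds": [-10.0, 10.0]},
            {"name": "x2", "type": "range", "bounds": [-10.0, 10.0]},
        ],
        evaluation_function=booth,
        objective_name="booth",
        minimize=True,
    )
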
Experiment
Object that keeps track of the whole optimization process. Contains a search space, optimization config, and other metadata. [Experiment]
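
A sketch of instantiating a bare experiment (the name and single-parameter search space are hypothetical):

    from ax import Experiment, ParameterType, RangeParameter, SearchSpace

    experiment = Experiment(
        name="my_experiment",
        search_space=SearchSpace(
            parameters=[
                RangeParameter(
                    name="x", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0
                )
            ]
        ),
    )
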
Generator run
Outcome of a single run of the gen method of a model bridge; contains the generated arms, as well as (possibly) best-arm predictions, other model predictions, fit times, etc. [GeneratorRun]
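
A sketch of producing a generator run from a model bridge (assumes the Models registry and an experiment like the one sketched above):

    from ax.modelbridge.registry import Models

    # A Sobol model bridge generates quasi-random candidate arms;
    # `experiment` is assumed to come from the Experiment sketch above.
    sobol = Models.SOBOL(search_space=experiment.search_space)
    generator_run = sobol.gen(n=5)  # a GeneratorRun holding 5 candidate arms
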
Metric
Interface for fetching data for a specific measurement on an experiment or trial. [Metric]
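
A minimal sketch of a custom metric (the class and constant values are hypothetical; a metric returns a Data object with a mean and SEM per arm):

    import pandas as pd

    from ax import Data, Metric

    class EchoMetric(Metric):
        # Hypothetical metric that "measures" a constant for every arm.
        def fetch_trial_data(self, trial, **kwargs):
            records = [
                {
                    "arm_name": name,
                    "metric_name": self.name,
                    "mean": 0.0,  # replace with a real measurement
                    "sem": 0.0,   # 0.0 declares the measurement noiseless
                    "trial_index": trial.index,
                }
                for name in trial.arms_by_name
            ]
            return Data(df=pd.DataFrame.from_records(records))
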
Model
Algorithm that can be used to generate new points in a search space. [Model]
Model bridge
Adapter for interactions with a model within the Ax ecosystem. [ModelBridge]
Objective
The metric to be optimized, with an optimization direction (maximize/minimize). [Objective]
Optimization config
Contains the information necessary to run an optimization, i.e. the objective and outcome constraints. [OptimizationConfig]
Outcome constraint
Bound (upper or lower) on the value of a metric; arms that violate an outcome constraint are considered infeasible. [OutcomeConstraint]
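
A sketch tying together the objective, optimization config, and outcome constraint entries above (the metric names and the bound of 50.0 are hypothetical):

    from ax import (
        ComparisonOp,
        Metric,
        Objective,
        OptimizationConfig,
        OutcomeConstraint,
    )

    optimization_config = OptimizationConfig(
        objective=Objective(metric=Metric(name="accuracy"), minimize=False),
        outcome_constraints=[
            # Require latency <= 50.0 in absolute terms; with relative=True
            # the bound would instead be a % change versus the status quo.
            OutcomeConstraint(
                metric=Metric(name="latency"),
                op=ComparisonOp.LEQ,
                bound=50.0,
                relative=False,
            )
        ],
    )
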
Parameter
Configurable quantity that can be assigned one of multiple possible values; can be continuous (RangeParameter), discrete (ChoiceParameter), or fixed (FixedParameter). [Parameter]
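
Sketches of the three parameter types (all names and values hypothetical):

    from ax import ChoiceParameter, FixedParameter, ParameterType, RangeParameter

    learning_rate = RangeParameter(
        name="learning_rate", parameter_type=ParameterType.FLOAT, lower=1e-4, upper=1e-1
    )
    optimizer = ChoiceParameter(
        name="optimizer", parameter_type=ParameterType.STRING, values=["adam", "sgd"]
    )
    use_bias = FixedParameter(
        name="use_bias", parameter_type=ParameterType.BOOL, value=True
    )
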
Parameter constraint
Places restrictions on the relationships between parameters. For example, buffer_size_1 < buffer_size_2 or buffer_size_1 + buffer_size_2 < 1024. [ParameterConstraint]
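
A sketch of the two example constraints written as linear parameter constraints (assuming the constraint_dict form, in which a weighted sum of parameter values is bounded above):

    from ax import ParameterConstraint

    # buffer_size_1 <= buffer_size_2, i.e. 1*buffer_size_1 - 1*buffer_size_2 <= 0
    order = ParameterConstraint(
        constraint_dict={"buffer_size_1": 1.0, "buffer_size_2": -1.0}, bound=0.0
    )

    # buffer_size_1 + buffer_size_2 <= 1024
    total = ParameterConstraint(
        constraint_dict={"buffer_size_1": 1.0, "buffer_size_2": 1.0}, bound=1024.0
    )
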
Relative outcome constraint
Outcome constraint evaluated relative to the status quo instead of directly on the metric value. [OutcomeConstraint]
Runner
Dispatch abstraction that defines how a given trial is to be run (either locally or by dispatching to an external system). [Runner]
Search space
Continuous, discrete, or mixed design space that defines the set of parameters to be tuned in the optimization, along with optional constraints on those parameters. The parameters of the arms evaluated in the optimization are drawn from the search space. [SearchSpace]
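
A sketch combining parameters and a parameter constraint into a search space (reusing the hypothetical buffer-size names from above):

    from ax import ParameterConstraint, ParameterType, RangeParameter, SearchSpace

    search_space = SearchSpace(
        parameters=[
            RangeParameter(
                name=name, parameter_type=ParameterType.INT, lower=1, upper=1024
            )
            for name in ("buffer_size_1", "buffer_size_2")
        ],
        parameter_constraints=[
            # buffer_size_1 + buffer_size_2 <= 1024
            ParameterConstraint(
                constraint_dict={"buffer_size_1": 1.0, "buffer_size_2": 1.0},
                bound=1024.0,
            )
        ],
    )
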
SEM
Standard error of the metric's mean; equal to 0.0 for noiseless measurements.
Simple experiment
Subclass of experiment that assumes synchronous evaluation (uses an evaluation function to get data for trials right after they are suggested). Abstracts away certain details and allows for faster instantiation. [SimpleExperiment]
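
A sketch of a simple experiment built around the evaluation function from the Loop API sketch above (assuming SimpleExperiment accepts the search space, objective name, and evaluation function directly):

    from ax import ParameterType, RangeParameter, SearchSpace, SimpleExperiment

    # `booth` as defined in the evaluation function sketch above.
    simple_experiment = SimpleExperiment(
        name="booth_experiment",
        search_space=SearchSpace(
            parameters=[
                RangeParameter(
                    name=name, parameter_type=ParameterType.FLOAT, lower=-10.0, upper=10.0
                )
                for name in ("x1", "x2")
            ]
        ),
        evaluation_function=booth,
        objective_name="booth",
        minimize=True,
    )
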
Status quo
An arm, usually the currently deployed configuration, which provides a baseline for comparing all other arms. Also known as a control arm. [StatusQuo]
Trial
Single step in the experiment, containing a single arm. In cases where the trial contains multiple arms that are deployed simultaneously, we refer to it as a batch trial. [Trial], [BatchTrial]
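
A sketch of creating both kinds of trial (assumes the experiment and generator run from the sketches above):

    # Single-arm trial created from a generator run.
    trial = experiment.new_trial(generator_run=generator_run)

    # Batch trial deploying several arms simultaneously.
    batch = experiment.new_batch_trial()
    batch.add_generator_run(generator_run)
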