ax.metrics

BoTorch Test Problem

class ax.metrics.botorch_test_problem.BotorchTestProblemMetric(name: str, noise_sd: Optional[float] = None, index: int = 0)[source]

Bases: Metric

A Metric for retrieving information from a BotorchTestProblemRunner. A BotorchTestProblemRunner will attach the result of a call to BaseTestProblem.forward per Arm on a given trial, and this Metric will extract the proper value from the resulting tensor given its index.
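A minimal construction sketch (the metric name here is illustrative); typically one such metric is created per output index of the tensor attached by the runner:

    from ax.metrics.botorch_test_problem import BotorchTestProblemMetric

    # Reads the value at position `index` of the tensor the runner attached per Arm.
    metric = BotorchTestProblemMetric(name="objective_0", noise_sd=None, index=0)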

fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

Branin

class ax.metrics.branin.AugmentedBraninMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMetric

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.branin.BraninMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMetric

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.
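A construction sketch for BraninMetric, assuming the experiment's search space has two range parameters named "x1" and "x2" (the names and noise level are illustrative):

    from ax.metrics.branin import BraninMetric

    # param_names selects which parameters are passed to f, in order.
    metric = BraninMetric(
        name="branin", param_names=["x1", "x2"], noise_sd=0.01, lower_is_better=True
    )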

class ax.metrics.branin.NegativeBraninMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: BraninMetric

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.

Branin Map

class ax.metrics.branin_map.BraninFidelityMapMetric(name: str, param_names: Iterable[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMapMetric

f(x: ndarray) Mapping[str, Any][source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: BaseTrial, noisy: bool = True, **kwargs: Any) Result[MapData, MetricFetchE][source]

Fetch data for one trial.

map_key_info: MapKeyInfo[float] = <ax.core.map_data.MapKeyInfo object>
class ax.metrics.branin_map.BraninTimestampMapMetric(name: str, param_names: Iterable[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None, rate: Optional[float] = None, cache_evaluations: bool = True)[source]

Bases: NoisyFunctionMapMetric

f(x: ndarray, timestamp: int) Mapping[str, Any][source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: BaseTrial, noisy: bool = True, **kwargs: Any) Result[MapData, MetricFetchE][source]

Fetch data for one trial.

ax.metrics.branin_map.random() → x in the interval [0, 1).

Chemistry

Classes for optimizing yields from chemical reactions.

References

[Perera2018]

D. Perera, J. W. Tucker, S. Brahmbhatt, C. Helal, A. Chong, W. Farrell, P. Richardson, N. W. Sach. A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow. Science, 26. 2018.

[Shields2021]

B. J. Shields, J. Stevens, J. Li, et al. Bayesian reaction optimization as a tool for chemical synthesis. Nature 590, 89–96 (2021).

“SUZUKI” involves optimizing solvent, ligand, and base combinations in a Suzuki-Miyaura coupling to maximize carbon-carbon bond formation. See [Perera2018] for details.

“DIRECT_ARYLATION” involves optimizing the solvent, base, and ligand chemicals as well as the temperature and concentration for a direct arylation reaction. See [Shields2021] for details.

class ax.metrics.chemistry.ChemistryData(param_names: 'List[str]', objective_dict: 'Dict[Tuple[TParamValue, ...], float]')[source]

Bases: object

evaluate(params: Dict[str, Union[None, str, bool, float, int]]) float[source]
objective_dict: Dict[Tuple[Union[None, str, bool, float, int], ...], float]
param_names: List[str]
class ax.metrics.chemistry.ChemistryMetric(name: str, noiseless: bool = False, problem_type: ChemistryProblemType = ChemistryProblemType.SUZUKI, lower_is_better: bool = False)[source]

Bases: Metric

Metric for modeling chemical reactions.

Metric describing the outcomes of chemical reactions, based on tabulated data. Problems typically contain many discrete and categorical parameters.

Parameters:
  • name – The name of the metric.

  • noiseless – If True, consider observations noiseless; otherwise assume unknown Gaussian observation noise.

  • problem_type – The problem type.

noiseless

If True, consider observations noiseless, otherwise assume unknown Gaussian observation noise.

lower_is_better

If True, the metric should be minimized.

clone() ChemistryMetric[source]

Create a copy of this Metric.

fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

class ax.metrics.chemistry.ChemistryProblemType(value)[source]

Bases: Enum

An enumeration.

DIRECT_ARYLATION: str = 'direct_arylation'
SUZUKI: str = 'suzuki'
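A construction sketch selecting the direct arylation benchmark with noiseless observations (the metric name is illustrative):

    from ax.metrics.chemistry import ChemistryMetric, ChemistryProblemType

    metric = ChemistryMetric(
        name="yield",
        noiseless=True,
        problem_type=ChemistryProblemType.DIRECT_ARYLATION,
    )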

Curve

Metrics that allow retrieving curves of partial results. Typically used to retrieve partial learning curves of ML training jobs.

class ax.metrics.curve.AbstractCurveMetric(name: str, curve_name: str, lower_is_better: bool = True, cumulative_best: bool = False, smoothing_window: Optional[int] = None)[source]

Bases: MapMetric, ABC

Metric representing (partial) learning curves of ML model training jobs.

bulk_fetch_experiment_data(experiment: Experiment, metrics: Iterable[Metric], trials: Optional[Iterable[BaseTrial]] = None, **kwargs: Any) Dict[int, Dict[str, Result[Data, MetricFetchE]]][source]

Fetch multiple metrics data for an experiment.

bulk_fetch_trial_data(trial: BaseTrial, metrics: Iterable[Metric], **kwargs: Any) Dict[str, Result[Data, MetricFetchE]][source]

Fetch multiple metrics data for one trial.

property curve_names: Set[str]
fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

abstract get_curves_from_ids(ids: Iterable[Union[int, str]], names: Optional[Set[str]] = None) Dict[Union[int, str], Dict[str, pandas.Series]][source]

Get partial result curves from backend ids.

Parameters:
  • ids – The ids of the backend runs for which to fetch the partial result curves.

  • names – The names of the curves to fetch (for each of the runs). If omitted, fetch data for all available curves (this may be slow).

Returns:

A dictionary mapping the backend id to the partial result curves, each of which is represented as a mapping from the metric name to a pandas Series indexed by the progression (which will be mapped to the map_key_info.key of the metric class). E.g. if curve_name=loss and map_key_info.key = training_rows, then a Series should look like:

    training_rows (index) | loss
    ----------------------|-----
    100                   | 0.5
    200                   | 0.2

get_df_from_curve_series(experiment: Experiment, all_curve_series: Dict[Union[int, str], Dict[str, pandas.Series]], metrics: Iterable[Metric], trial_idx_to_id: Dict[int, Union[int, str]]) Optional[pandas.DataFrame][source]
abstract get_ids_from_trials(trials: Iterable[BaseTrial]) Dict[int, Union[int, str]][source]

Get backend run ids associated with trials.

Parameters:

trials – The trials for which to retrieve the associated ids that can be used to identify the corresponding runs on the backend.

Returns:

A dictionary mapping the trial indices to the identifiers (ints or strings) corresponding to the backend runs associated with the trials. Trials whose corresponding ids could not be found should be omitted.

classmethod is_available_while_running() bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

map_key_info: MapKeyInfo[float] = <ax.core.map_data.MapKeyInfo object>
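A minimal sketch of a concrete subclass implementing the two abstract methods, assuming a hypothetical read_run_curves helper and a run id stored under trial.run_metadata["run_id"] (both are assumptions, not part of Ax):

    from ax.metrics.curve import AbstractCurveMetric

    class MyBackendCurveMetric(AbstractCurveMetric):
        def get_ids_from_trials(self, trials):
            # Map trial index -> backend run id; trials without an id are omitted.
            return {
                t.index: t.run_metadata["run_id"]
                for t in trials
                if "run_id" in t.run_metadata
            }

        def get_curves_from_ids(self, ids, names=None):
            # read_run_curves (hypothetical) returns {curve_name: pandas.Series
            # indexed by the progression, e.g. training_rows}.
            out = {}
            for run_id in ids:
                curves = read_run_curves(run_id)
                if names is not None:
                    curves = {k: v for k, v in curves.items() if k in names}
                out[run_id] = curves
            return out

    metric = MyBackendCurveMetric(name="val_loss", curve_name="loss/validation")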
class ax.metrics.curve.AbstractScalarizedCurveMetric(name: str, coefficients: Dict[str, float], offset: float = 0.0, lower_is_better: bool = True, cumulative_best: bool = False, smoothing_window: Optional[int] = None)[source]

Bases: AbstractCurveMetric

A linear scalarization of (partial) learning curves of ML model training jobs:

scalarized_curve = offset + sum_i(coefficients[i] * curve[i]).

It is assumed that the output of get_curves_from_ids contains all of the curves necessary for performing the scalarization.
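A small worked example of the scalarization above (curve names are illustrative): with coefficients = {"train_loss": 0.7, "val_loss": 0.3} and offset = 0.0, a progression step at which train_loss = 0.50 and val_loss = 0.40 scalarizes to 0.0 + 0.7 * 0.50 + 0.3 * 0.40 = 0.47.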

property curve_names: Set[str]
get_df_from_curve_series(experiment: Experiment, all_curve_series: Dict[Union[int, str], Dict[str, pandas.Series]], metrics: Iterable[Metric], trial_idx_to_id: Dict[int, Union[int, str]]) Optional[pandas.DataFrame][source]
ax.metrics.curve.get_df_from_curve_series(experiment: Experiment, all_curve_series: Dict[Union[int, str], Dict[str, pandas.Series]], metrics: Iterable[Metric], trial_idx_to_id: Dict[int, Union[int, str]], map_key: str) Optional[pandas.DataFrame][source]

Convert an all_curve_series dict (from get_curves_from_ids) into a dataframe. For each metric, we get one curve (of name curve_name).

Parameters:
  • experiment – The experiment.

  • all_curve_series – A dict containing curve data, as output from get_curves_from_ids.

  • metrics – The metrics from which data is being fetched.

  • trial_idx_to_id – A dict mapping trial index to ids.

  • map_key – The progression key of the metric’s MapKeyInfo.

Returns:

A dataframe containing curve data or None if no curve data could be found.

ax.metrics.curve.get_df_from_scalarized_curve_series(experiment: Experiment, all_curve_series: Dict[Union[int, str], Dict[str, pandas.Series]], metrics: Iterable[Metric], trial_idx_to_id: Dict[int, Union[int, str]], map_key: str) Optional[pandas.DataFrame][source]

Convert an all_curve_series dict (from get_curves_from_ids) into a dataframe. For each metric, we first get all curves represented in coefficients and then perform scalarization.

Parameters:
  • experiment – The experiment.

  • all_curve_series – A dict containing curve data, as output from get_curves_from_ids.

  • metrics – The metrics from which data is being fetched.

  • trial_idx_to_id – A dict mapping trial index to ids.

  • map_key – The progression key of the metric’s MapKeyInfo.

Returns:

A dataframe containing curve data or None if no curve data could be found.

Dictionary Lookup

class ax.metrics.dict_lookup.DictLookupMetric(name: str, param_names: List[str], lookup_dict: Dict[Tuple[Union[str, float, int, bool], ...], float], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: Metric

A metric defined by a dictionary mapping parameter values to the corresponding metric values.

This provides an option to add normal noise with mean 0 and standard deviation noise_sd to the given metric values.
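A construction sketch; keys of lookup_dict are assumed to be tuples of parameter values in the order given by param_names (all names and values are illustrative):

    from ax.metrics.dict_lookup import DictLookupMetric

    lookup = {
        ("relu", 32): 0.81,
        ("relu", 64): 0.84,
        ("tanh", 32): 0.76,
    }
    metric = DictLookupMetric(
        name="accuracy",
        param_names=["activation", "width"],
        lookup_dict=lookup,
        noise_sd=0.0,
        lower_is_better=False,
    )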

clone() DictLookupMetric[source]

Create a copy of this Metric.

fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

classmethod is_available_while_running() bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

Factorial

class ax.metrics.factorial.FactorialMetric(name: str, coefficients: Dict[str, Dict[Union[None, str, bool, float, int], float]], batch_size: int = 10000, noise_var: float = 0.0)[source]

Bases: Metric

Metric for testing factorial designs assuming a main effects only logit model.

fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

classmethod is_available_while_running() bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

ax.metrics.factorial.evaluation_function(parameterization: Dict[str, Union[None, str, bool, float, int]], coefficients: Dict[str, Dict[Union[None, str, bool, float, int], float]], weight: float = 1.0, batch_size: int = 10000, noise_var: float = 0.0) Tuple[float, float][source]
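A construction sketch; per the main-effects-only logit model described above, coefficients maps each factor to per-level effects (factor and level names are illustrative):

    from ax.metrics.factorial import FactorialMetric

    coefficients = {
        "color": {"red": 0.1, "blue": -0.2},
        "size": {"small": 0.0, "large": 0.3},
    }
    metric = FactorialMetric(
        name="success_rate",
        coefficients=coefficients,
        batch_size=10000,
        noise_var=0.0,
    )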

Hartmann6

class ax.metrics.hartmann6.AugmentedHartmann6Metric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMetric

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.hartmann6.Hartmann6Metric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMetric

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.

Jenatton

L2 Norm

class ax.metrics.l2norm.L2NormMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMetric

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.

Noisy Functions

class ax.metrics.noisy_function.GenericNoisyFunctionMetric(name: str, f: Callable[[Dict[str, Union[None, str, bool, float, int]]], float], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: NoisyFunctionMetric

clone() GenericNoisyFunctionMetric[source]

Create a copy of this Metric.

property param_names: List[str]
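A usage sketch: f receives the arm's parameterization as a dict and returns a float (the parameter names are illustrative):

    from ax.metrics.noisy_function import GenericNoisyFunctionMetric

    def objective(parameterization):
        return parameterization["x1"] ** 2 + parameterization["x2"] ** 2

    metric = GenericNoisyFunctionMetric(
        name="obj", f=objective, noise_sd=0.1, lower_is_better=True
    )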
class ax.metrics.noisy_function.NoisyFunctionMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: Metric

A metric defined by a generic deterministic function, with normal noise with mean 0 and standard deviation noise_sd added to the result.

clone() NoisyFunctionMetric[source]

Create a copy of this Metric.

f(x: ndarray) float[source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: BaseTrial, noisy: bool = True, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

classmethod is_available_while_running() bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.
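A subclassing sketch: override f, which receives the values of param_names as a numpy array in the same order (names and noise level are illustrative):

    import numpy as np
    from ax.metrics.noisy_function import NoisyFunctionMetric

    class SphereMetric(NoisyFunctionMetric):
        def f(self, x: np.ndarray) -> float:
            # Deterministic objective; noise_sd-scaled noise is added by the base class.
            return float((x ** 2).sum())

    metric = SphereMetric(
        name="sphere", param_names=["x1", "x2"], noise_sd=0.1, lower_is_better=True
    )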

Noisy Function Map

class ax.metrics.noisy_function_map.NoisyFunctionMapMetric(name: str, param_names: Iterable[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None, cache_evaluations: bool = True)[source]

Bases: MapMetric

A metric defined by a generic deterministic function, with normal noise with mean 0 and standard deviation noise_sd added to the result.

clone() NoisyFunctionMapMetric[source]

Create a copy of this Metric.

f(x: ndarray) Mapping[str, Any][source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: BaseTrial, noisy: bool = True, **kwargs: Any) Result[MapData, MetricFetchE][source]

Fetch data for one trial.

classmethod is_available_while_running() bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

map_key_info: MapKeyInfo[float] = <ax.core.map_data.MapKeyInfo object>
classmethod overwrite_existing_data() bool[source]

Sklearn

class ax.metrics.sklearn.SklearnDataset(value)[source]

Bases: Enum

An enumeration.

BOSTON: str = 'boston'
CANCER: str = 'cancer'
DIGITS: str = 'digits'
class ax.metrics.sklearn.SklearnMetric(name: str, lower_is_better: bool = False, model_type: SklearnModelType = SklearnModelType.RF, dataset: SklearnDataset = SklearnDataset.DIGITS, observed_noise: bool = False, num_folds: int = 5)[source]

Bases: Metric

A metric that trains and evaluates an sklearn model.

The evaluation metric is the k-fold “score”. The scoring function depends on the model type and task type (e.g. classification/regression), but higher scores are better.

See sklearn documentation for supported parameters.

In addition, this metric supports tuning the hidden_layer_size and the number of hidden layers (num_hidden_layers) of a NN model.
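A construction sketch: the cross-validated score of a random forest on the digits dataset, with the fold-based SE reported as observation noise (argument values are illustrative):

    from ax.metrics.sklearn import SklearnDataset, SklearnMetric, SklearnModelType

    metric = SklearnMetric(
        name="cv_score",
        model_type=SklearnModelType.RF,
        dataset=SklearnDataset.DIGITS,
        observed_noise=True,
        num_folds=5,
    )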

clone() SklearnMetric[source]

Create a copy of this Metric.

fetch_trial_data(trial: BaseTrial, noisy: bool = True, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

train_eval(arm: Arm) Tuple[float, float][source]

Train and evaluate model.

Parameters:

arm – An arm specifying the parameters to evaluate.

Returns:

A two-element tuple containing:

  • The average k-fold CV score.

  • The SE of the mean k-fold CV score if observed_noise is True, and ‘nan’ otherwise.

class ax.metrics.sklearn.SklearnModelType(value)[source]

Bases: Enum

An enumeration.

NN: str = 'nn'
RF: str = 'rf'

Tensorboard

class ax.metrics.tensorboard.TensorboardCurveMetric(name: str, curve_name: str, lower_is_better: bool = True, cumulative_best: bool = False, smoothing_window: Optional[int] = None)[source]

Bases: AbstractCurveMetric

A CurveMetric for getting Tensorboard curves.

get_curves_from_ids(ids: Iterable[Union[int, str]], names: Optional[Set[str]] = None) Dict[Union[int, str], Dict[str, pandas.Series]][source]

Get curve data from tensorboard logs.

NOTE: If the ids are not simple paths/posix locations, subclass this metric and replace this method with an appropriate one that retrieves the log results.

Parameters:
  • ids – A list of string paths to tensorboard log directories.

  • names – The names of the tags for which to fetch the curves. If omitted, all tags are returned.

Returns:

A nested dictionary mapping ids (first level) and metric names (second level) to pandas Series of data.

map_key_info: MapKeyInfo[float] = <ax.core.map_data.MapKeyInfo object>
class ax.metrics.tensorboard.TensorboardMetric(name: str, tag: str, lower_is_better: Optional[bool] = True, smoothing: float = 0.6, cumulative_best: bool = False)[source]

Bases: MapMetric

A MapMetric for fetching Tensorboard metrics.
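A construction sketch; tag is the Tensorboard scalar tag to read (the tag string here is illustrative):

    from ax.metrics.tensorboard import TensorboardMetric

    metric = TensorboardMetric(
        name="val_loss", tag="loss/validation", lower_is_better=True, smoothing=0.6
    )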

bulk_fetch_trial_data(trial: BaseTrial, metrics: List[Metric], **kwargs: Any) Dict[str, Result[Data, MetricFetchE]][source]

Fetch multiple metrics data for one trial, using instance attributes of the metrics.

Returns a Dict of metric_name => Result. The default behavior calls fetch_trial_data for each metric. Subclasses should override this to perform trial data computation for multiple metrics.

fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.

classmethod is_available_while_running() bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

map_key_info: MapKeyInfo[float] = <ax.core.map_data.MapKeyInfo object>
ax.metrics.tensorboard.get_tb_from_posix(path: str, tags: Optional[Set[str]] = None) Dict[str, pandas.Series][source]

Get Tensorboard data from a posix path.

Parameters:
  • path – The posix path for the directory that contains the tensorboard logs.

  • tags – The names of the tags for which to fetch the curves. If omitted, all tags are returned.

Returns:

A dictionary mapping tag names to pandas Series of data.

TorchX

class ax.metrics.torchx.TorchXMetric(name: str, lower_is_better: Optional[bool] = None, properties: Optional[Dict[str, Any]] = None)[source]

Bases: Metric

Fetches AppMetric (the observation returned by the trial job/app) via the torchx.tracking module. Assumes that the app used the tracker in the following manner:


    tracker = torchx.runtime.tracking.FsspecResultTracker(tracker_base)
    tracker[str(trial_index)] = {metric_name: value}

    # -- or --
    tracker[str(trial_index)] = {
        "metric_name/mean": mean_value,
        "metric_name/sem": sem_value,
    }
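A construction sketch for the metric itself (the tracker writes shown above happen inside the trial job; the metric name is illustrative and should match the key written by the tracker):

    from ax.metrics.torchx import TorchXMetric

    metric = TorchXMetric(name="accuracy", lower_is_better=False)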

fetch_trial_data(trial: BaseTrial, **kwargs: Any) Result[Data, MetricFetchE][source]

Fetch data for one trial.