ax.metrics

Branin

class ax.metrics.branin.AugmentedBraninMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.branin.BraninMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.branin.NegativeBraninMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.branin.BraninMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.
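The three classes above evaluate variants of the Branin test function. For reference, here is a plain-Python sketch of the standard Branin formula; this is the common benchmark definition, assumed (not verified against the Ax source) to match what BraninMetric.f computes:

```python
import math

def branin(x1: float, x2: float) -> float:
    """Standard Branin function on [-5, 10] x [0, 15].

    Its global minimum value is approximately 0.397887, attained at
    (-pi, 12.275), (pi, 2.275), and (9.42478, 2.475).
    """
    b = 5.1 / (4 * math.pi ** 2)
    c = 5 / math.pi
    t = 1 / (8 * math.pi)
    return (x2 - b * x1 ** 2 + c * x1 - 6) ** 2 + 10 * (1 - t) * math.cos(x1) + 10
```

NegativeBraninMetric presumably negates this value (useful for exercising maximization code paths), and AugmentedBraninMetric is the multi-fidelity variant of the benchmark.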

Branin Map

class ax.metrics.branin_map.BraninFidelityMapMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None, cache_evaluations: bool = True)[source]

Bases: ax.metrics.noisy_function_map.NoisyFunctionMapMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.map_data.MapData[source]

Fetch data for one trial.

class ax.metrics.branin_map.BraninIncrementalTimestampMapMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None, rate: Optional[float] = None)[source]

Bases: ax.metrics.branin_map.BraninTimestampMapMetric

classmethod combine_with_last_data() → bool[source]

Indicates whether, when attaching data, we should merge the new dataframe into the most recently attached dataframe.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.map_data.MapData[source]

Fetch data for one trial.

classmethod overwrite_existing_data() → bool[source]

Indicates whether, when attaching data, we should overwrite all previously attached data with the new dataframe.

class ax.metrics.branin_map.BraninTimestampMapMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None, rate: Optional[float] = None)[source]

Bases: ax.metrics.noisy_function_map.NoisyFunctionMapMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.map_data.MapData[source]

Fetch data for one trial.

ax.metrics.branin_map.random() → x in the interval [0, 1).

Chemistry

Classes for optimizing yields from chemical reactions.

References

Perera2018

D. Perera, J. W. Tucker, S. Brahmbhatt, C. Helal, A. Chong, W. Farrell, P. Richardson, N. W. Sach. A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow. Science, 26. 2018.

Shields2021

B. J. Shields, J. Stevens, J. Li, et al. Bayesian reaction optimization as a tool for chemical synthesis. Nature 590, 89–96 (2021).

“SUZUKI” involves optimizing solvent, ligand, and base combinations in a Suzuki-Miyaura coupling to maximize carbon-carbon bond formation. See [Perera2018] for details.

“DIRECT_ARYLATION” involves optimizing the solvent, base, and ligand chemicals as well as the temperature and concentration for a direct arylation reaction. See [Shields2021] for details.

class ax.metrics.chemistry.ChemistryData(param_names: 'List[str]', objective_dict: 'Dict[Tuple[TParamValue, ...], float]')[source]

Bases: object

evaluate(params: Dict[str, Optional[Union[str, bool, float, int]]]) → float[source]

objective_dict: Dict[Tuple[Optional[Union[str, bool, float, int]], ...], float]

param_names: List[str]
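Given the signature, ChemistryData.evaluate presumably indexes the precomputed objective_dict (reaction yields from the benchmark datasets) by the tuple of parameter values. A hypothetical sketch, not the Ax source:

```python
def evaluate_lookup(params, param_names, objective_dict):
    """Look up the objective for a parameterization in a precomputed table.

    Illustrative stand-in for ChemistryData.evaluate: build the lookup key
    by taking the parameter values in param_names order.
    """
    key = tuple(params[name] for name in param_names)
    return objective_dict[key]
```

The parameter names and table contents here are invented for illustration; the real data comes from the Perera2018 and Shields2021 screening datasets.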
class ax.metrics.chemistry.ChemistryMetric(name: str, noiseless: bool = False, problem_type: ax.metrics.chemistry.ChemistryProblemType = <ChemistryProblemType.SUZUKI: 'suzuki'>)[source]

Bases: ax.core.metric.Metric

clone() → ax.metrics.chemistry.ChemistryMetric[source]

Create a copy of this Metric.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, **kwargs: Any) → ax.core.data.Data[source]

Fetch data for one trial.

class ax.metrics.chemistry.ChemistryProblemType(value)[source]

Bases: enum.Enum

An enumeration.

DIRECT_ARYLATION: str = 'direct_arylation'
SUZUKI: str = 'suzuki'

Factorial

class ax.metrics.factorial.FactorialMetric(name: str, coefficients: Dict[str, Dict[Optional[Union[str, bool, float, int]], float]], batch_size: int = 10000, noise_var: float = 0.0)[source]

Bases: ax.core.metric.Metric

Metric for testing factorial designs assuming a main effects only logit model.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, **kwargs: Any) → ax.core.data.Data[source]

Fetch data for one trial.

classmethod is_available_while_running() → bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

ax.metrics.factorial.evaluation_function(parameterization: Dict[str, Optional[Union[str, bool, float, int]]], coefficients: Dict[str, Dict[Optional[Union[str, bool, float, int]], float]], weight: float = 1.0, batch_size: int = 10000, noise_var: float = 0.0) → Tuple[float, float][source]
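The main-effects-only logit model that FactorialMetric assumes can be sketched as follows: sum one coefficient per factor level, then pass the total through the logistic function to get an outcome probability. The names below are illustrative; this is not the Ax implementation:

```python
import math

def logit_prob(parameterization, coefficients):
    """Main-effects-only logit model: P(y=1 | params).

    coefficients maps each factor name to a dict from level value to its
    main-effect coefficient; interactions are assumed absent.
    """
    z = sum(coefficients[name][value] for name, value in parameterization.items())
    return 1.0 / (1.0 + math.exp(-z))
```

With all coefficients at zero the linear predictor is zero and the probability is exactly 0.5; the actual metric then presumably simulates batch_size Bernoulli draws at this probability to produce a noisy observed rate.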

Hartmann6

class ax.metrics.hartmann6.AugmentedHartmann6Metric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.hartmann6.Hartmann6Metric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.
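As with Branin, a plain-Python sketch of the standard Hartmann6 benchmark formula may help; the constants below are the common published ones, assumed (not verified against the Ax source) to match Hartmann6Metric.f:

```python
import math

# Standard Hartmann6 constants from the common benchmark definition.
ALPHA = [1.0, 1.2, 3.0, 3.2]
A = [
    [10, 3, 17, 3.5, 1.7, 8],
    [0.05, 10, 17, 0.1, 8, 14],
    [3, 3.5, 1.7, 10, 17, 8],
    [17, 8, 0.05, 10, 0.1, 14],
]
P = [
    [0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886],
    [0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991],
    [0.2348, 0.1451, 0.3522, 0.2883, 0.3047, 0.6650],
    [0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381],
]

def hartmann6(x):
    """Hartmann6 on [0, 1]^6; global minimum value is about -3.32237."""
    return -sum(
        ALPHA[i] * math.exp(-sum(A[i][j] * (x[j] - P[i][j]) ** 2 for j in range(6)))
        for i in range(4)
    )
```

AugmentedHartmann6Metric is the multi-fidelity variant, which adds a seventh fidelity parameter to this base function.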

L2 Norm

class ax.metrics.l2norm.L2NormMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.
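Based on the class name, L2NormMetric.f presumably returns the Euclidean norm of the parameter vector, i.e. something along these lines (a sketch, not the Ax source):

```python
import math

def l2norm(x):
    """Euclidean (L2) norm of a parameter vector."""
    return math.sqrt(sum(v * v for v in x))
```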

Noisy Functions

class ax.metrics.noisy_function.GenericNoisyFunctionMetric(name: str, f: Callable[[Dict[str, Optional[Union[str, bool, float, int]]]], float], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

clone() → ax.metrics.noisy_function.GenericNoisyFunctionMetric[source]

Create a copy of this Metric.

property param_names

class ax.metrics.noisy_function.NoisyFunctionMetric(name: str, param_names: List[str], noise_sd: Optional[float] = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.core.metric.Metric

A metric defined by a generic deterministic function, with normal noise of mean 0 and scale noise_sd added to the result.

clone() → ax.metrics.noisy_function.NoisyFunctionMetric[source]

Create a copy of this Metric.

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.data.Data[source]

Fetch data for one trial.

classmethod is_available_while_running() → bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.
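The contract described above can be illustrated with a minimal stand-in: subclasses override f, and each fetched observation is f(x) plus Gaussian noise with scale noise_sd. This is a sketch of the behavior, not the Ax class:

```python
import random

class NoisyFunction:
    """Minimal stand-in for the NoisyFunctionMetric contract.

    Subclasses override f; evaluate() returns f(x) with N(0, noise_sd)
    noise added when noisy=True.
    """

    def __init__(self, noise_sd: float = 0.0):
        self.noise_sd = noise_sd

    def f(self, x):
        raise NotImplementedError

    def evaluate(self, x, noisy: bool = True) -> float:
        value = self.f(x)
        if noisy and self.noise_sd:
            value += random.gauss(0.0, self.noise_sd)
        return value

class SumMetric(NoisyFunction):
    """Hypothetical subclass: the deterministic outcome is the sum of x."""
    def f(self, x):
        return sum(x)
```

In Ax itself the noisy evaluation happens inside fetch_trial_data, which also records the noise_sd as the observation's SEM; the sketch collapses that into a single evaluate call.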

Noisy Function Map

class ax.metrics.noisy_function_map.NoisyFunctionMapMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None, cache_evaluations: bool = True)[source]

Bases: ax.core.map_metric.MapMetric

A metric defined by a generic deterministic function, with normal noise of mean 0 and scale noise_sd added to the result.

clone() → ax.metrics.noisy_function_map.NoisyFunctionMapMetric[source]

Create a copy of this Metric.

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.map_data.MapData[source]

Fetch data for one trial.

classmethod is_available_while_running() → bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for the metrics of the given class) whenever it is available. Data is cached on the experiment when attached via experiment.attach_data.

classmethod overwrite_existing_data() → bool[source]

Indicates whether, when attaching data, we should overwrite all previously attached data with the new dataframe.

Sklearn

class ax.metrics.sklearn.SklearnDataset(value)[source]

Bases: enum.Enum

An enumeration.

BOSTON: str = 'boston'
CANCER: str = 'cancer'
DIGITS: str = 'digits'

class ax.metrics.sklearn.SklearnMetric(name: str, lower_is_better: bool = False, model_type: ax.metrics.sklearn.SklearnModelType = <SklearnModelType.RF: 'rf'>, dataset: ax.metrics.sklearn.SklearnDataset = <SklearnDataset.DIGITS: 'digits'>, observed_noise: bool = False, num_folds: int = 5)[source]

Bases: ax.core.metric.Metric

A metric that trains and evaluates an sklearn model.

The evaluation metric is the k-fold “score”. The scoring function depends on the model type and task type (e.g. classification/regression), but higher scores are better.

See sklearn documentation for supported parameters.

In addition, this metric supports tuning the hidden_layer_size and the number of hidden layers (num_hidden_layers) of a NN model.

clone() → ax.metrics.sklearn.SklearnMetric[source]

Create a copy of this Metric.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.data.Data[source]

Fetch data for one trial.

train_eval(arm: ax.core.arm.Arm) → Tuple[float, float][source]

Train and evaluate model.

Parameters

arm – An arm specifying the parameters to evaluate.

Returns

A two-element tuple containing:

  • The average k-fold CV score

  • The SE of the mean k-fold CV score if observed_noise is True, and 'nan' otherwise
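The return convention can be sketched in plain Python: the mean of the k fold scores, plus the standard error of that mean when observed_noise is True. This is illustrative only, not the Ax implementation:

```python
import math
import statistics

def cv_summary(scores, observed_noise=True):
    """Summarize k-fold CV scores as (mean, SEM).

    SEM is the sample standard deviation divided by sqrt(k); when
    observed_noise is False the noise level is reported as NaN instead.
    """
    mean = statistics.fmean(scores)
    if observed_noise:
        se = statistics.stdev(scores) / math.sqrt(len(scores))
    else:
        se = float("nan")
    return mean, se
```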

class ax.metrics.sklearn.SklearnModelType(value)[source]

Bases: enum.Enum

An enumeration.

NN: str = 'nn'
RF: str = 'rf'