ax.metrics

Branin

class ax.metrics.branin.AugmentedBraninMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.branin.BraninMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.
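
As an example, the underlying deterministic Branin function can be evaluated directly through f. A minimal sketch, assuming illustrative parameter names "x1"/"x2" (with noise_sd=0.0 the metric is deterministic):

    import numpy as np
    from ax.metrics.branin import BraninMetric

    # Illustrative parameter names; noise_sd=0.0 keeps fetched values deterministic.
    branin = BraninMetric(name="branin", param_names=["x1", "x2"], noise_sd=0.0)

    # Evaluate the Branin function at one of its known global minimizers,
    # (pi, 2.275); the global minimum is approximately 0.3979.
    y = branin.f(np.array([np.pi, 2.275]))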

class ax.metrics.branin.NegativeBraninMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.branin.BraninMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

Factorial

class ax.metrics.factorial.FactorialMetric(name: str, coefficients: Dict[str, Dict[Union[str, bool, float, int, None], float]], batch_size: int = 10000, noise_var: float = 0.0)[source]

Bases: ax.core.metric.Metric

Metric for testing factorial designs, assuming a main-effects-only logit model.

clone() → ax.metrics.factorial.FactorialMetric[source]

Create a copy of this Metric.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, **kwargs: Any) → ax.core.data.Data[source]

Fetch data for one trial.

classmethod is_available_while_running() → bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for metrics of the given class) whenever it is available. Data is cached on the experiment when it is attached via experiment.attach_data.
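
A minimal construction sketch; the factor names, levels, and coefficient values below are purely illustrative:

    from ax.metrics.factorial import FactorialMetric

    # Outer keys are parameter names; inner keys are parameter values (levels),
    # mapped to their main-effect coefficients on the logit scale.
    metric = FactorialMetric(
        name="success_rate",
        coefficients={
            "factor1": {"level_a": 0.1, "level_b": -0.1},
            "factor2": {True: 0.5, False: -0.5},
        },
        batch_size=10_000,
        noise_var=0.0,
    )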

ax.metrics.factorial.evaluation_function(parameterization: Dict[str, Union[str, bool, float, int, None]], coefficients: Dict[str, Dict[Union[str, bool, float, int, None], float]], weight: float = 1.0, batch_size: int = 10000, noise_var: float = 0.0) → Tuple[float, float][source]
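
A hedged usage sketch, reusing the illustrative coefficients from above; the returned Tuple[float, float] is treated here as a (mean, SEM) pair, which is an assumption:

    from ax.metrics.factorial import evaluation_function

    # Assumed return semantics: a (mean, SEM) pair under the simulated
    # main-effects logit model.
    mean, sem = evaluation_function(
        parameterization={"factor1": "level_a", "factor2": True},
        coefficients={
            "factor1": {"level_a": 0.1, "level_b": -0.1},
            "factor2": {True: 0.5, False: -0.5},
        },
        weight=1.0,
        batch_size=10_000,
        noise_var=0.0,
    )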

Hartmann6

class ax.metrics.hartmann6.AugmentedHartmann6Metric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

class ax.metrics.hartmann6.Hartmann6Metric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.
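
A minimal sketch with illustrative parameter names; the Hartmann6 function is defined on six inputs and its global minimum is approximately -3.3224:

    import numpy as np
    from ax.metrics.hartmann6 import Hartmann6Metric

    hartmann6 = Hartmann6Metric(
        name="hartmann6",
        param_names=["x1", "x2", "x3", "x4", "x5", "x6"],
        noise_sd=0.0,
        lower_is_better=True,
    )

    # Evaluate the deterministic function near the commonly cited global
    # minimizer; the minimum value is approximately -3.3224.
    y = hartmann6.f(
        np.array([0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573])
    )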

L2 Norm

class ax.metrics.l2norm.L2NormMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.metrics.noisy_function.NoisyFunctionMetric

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.
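
A minimal sketch, assuming (as the name suggests) that f returns the Euclidean norm of the selected parameter values; commonly used as an outcome-constraint metric in synthetic benchmarks:

    import numpy as np
    from ax.metrics.l2norm import L2NormMetric

    l2 = L2NormMetric(name="l2norm", param_names=["x1", "x2"], lower_is_better=True)

    # Assuming f is the Euclidean norm, this evaluates to 5.0 for (3, 4).
    y = l2.f(np.array([3.0, 4.0]))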

Noisy Functions

class ax.metrics.noisy_function.NoisyFunctionMetric(name: str, param_names: List[str], noise_sd: float = 0.0, lower_is_better: Optional[bool] = None)[source]

Bases: ax.core.metric.Metric

A metric defined by a generic deterministic function, with normal noise of mean 0 and standard deviation noise_sd added to the result.

clone() → ax.metrics.noisy_function.NoisyFunctionMetric[source]

Create a copy of this Metric.

f(x: numpy.ndarray) → float[source]

The deterministic function that produces the metric outcomes.

fetch_trial_data(trial: ax.core.base_trial.BaseTrial, noisy: bool = True, **kwargs: Any) → ax.core.data.Data[source]

Fetch data for one trial.

classmethod is_available_while_running() → bool[source]

Whether metrics of this class are available while the trial is running. Metrics that are not available while the trial is running are assumed to be available only upon trial completion. For such metrics, data is assumed to never change once the trial is completed.

NOTE: If this method returns False, data-fetching via experiment.fetch_data will return the data cached on the experiment (for metrics of the given class) whenever it is available. Data is cached on the experiment when it is attached via experiment.attach_data.
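
In practice, a synthetic metric can be defined by subclassing NoisyFunctionMetric and overriding f. A minimal sketch, assuming the class name and parameter names below are illustrative and that f receives the values of param_names in order:

    import numpy as np
    from ax.metrics.noisy_function import NoisyFunctionMetric

    class SphereMetric(NoisyFunctionMetric):
        """Illustrative metric: sum of squares of the selected parameters."""

        def f(self, x: np.ndarray) -> float:
            # Deterministic outcome; N(0, noise_sd**2) noise is added when
            # trial data is fetched with noisy=True.
            return float((x ** 2).sum())

    metric = SphereMetric(
        name="sphere",
        param_names=["x1", "x2", "x3"],  # parameter values passed to f, in order
        noise_sd=0.1,
        lower_is_better=True,
    )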