ax.utils
Common
Equality
- ax.utils.common.equality.datetime_equals(dt1, dt2)
  Compare equality of two datetimes, ignoring microseconds.
  Return type: bool
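As an illustrative sketch of the documented behavior (not the actual Ax implementation), the comparison can be pictured as zeroing out microseconds before testing equality:

```python
from datetime import datetime

def datetime_equals(dt1, dt2):
    # Sketch: treat two datetimes as equal if they match once
    # microseconds are dropped. None-handling is an assumption here.
    if dt1 is None and dt2 is None:
        return True
    if dt1 is None or dt2 is None:
        return False
    return dt1.replace(microsecond=0) == dt2.replace(microsecond=0)

a = datetime(2020, 1, 1, 12, 0, 0, 111)
b = datetime(2020, 1, 1, 12, 0, 0, 999)
print(datetime_equals(a, b))  # True: only the microseconds differ
```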
- ax.utils.common.equality.equality_typechecker(eq_func)
  A decorator to wrap all __eq__ methods to ensure that the inputs are of the right type.
  Return type: Callable
- ax.utils.common.equality.same_elements(list1, list2)
  Compare equality of two lists of core Ax objects.
  Assumptions:
  - The contents of each list are types that implement __eq__.
  - The lists do not contain duplicates.
  Checking equality is then the same as checking that the lists are the same length, and that one is a subset of the other.
  Return type: bool
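Under those two assumptions, the check can be sketched as a length comparison plus a membership test (a simplified sketch, not Ax's implementation):

```python
def same_elements(list1, list2):
    # With no duplicates, equal length plus "every element of list1
    # appears in list2" implies the lists contain the same elements.
    if len(list1) != len(list2):
        return False
    return all(any(a == b for b in list2) for a in list1)

print(same_elements([1, 2, 3], [3, 2, 1]))  # True: order is ignored
```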
Logger
Docutils
Support functions for Sphinx et al.
- ax.utils.common.docutils.copy_doc(src)
  A decorator that copies the docstring of another object.
  Since Sphinx actually loads the Python modules to grab the docstrings, this works with both Sphinx and the help function.

      class Cat(Mammal):
          @property
          @copy_doc(Mammal.is_feline)
          def is_feline(self) -> bool:
              ...

  Return type: ~_T
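The mechanism can be sketched in a few lines (a simplified sketch, assuming the decorator simply copies `__doc__` onto its target):

```python
def copy_doc(src):
    # Return a decorator that copies src's docstring onto the
    # decorated object, so help() and Sphinx both see it.
    def decorator(obj):
        obj.__doc__ = src.__doc__
        return obj
    return decorator

def original():
    """Original docstring."""

@copy_doc(original)
def duplicate():
    ...

print(duplicate.__doc__)  # Original docstring.
```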
Typeutils
- ax.utils.common.typeutils.checked_cast(typ, val)
  Cast a value to a type (with a runtime safety check).
  Returns the value unchanged and checks its type at runtime. This signals to the typechecker that the value has the designated type.
  Like typing.cast, checked_cast performs no runtime conversion on its argument; but, unlike typing.cast, checked_cast will throw an error if the value is not of the expected type. The type passed as an argument should be a Python class.
  Parameters:
  - typ (Type[T]) – the type to cast to
  - val (V) – the value that we are casting
  Return type: T
  Returns: the val argument, unchanged
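Behavior can be sketched as an isinstance guard (a sketch only; the exact exception type Ax raises is an assumption here):

```python
def checked_cast(typ, val):
    # Return val unchanged, but fail loudly at runtime if it is
    # not an instance of typ. No conversion is performed.
    if not isinstance(val, typ):
        raise ValueError(f"Value was not of type {typ}: {val}")
    return val

x = checked_cast(int, 5)  # returns 5; type checkers now treat x as int
```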
- ax.utils.common.typeutils.checked_cast_list(typ, l)
  Calls checked_cast on all items in a list.
  Return type: List[T]
- ax.utils.common.typeutils.checked_cast_optional(typ, val)
  Calls checked_cast only if value is not None.
  Return type: Optional[T]
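Both wrappers reduce to small combinators over the documented checked_cast behavior (a self-contained sketch, with a stand-in checked_cast and an assumed error type):

```python
def checked_cast(typ, val):
    # Stand-in for the documented runtime type check.
    if not isinstance(val, typ):
        raise ValueError(f"Value was not of type {typ}: {val}")
    return val

def checked_cast_list(typ, l):
    # Apply the check element-wise; any bad element raises.
    return [checked_cast(typ, item) for item in l]

def checked_cast_optional(typ, val):
    # None passes through untouched; anything else is checked.
    return None if val is None else checked_cast(typ, val)
```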
Notebook
Report
Render
- ax.utils.report.render.list_item_html(text)
  Embed text in list element tag.
  Return type: str
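A minimal sketch of the documented behavior (the exact formatting Ax uses is an assumption):

```python
def list_item_html(text):
    # Wrap the text in an HTML list-item tag for use in report bodies.
    return f"<li>{text}</li>"

print(list_item_html("First result"))  # <li>First result</li>
```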
- ax.utils.report.render.render_report_elements(experiment_name, html_elements, header=True, offline=False)
  Generate Ax HTML report for a given experiment from HTML elements.
  Uses Jinja2 for templating. Injects Plotly JS for graph rendering.

  Example:
      html_elements = [
          h2_html("Subsection with plot"),
          p_html("This is an example paragraph."),
          plot_html(plot_fitted(gp_model, 'perf_metric')),
          h2_html("Subsection with table"),
          pandas_html(data.df),
      ]
      html = render_report_elements('My experiment', html_elements)

  Parameters:
  - experiment_name (str) – the name of the experiment to use for title.
  - html_elements (List[str]) – list of HTML strings to render in report body.
  - header (bool) – if True, render experiment title as a header. Meant to be used for standalone reports (e.g. via email), as opposed to served on the front-end.
  - offline (bool) – if True, entire Plotly library is bundled with report.
  Returns: HTML string.
  Return type: str
Stats
Statstools
- ax.utils.stats.statstools.agresti_coull_sem(n_numer, n_denom, prior_successes=2, prior_failures=2)
  Compute the Agresti-Coull style standard error for a binomial proportion.
  Reference: Agresti, Alan, and Brent A. Coull. "Approximate Is Better than 'Exact' for Interval Estimation of Binomial Proportions." The American Statistician, vol. 52, no. 2, 1998, pp. 119-126. JSTOR, www.jstor.org/stable/2685469.
  Return type: Union[ndarray, float]
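The standard Agresti-Coull adjustment adds pseudo-observations before computing the standard error; a sketch consistent with the signature above (not guaranteed to match Ax's implementation detail for detail):

```python
import numpy as np

def agresti_coull_sem(n_numer, n_denom, prior_successes=2, prior_failures=2):
    # Add pseudo successes/failures, then compute the SEM of the
    # adjusted proportion p~ = (successes + priors) / (n + priors).
    n_tilde = n_denom + prior_successes + prior_failures
    p_tilde = (np.asarray(n_numer) + prior_successes) / n_tilde
    return np.sqrt(p_tilde * (1 - p_tilde) / n_tilde)

sem = agresti_coull_sem(10, 100)  # 10 successes out of 100 trials
```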
- ax.utils.stats.statstools.inverse_variance_weight(means, variances, conflicting_noiseless='warn')
  Perform inverse variance weighting.
  Parameters:
  - means (ndarray) – The means of the observations.
  - variances (ndarray) – The variances of the observations.
  - conflicting_noiseless (str) – How to handle the case of multiple observations with zero variance but different means. Options are "warn" (default), "ignore", or "raise".
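The core computation can be sketched as follows (a simplified sketch that ignores the conflicting_noiseless handling for zero-variance observations):

```python
import numpy as np

def inverse_variance_weight(means, variances):
    # Weight each observation by 1/variance; the combined variance
    # is the reciprocal of the total weight.
    weights = 1.0 / np.asarray(variances, dtype=float)
    combined_mean = np.sum(weights * np.asarray(means)) / np.sum(weights)
    combined_var = 1.0 / np.sum(weights)
    return combined_mean, combined_var
```

With equal variances this reduces to a plain average; lower-variance observations pull the combined mean toward themselves.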
- ax.utils.stats.statstools.marginal_effects(df)
  This method calculates the relative (in %) change in the outcome achieved by using any individual factor level versus randomizing across all factor levels. It does this by estimating a baseline under the experiment by marginalizing over all factors/levels. For each factor level, it then conditions on that level for the individual factor and marginalizes over all levels for all other factors.
  Parameters: df (DataFrame) – Dataframe containing columns named mean and sem. All other columns are assumed to be factors for which to calculate marginal effects.
  Return type: DataFrame
  Returns: A dataframe containing columns "Name", "Level", "Beta", and "SE", corresponding to the factor, level, effect, and standard error. Results are relativized as percentage changes.
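A rough illustrative sketch of the marginalization idea (column names follow the docstring; the SE line is a placeholder and the real implementation propagates uncertainty more carefully):

```python
import pandas as pd

def marginal_effects_sketch(df):
    # Baseline: marginalize over all factors/levels (grand mean).
    baseline = df["mean"].mean()
    rows = []
    factors = [c for c in df.columns if c not in ("mean", "sem")]
    for factor in factors:
        for level, group in df.groupby(factor):
            # Condition on this level, marginalize over everything else,
            # and relativize against the baseline (in %).
            beta = 100 * (group["mean"].mean() - baseline) / baseline
            se = 100 * group["sem"].mean() / baseline  # placeholder only
            rows.append({"Name": factor, "Level": level,
                         "Beta": beta, "SE": se})
    return pd.DataFrame(rows)
```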
- ax.utils.stats.statstools.positive_part_james_stein(means, sems)
  Estimation method for the positive-part James-Stein estimator.
  This method takes a vector of K means (y_i) and standard errors (sigma_i) and calculates the positive-part James-Stein estimator.
  Resulting estimates are the shrunk means and standard errors. The positive-part James-Stein estimator shrinks each constituent average to the grand average:

      y_i - phi_i * y_i + phi_i * ybar

  The variable phi_i determines the amount of shrinkage. For phi_i = 1, mu_hat is equal to ybar (the mean of all y_i), while for phi_i = 0, mu_hat is equal to y_i. It can be shown that restricting phi_i <= 1 dominates the unrestricted estimator, so this method restricts phi_i in this manner. The amount of shrinkage, phi_i, is determined by:

      (K - 3) * sigma2_i / s2

  That is, less shrinkage is applied when individual means are estimated with greater precision, and more shrinkage is applied when individual means are very tightly clustered together. We also restrict phi_i to never be larger than 1.
  The variance of the mean estimator is:

      (1 - phi_i) * sigma2_i + phi_i * sigma2_i / K + 2 * phi_i^2 * (y_i - ybar)^2 / (K - 3)

  The first term is the variance component from y_i, the second term is the contribution from the mean of all y_i, and the third term is the contribution from the uncertainty in the sum of squared deviations of y_i from the mean of all y_i.
  For more information, see https://fburl.com/empirical_bayes.
  Returns:
  - mu_hat_i: Empirical Bayes estimate of each arm's mean
  - sem_i: Empirical Bayes estimate of each arm's sem
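The formulas above can be transcribed directly (a simplified sketch, not Ax's implementation; it assumes K > 3 and nonzero spread among the means):

```python
import numpy as np

def positive_part_james_stein(means, sems):
    y = np.asarray(means, dtype=float)
    sigma2 = np.asarray(sems, dtype=float) ** 2
    K = len(y)
    ybar = y.mean()
    s2 = np.sum((y - ybar) ** 2)                   # spread of the means
    phi = np.minimum(1.0, (K - 3) * sigma2 / s2)   # shrinkage factors
    mu_hat = (1 - phi) * y + phi * ybar            # shrunk means
    sem_hat = np.sqrt((1 - phi) * sigma2
                      + phi * sigma2 / K
                      + 2 * phi ** 2 * (y - ybar) ** 2 / (K - 3))
    return mu_hat, sem_hat
```

With zero standard errors, phi_i = 0 for every arm and the estimates are the raw means, matching the "no shrinkage under perfect precision" limit in the text.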
- ax.utils.stats.statstools.relativize(means_t, sems_t, mean_c, sem_c, bias_correction=True, cov_means=0.0, as_percent=False)
  Ratio estimator based on the delta method.
  This uses the delta method (i.e. a Taylor series approximation) to estimate the mean and standard deviation of the sampling distribution of the ratio between test and control – that is, the sampling distribution of an estimator of the true population value under the assumption that the means in test and control have a known covariance:

      (mu_t / mu_c) - 1.

  Under a second-order Taylor expansion, the sampling distribution of the relative change in empirical means, which is m_t / m_c - 1, is approximately normally distributed with mean

      [(mu_t - mu_c) / mu_c] - [(sigma_c)^2 * mu_t] / (mu_c)^3

  and variance

      (sigma_t / mu_c)^2 - 2 * mu_t * sigma_tc / mu_c^3 + [(sigma_c * mu_t)^2 / (mu_c)^4]

  as the higher terms are assumed to be close to zero in the full Taylor series. To estimate these parameters, we plug in the empirical means and standard errors. This gives us the estimators:

      [(m_t - m_c) / m_c] - [(s_c)^2 * m_t] / (m_c)^3

  and

      (s_t / m_c)^2 - 2 * m_t * s_tc / m_c^3 + [(s_c * m_t)^2 / (m_c)^4]

  Note that the delta method does NOT take as input the empirical standard deviation of a metric, but rather the standard error of the mean of that metric – that is, the standard deviation of the metric after division by the square root of the total number of observations.
  Parameters:
  - means_t (Union[ndarray, List[float], float]) – Sample means (test)
  - sems_t (Union[ndarray, List[float], float]) – Sample standard errors of the means (test)
  - mean_c (float) – Sample mean (control)
  - sem_c (float) – Sample standard error of the mean (control)
  - cov_means (Union[ndarray, List[float], float]) – Sample covariance between test and control
  - as_percent (bool) – If true, return results in percent (* 100)
  Returns:
  - rel_hat: Inferred means of the sampling distribution of the relative change (mean_t / mean_c) - 1
  - sem_hat: Inferred standard deviation of the sampling distribution of rel_hat – i.e. the standard error.
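The delta-method estimators from the docstring can be sketched for the scalar case (a sketch only; the real function also accepts arrays/lists, and the assumption that bias_correction toggles the second-order mean term is ours):

```python
def relativize(mean_t, sem_t, mean_c, sem_c, bias_correction=True,
               cov_means=0.0, as_percent=False):
    # Point estimate: [(m_t - m_c) / m_c] - [(s_c)^2 * m_t] / (m_c)^3
    rel_hat = (mean_t - mean_c) / mean_c
    if bias_correction:
        rel_hat -= mean_t * sem_c ** 2 / mean_c ** 3
    # Variance: (s_t/m_c)^2 - 2*m_t*s_tc/m_c^3 + (s_c*m_t)^2/m_c^4
    var = ((sem_t / mean_c) ** 2
           - 2 * mean_t * cov_means / mean_c ** 3
           + (sem_c * mean_t) ** 2 / mean_c ** 4)
    sem_hat = var ** 0.5
    if as_percent:
        return rel_hat * 100, sem_hat * 100
    return rel_hat, sem_hat
```

With noiseless inputs (sem_t = sem_c = 0) this collapses to the plain relative change (mean_t / mean_c) - 1, as expected.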