{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Developer API Example on Hartmann6\n", "\n", "The Developer API is suitable when the user wants maximal customization of the optimization loop. This tutorial demonstrates optimization of a Hartmann6 function using the `SimpleExperiment` construct, which we use for synchronous experiments, where trials can be evaluated right away.\n", "\n", "For more details on the different Ax constructs, see the \"Building Blocks of Ax\" tutorial." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "[INFO 12-26 23:31:35] ipy_plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.\n" ] }, { "data": { "text/html": [ " \n", " " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import numpy as np\n", "from ax import (\n", " ComparisonOp,\n", " ParameterType, \n", " RangeParameter,\n", " SearchSpace, \n", " SimpleExperiment, \n", " OutcomeConstraint, \n", ")\n", "from ax.metrics.l2norm import L2NormMetric\n", "from ax.modelbridge.registry import Models\n", "from ax.plot.contour import plot_contour\n", "from ax.plot.trace import optimization_trace_single_method\n", "from ax.utils.measurement.synthetic_functions import hartmann6\n", "from ax.utils.notebook.plotting import render, init_notebook_plotting\n", "\n", "init_notebook_plotting()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Define evaluation function\n", "\n", "First, we define an evaluation function that is able to compute all the metrics needed for this experiment. This function needs to accept a set of parameter values and can also accept a weight. It should produce a dictionary of metric names to tuples of mean and standard error for those metrics. 
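The evaluation function below calls `hartmann6` from `ax.utils.measurement.synthetic_functions`. For reference, here is a plain-Python sketch of the textbook Hartmann6 definition (an illustration of the standard formula, not Ax's actual implementation):

```python
import math

# Standard Hartmann6 constants (an illustration; Ax ships its own
# implementation in ax.utils.measurement.synthetic_functions).
ALPHA = (1.0, 1.2, 3.0, 3.2)
A = (
    (10.0, 3.0, 17.0, 3.5, 1.7, 8.0),
    (0.05, 10.0, 17.0, 0.1, 8.0, 14.0),
    (3.0, 3.5, 1.7, 10.0, 17.0, 8.0),
    (17.0, 8.0, 0.05, 10.0, 0.1, 14.0),
)
P = (
    (0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886),
    (0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991),
    (0.2348, 0.1451, 0.3522, 0.2883, 0.3047, 0.6650),
    (0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381),
)

def hartmann6(x):
    # f(x) = -sum_i alpha_i * exp(-sum_j A_ij * (x_j - P_ij)^2)
    return -sum(
        alpha * math.exp(-sum(a * (xj - p) ** 2 for a, xj, p in zip(A_i, x, P_i)))
        for alpha, A_i, P_i in zip(ALPHA, A, P)
    )
```

The function's known global minimum on the unit cube is approximately -3.32237, which the optimization in this tutorial should approach.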
Note that when using `Experiment` (instead of `SimpleExperiment`), it's possible to deploy trials and fetch their evaluation results asynchronously; more on that in the "Building Blocks of Ax" tutorial." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def hartmann_evaluation_function(\n", " parameterization, # Mapping of parameter names to values of those parameters.\n", " weight=None, # Optional weight argument.\n", "):\n", " x = np.array([parameterization.get(f\"x{i}\") for i in range(6)])\n", " # In our case, standard error is 0, since we are computing a synthetic function.\n", " return {\"hartmann6\": (hartmann6(x), 0.0), \"l2norm\": (np.sqrt((x ** 2).sum()), 0.0)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If there is only one metric in the experiment – the objective – then the evaluation function can return a single tuple of mean and SEM, in which case Ax will assume that the evaluation corresponds to the objective. It can also return only the mean as a float, in which case Ax will treat the SEM as unknown and use a model that can infer it. For more details on evaluation functions, refer to the \"Trial Evaluation\" section in the docs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Create Search Space\n", "\n", "Second, we create a search space, which defines the type and allowed range of each parameter." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "hartmann_search_space = SearchSpace(\n", " parameters=[\n", " RangeParameter(\n", " name=f\"x{i}\", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0\n", " )\n", " for i in range(6)\n", " ]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Create Experiment\n", "\n", "Third, we make a `SimpleExperiment`. 
In addition to the search space and evaluation function, here we define the `objective_name` and `outcome_constraints`.\n", "\n", "When doing the optimization, we will find points that minimize the objective while obeying the constraints (which in this case means `l2norm <= 1.25`)." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "exp = SimpleExperiment(\n", " name=\"test_hartmann6\",\n", " search_space=hartmann_search_space,\n", " evaluation_function=hartmann_evaluation_function,\n", " objective_name=\"hartmann6\",\n", " minimize=True,\n", " outcome_constraints=[\n", " OutcomeConstraint(\n", " metric=L2NormMetric(\n", " name=\"l2norm\", param_names=[f\"x{i}\" for i in range(6)], noise_sd=0.2\n", " ),\n", " op=ComparisonOp.LEQ,\n", " bound=1.25,\n", " relative=False,\n", " )\n", " ],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Perform Optimization\n", "\n", "Run the optimization using the settings defined on the experiment. We will generate 5 quasi-random Sobol points for exploration, followed by 25 points generated using the GP+EI optimizer."
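The outcome constraint defined above requires a feasible point to satisfy `l2norm <= 1.25`. As a minimal plain-Python sketch of that feasibility check (illustrative only; in Ax the constraint is evaluated through the `L2NormMetric` and the fitted model):

```python
import math

# Feasibility check mirroring the outcome constraint above:
# l2norm(x) <= 1.25 (ComparisonOp.LEQ, absolute bound, not relative).
BOUND = 1.25

def l2norm(x):
    # Euclidean norm of the parameter vector.
    return math.sqrt(sum(v * v for v in x))

def is_feasible(x, bound=BOUND):
    return l2norm(x) <= bound
```

For example, the center of the unit cube (0.5, ..., 0.5) has norm sqrt(1.5) ≈ 1.225 and is feasible, while the corner (1, ..., 1) has norm sqrt(6) ≈ 2.449 and is not.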
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running Sobol initialization trials...\n", "Running GP+EI optimization trial 1/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 2/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 3/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 4/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 5/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 6/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 7/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 8/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 9/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 10/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 11/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 12/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 13/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 14/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 15/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 16/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 17/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 18/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 19/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 20/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 21/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 22/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 23/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 24/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running GP+EI optimization trial 25/25...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Done!\n" ] } ], "source": [ "print(\"Running Sobol initialization trials...\")\n", "sobol = Models.SOBOL(exp.search_space)\n", "for i in range(5):\n", " exp.new_trial(generator_run=sobol.gen(1))\n", " \n", "for i in range(25):\n", " print(f\"Running GP+EI optimization trial {i+1}/25...\")\n", " # Reinitialize the GP+EI model at each step with the updated data.\n", " gpei = Models.BOTORCH(experiment=exp, data=exp.eval())\n", " exp.new_trial(generator_run=gpei.gen(1))\n", " \n", "print(\"Done!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Inspect trials' data\n", "\n", "Now we can inspect the `SimpleExperiment`'s data by calling `eval()`, which retrieves evaluation data for all trials of the experiment.\n", "\n", "We can also use the `eval_trial` function to get evaluation data for a specific trial in the experiment, like so:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "|   | arm_name | metric_name | mean | sem | trial_index |\n", "
|---|---|---|---|---|---|
\n", "| 0 | 1_0 | hartmann6 | -0.048067 | 0.0 | 1 |\n", "
\n", "| 1 | 1_0 | l2norm | 1.438130 | 0.0 | 1 |\n", "
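The output above is a pandas DataFrame with columns `arm_name`, `metric_name`, `mean`, `sem`, and `trial_index`. As a sketch of how such rows might be post-processed, here is a plain-Python helper (hypothetical, not an Ax API; with the real DataFrame you would use pandas operations instead) that picks the best feasible arm under the `l2norm <= 1.25` constraint:

```python
# Rows copied from the eval_trial output above:
# (arm_name, metric_name, mean, sem, trial_index)
rows = [
    ("1_0", "hartmann6", -0.048067, 0.0, 1),
    ("1_0", "l2norm", 1.438130, 0.0, 1),
]

def best_arm(rows, objective="hartmann6", constraint="l2norm", bound=1.25):
    # Group metric means by arm.
    means = {}
    for arm, metric, mean, _sem, _trial in rows:
        means.setdefault(arm, {})[metric] = mean
    # Keep only arms whose constraint metric is within the bound.
    feasible = [a for a, m in means.items() if m.get(constraint, 0.0) <= bound]
    if not feasible:
        return None
    # We are minimizing, so pick the lowest objective mean.
    return min(feasible, key=lambda a: means[a][objective])
```

Note that for the sample rows above the helper returns `None`: arm `1_0` has `l2norm` ≈ 1.438, which violates the 1.25 bound.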