{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Developer API Example on Hartmann6\n", "\n", "The Developer API is suitable when the user wants maximal customization of the optimization loop. This tutorial demonstrates optimization of a Hartmann6 function using the `SimpleExperiment` construct.\n", "\n", "For more details on the different Ax constructs, see the \"Building Blocks of Ax\" tutorial." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "[INFO 04-24 19:17:50] ipy_plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.\n" ] } ], "source": [ "import numpy as np\n", "from ax import (\n", " ComparisonOp,\n", " ParameterType, \n", " RangeParameter,\n", " SearchSpace, \n", " SimpleExperiment, \n", " OutcomeConstraint, \n", ")\n", "from ax.metrics.l2norm import L2NormMetric\n", "from ax.modelbridge.factory import Models\n", "from ax.plot.contour import plot_contour\n", "from ax.plot.trace import optimization_trace_single_method\n", "from ax.utils.notebook.plotting import render, init_notebook_plotting\n", "\n", "init_notebook_plotting()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Define evaluation function\n", "\n", "First, we define an evaluation function that is able to compute all the metrics needed for this experiment. This function needs to accept a set of parameter values and a weight. It should produce a dictionary of metric names to tuples of mean and standard error for those metrics." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def hartmann6(x: np.ndarray) -> float:\n", " alpha = np.array([1.0, 1.2, 3.0, 3.2])\n", " A = np.array(\n", " [\n", " [10, 3, 17, 3.5, 1.7, 8],\n", " [0.05, 10, 17, 0.1, 8, 14],\n", " [3, 3.5, 1.7, 10, 17, 8],\n", " [17, 8, 0.05, 10, 0.1, 14],\n", " ]\n", " )\n", " P = 10 ** (-4) * np.array(\n", " [\n", " [1312, 1696, 5569, 124, 8283, 5886],\n", " [2329, 4135, 8307, 3736, 1004, 9991],\n", " [2348, 1451, 3522, 2883, 3047, 6650],\n", " [4047, 8828, 8732, 5743, 1091, 381],\n", " ]\n", " )\n", " y = 0.0\n", " for j, alpha_j in enumerate(alpha):\n", " t = 0\n", " for k in range(6):\n", " t += A[j, k] * ((x[k] - P[j, k]) ** 2)\n", " y -= alpha_j * np.exp(-t)\n", " return y" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def hartmann_evaluation_function(\n", " parameterization, # dict of parameter names to values of those parameters\n", " weight=None, # evaluation function signature requires a weight argument\n", "):\n", " x = np.array([parameterization.get(f\"x{i}\") for i in range(6)])\n", " # In our case, standard error is 0, since we are computing a synthetic function.\n", " return {\"hartmann6\": (hartmann6(x), 0.0), \"l2norm\": (np.sqrt((x ** 2).sum()), 0.0)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Create Search Space\n", "\n", "Second, we define a search space, which defines the type and allowed range for the parameters." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "hartmann_search_space = SearchSpace(\n", " parameters=[\n", " RangeParameter(\n", " name=f\"x{i}\", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0\n", " )\n", " for i in range(6)\n", " ]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Create Experiment\n", "\n", "Third, we make a `SimpleExperiment`. In addition to the search space and evaluation function, here we define the `objective_name` and `outcome_constraints`.\n", "\n", "When doing the optimization, we will find points that minimize the objective while obeying the constraints (which in this case means `l2norm < 1.25`)." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "exp = SimpleExperiment(\n", " name=\"test_branin\",\n", " search_space=hartmann_search_space,\n", " evaluation_function=hartmann_evaluation_function,\n", " objective_name=\"hartmann6\",\n", " minimize=True,\n", " outcome_constraints=[\n", " OutcomeConstraint(\n", " metric=L2NormMetric(\n", " name=\"l2norm\", param_names=[f\"x{i}\" for i in range(6)], noise_sd=0.2\n", " ),\n", " op=ComparisonOp.LEQ,\n", " bound=1.25,\n", " relative=False,\n", " )\n", " ],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Perform Optimization\n", "\n", "Run the optimization using the settings defined on the experiment. We will create 5 random sobol points for exploration followed by 15 points generated using the GPEI optimizer." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running Sobol initialization trials...\n", "Running GP+EI optimization trial 1/15...\n", "Running GP+EI optimization trial 2/15...\n", "Running GP+EI optimization trial 3/15...\n", "Running GP+EI optimization trial 4/15...\n", "Running GP+EI optimization trial 5/15...\n", "Running GP+EI optimization trial 6/15...\n", "Running GP+EI optimization trial 7/15...\n", "Running GP+EI optimization trial 8/15...\n", "Running GP+EI optimization trial 9/15...\n", "Running GP+EI optimization trial 10/15...\n", "Running GP+EI optimization trial 11/15...\n", "Running GP+EI optimization trial 12/15...\n", "Running GP+EI optimization trial 13/15...\n", "Running GP+EI optimization trial 14/15...\n", "Running GP+EI optimization trial 15/15...\n", "Done!\n" ] } ], "source": [ "print(f\"Running Sobol initialization trials...\")\n", "sobol = Models.SOBOL(exp.search_space)\n", "for i in range(5):\n", " exp.new_trial(generator_run=sobol.gen(1))\n", " \n", "for i in range(15):\n", " print(f\"Running GP+EI optimization trial {i+1}/15...\")\n", " # Reinitialize GP+EI model at each step with updated data.\n", " gpei = Models.GPEI(experiment=exp, data=exp.eval())\n", " batch = exp.new_trial(generator_run=gpei.gen(1))\n", " \n", "print(\"Done!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. 
Inspect trials' data\n", "\n", "Now we can inspect the `SimpleExperiment`'s data by calling `eval()`, which retrieves evaluation data for all batches of the experiment.\n", "\n", "We can also use the `eval_trial` function to get evaluation data for a specific trial in the experiment, like so:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "application/vnd.dataresource+json": { "data": [ { "arm_name": "1_0", "index": 0, "mean": -0.4089124876, "metric_name": "hartmann6", "sem": 0, "trial_index": 1 }, { "arm_name": "1_0", "index": 1, "mean": 1.5707256297, "metric_name": "l2norm", "sem": 0, "trial_index": 1 } ], "schema": { "fields": [ { "name": "index", "type": "integer" }, { "name": "arm_name", "type": "string" }, { "name": "mean", "type": "number" }, { "name": "metric_name", "type": "string" }, { "name": "sem", "type": "number" }, { "name": "trial_index", "type": "integer" } ], "pandas_version": "0.20.0", "primaryKey": [ "index" ] } }, "text/html": [ "
\n", " | arm_name | \n", "mean | \n", "metric_name | \n", "sem | \n", "trial_index | \n", "
---|---|---|---|---|---|
0 | \n", "1_0 | \n", "-0.408912 | \n", "hartmann6 | \n", "0.0 | \n", "1 | \n", "
1 | \n", "1_0 | \n", "1.570726 | \n", "l2norm | \n", "0.0 | \n", "1 | \n", "
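To make the data-inspection step concrete, the following is a minimal sketch (assuming the `exp` object from the optimization loop above) of pulling the evaluation data into pandas and picking out the best feasible arm by hand; both `eval()` and `eval_trial()` return an Ax `Data` object whose `df` attribute is a DataFrame with the columns shown in the table.

```python
# Evaluation data for a single trial, analogous to the table above.
trial_df = exp.eval_trial(exp.trials[1]).df
print(trial_df)

# All evaluation data for the experiment.
df = exp.eval().df

# Best arm by hand: lowest observed hartmann6 mean among arms whose observed
# l2norm satisfies the constraint (<= 1.25).
objective = df[df["metric_name"] == "hartmann6"]
feasible_arms = df[(df["metric_name"] == "l2norm") & (df["mean"] <= 1.25)]["arm_name"]
best = objective[objective["arm_name"].isin(feasible_arms)].nsmallest(1, "mean")
print(best[["arm_name", "mean"]])
```

In practice one would typically rely on the fitted model's predictions rather than raw, possibly noisy observations; the raw DataFrame view is mainly convenient for debugging.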