{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loop API Example on Hartmann6\n", "\n", "The loop API is the most lightweight way to do optimization in Ax. The user makes one call to `optimize`, which performs all of the optimization under the hood and returns the optimized parameters.\n", "\n", "For more customizability of the optimization procedure, consider the Service or Developer API." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "[INFO 05-07 12:56:33] ipy_plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.\n" ] } ], "source": [ "import numpy as np\n", "\n", "from ax.plot.contour import plot_contour\n", "from ax.plot.trace import optimization_trace_single_method\n", "from ax.service.managed_loop import optimize\n", "from ax.metrics.branin import branin\n", "from ax.utils.measurement.synthetic_functions import hartmann6\n", "from ax.utils.notebook.plotting import render, init_notebook_plotting\n", "\n", "init_notebook_plotting()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Define evaluation function\n", "\n", "First, we define an evaluation function that is able to compute all the metrics needed for this experiment. This function needs to accept a set of parameter values and can also accept a weight. It should produce a dictionary of metric names to tuples of mean and standard error for those metrics." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def hartmann_evaluation_function(parameterization):\n", " x = np.array([parameterization.get(f\"x{i+1}\") for i in range(6)])\n", " # In our case, standard error is 0, since we are computing a synthetic function.\n", " return {\"hartmann6\": (hartmann6(x), 0.0), \"l2norm\": (np.sqrt((x ** 2).sum()), 0.0)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If there is only one metric in the experiment – the objective – then evaluation function can return a single tuple of mean and SEM, in which case Ax will assume that evaluation corresponds to the objective. It can also return only the mean as a float, in which case Ax will assume that SEM is 0.0. For more details on evaluation function, refer to the \"Trial Evaluation\" section in the docs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Run optimization\n", "The setup for the loop is fully compatible with JSON. The optimization algorithm is selected based on the properties of the problem search space." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "[INFO 05-07 12:56:33] ax.service.utils.dispatch: Using Bayesian Optimization generation strategy. 
Iterations after 6 will take longer to generate due to model-fitting.\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Started full optimization with 30 steps.\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 1...\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 2...\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 3...\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 4...\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 5...\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 6...\n", "[INFO 05-07 12:56:33] ax.service.managed_loop: Running optimization trial 7...\n", "[INFO 05-07 12:56:43] ax.service.managed_loop: Running optimization trial 8...\n", "[INFO 05-07 12:56:54] ax.service.managed_loop: Running optimization trial 9...\n", "[INFO 05-07 12:57:06] ax.service.managed_loop: Running optimization trial 10...\n", "[INFO 05-07 12:57:19] ax.service.managed_loop: Running optimization trial 11...\n", "[INFO 05-07 12:57:30] ax.service.managed_loop: Running optimization trial 12...\n", "[INFO 05-07 12:57:45] ax.service.managed_loop: Running optimization trial 13...\n", "[INFO 05-07 12:57:59] ax.service.managed_loop: Running optimization trial 14...\n", "[INFO 05-07 12:58:12] ax.service.managed_loop: Running optimization trial 15...\n", "[INFO 05-07 12:58:24] ax.service.managed_loop: Running optimization trial 16...\n", "[INFO 05-07 12:58:38] ax.service.managed_loop: Running optimization trial 17...\n", "[INFO 05-07 12:58:54] ax.service.managed_loop: Running optimization trial 18...\n", "[INFO 05-07 12:59:14] ax.service.managed_loop: Running optimization trial 19...\n", "[INFO 05-07 12:59:34] ax.service.managed_loop: Running optimization trial 20...\n", "[INFO 05-07 12:59:53] ax.service.managed_loop: Running optimization trial 21...\n", "[INFO 05-07 13:00:05] ax.service.managed_loop: Running 
optimization trial 22...\n", "[INFO 05-07 13:00:20] ax.service.managed_loop: Running optimization trial 23...\n", "[INFO 05-07 13:00:38] ax.service.managed_loop: Running optimization trial 24...\n", "[INFO 05-07 13:00:51] ax.service.managed_loop: Running optimization trial 25...\n", "[INFO 05-07 13:01:07] ax.service.managed_loop: Running optimization trial 26...\n", "[INFO 05-07 13:01:23] ax.service.managed_loop: Running optimization trial 27...\n", "[INFO 05-07 13:01:47] ax.service.managed_loop: Running optimization trial 28...\n", "[INFO 05-07 13:02:05] ax.service.managed_loop: Running optimization trial 29...\n", "[INFO 05-07 13:02:13] ax.service.managed_loop: Running optimization trial 30...\n" ] } ], "source": [ "best_parameters, values, experiment, model = optimize(\n", "    parameters=[\n", "        {\n", "            \"name\": \"x1\",\n", "            \"type\": \"range\",\n", "            \"bounds\": [0.0, 1.0],\n", "            \"value_type\": \"float\",  # Optional, defaults to inference from type of \"bounds\".\n", "            \"log_scale\": False,  # Optional, defaults to False.\n", "        },\n", "        {\n", "            \"name\": \"x2\",\n", "            \"type\": \"range\",\n", "            \"bounds\": [0.0, 1.0],\n", "        },\n", "        {\n", "            \"name\": \"x3\",\n", "            \"type\": \"range\",\n", "            \"bounds\": [0.0, 1.0],\n", "        },\n", "        {\n", "            \"name\": \"x4\",\n", "            \"type\": \"range\",\n", "            \"bounds\": [0.0, 1.0],\n", "        },\n", "        {\n", "            \"name\": \"x5\",\n", "            \"type\": \"range\",\n", "            \"bounds\": [0.0, 1.0],\n", "        },\n", "        {\n", "            \"name\": \"x6\",\n", "            \"type\": \"range\",\n", "            \"bounds\": [0.0, 1.0],\n", "        },\n", "    ],\n", "    experiment_name=\"test\",\n", "    objective_name=\"hartmann6\",\n", "    evaluation_function=hartmann_evaluation_function,\n", "    minimize=True,  # Optional, defaults to False.\n", "    parameter_constraints=[\"x1 + x2 <= 20\"],  # Optional.\n", "    outcome_constraints=[\"l2norm <= 1.25\"],  # Optional.\n", "    total_trials=30,  # Optional.\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now introspect the optimization results:" ] 
}, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'x1': 0.4216336684550585,\n", " 'x2': 0.9077372149314975,\n", " 'x3': 0.3153028268916916,\n", " 'x4': 0.5733001784328788,\n", " 'x5': 0.2680636783388968,\n", " 'x6': 0.06285915210168797}" ] }, "execution_count": 4, "metadata": { "bento_obj_id": "139701375840904" }, "output_type": "execute_result" } ], "source": [ "best_parameters" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'l2norm': 1.2270530489376026, 'hartmann6': -3.0942722656221813}" ] }, "execution_count": 5, "metadata": { "bento_obj_id": "139701168862336" }, "output_type": "execute_result" } ], "source": [ "means, covariances = values\n", "means" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For comparison, minimum of Hartmann6 is:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-3.32237" ] }, "execution_count": 6, "metadata": { "bento_obj_id": "139701419551552" }, "output_type": "execute_result" } ], "source": [ "hartmann6.fmin" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Plot results\n", "Here we arbitrarily select \"x1\" and \"x2\" as the two parameters to plot for both metrics, \"hartmann6\" and \"l2norm\"." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "render(plot_contour(model=model, param_x='x1', param_y='x2', metric_name='hartmann6'))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "render(plot_contour(model=model, param_x='x1', param_y='x2', metric_name='l2norm'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also plot optimization trace, which shows best hartmann6 objective value seen by each iteration of the optimization:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# `plot_single_method` expects a 2-d array of means, because it expects to average means from multiple \n", "# optimization runs, so we wrap out best objectives array in another array.\n", "best_objectives = np.array([[trial.objective_mean for trial in experiment.trials.values()]])\n", "best_objective_plot = optimization_trace_single_method(\n", " y=np.minimum.accumulate(best_objectives, axis=1),\n", " optimum=hartmann6.fmin,\n", " title=\"Model performance vs. # of iterations\",\n", " ylabel=\"Hartmann6\",\n", ")\n", "render(best_objective_plot)" ] } ], "metadata": { "kernelspec": { "display_name": "python3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }