Ax integrates easily with different scheduling and distributed training frameworks. In this example, Ax-driven optimization is executed in a distributed fashion using RayTune.
RayTune is a scalable framework for hyperparameter tuning that provides many state-of-the-art tuning algorithms and scales seamlessly from a laptop to a distributed cluster, with fault tolerance. RayTune leverages Ray's Actor API to provide asynchronous parallel and distributed execution.
Ray 'Actors' are a simple and clean abstraction for replicating your Python classes across multiple workers and nodes. Each hyperparameter evaluation runs asynchronously on a separate Ray actor and reports intermediate training progress back to RayTune. RayTune then uses this information to perform actions such as early termination, re-prioritization, or checkpointing.
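To make the Actor abstraction concrete, here is a minimal, self-contained sketch (an illustrative aside, not part of this tutorial's optimization loop; the `Evaluator` class and its method are hypothetical):
import ray

ray.init(ignore_reinit_error=True)

# Each remote instance of this class lives in its own worker process,
# so calls on different actors run in parallel and return futures.
@ray.remote
class Evaluator:
    def __init__(self):
        self.num_calls = 0

    def evaluate(self, x):
        self.num_calls += 1
        return x ** 2

actors = [Evaluator.remote() for _ in range(2)]
futures = [actor.evaluate.remote(i) for i, actor in enumerate(actors)]
print(ray.get(futures))  # ray.get blocks until the asynchronous results are ready -> [0, 1]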
import logging
from ray import tune
from ray.tune import track
from ray.tune.suggest.ax import AxSearch
logger = logging.getLogger(tune.__name__)
logger.setLevel(level=logging.CRITICAL) # Reduce the number of Ray warnings that are not relevant here.
import torch
import numpy as np
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.ax_client import AxClient
from ax.utils.notebook.plotting import render, init_notebook_plotting
from ax.utils.tutorials.cnn_utils import CNN, load_mnist, train, evaluate
init_notebook_plotting()
[INFO 07-17 16:25:22] ipy_plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
We specify `enforce_sequential_optimization` as `False`, because Ray runs many trials in parallel. With sequential optimization enforced, `AxClient` would expect the first few trials to complete with data before generating more trials.
When high parallelism is not required, it is best to enforce sequential optimization, as it achieves optimal results in fewer (but sequential) trials. In cases where parallelism is important, such as with distributed training using Ray, we forego minimizing resource utilization and run more trials in parallel.
ax = AxClient(enforce_sequential_optimization=False)
[INFO 07-17 16:25:22] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 2 decimal points.
Here we set up the search space and specify the objective; refer to the Ax API tutorials for more detail.
ax.create_experiment(
    name="mnist_experiment",
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
        {"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objective_name="mean_accuracy",
)
[INFO 07-17 16:25:22] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 5 trials, GPEI for subsequent trials]). Iterations after 5 will take longer to generate due to model-fitting.
Since we use the Ax Service API here, we evaluate the parameterizations that Ax suggests, using RayTune. The evaluation function follows its usual pattern, taking in a parameterization and outputting an objective value. For detail on evaluation functions, see Trial Evaluation.
def train_evaluate(parameterization):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    train_loader, valid_loader, test_loader = load_mnist(data_path='~/.data')
    net = train(net=CNN(), train_loader=train_loader, parameters=parameterization, dtype=torch.float, device=device)
    track.log(
        mean_accuracy=evaluate(
            net=net,
            data_loader=valid_loader,
            dtype=torch.float,
            device=device,
        )
    )
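The Ray logs further below warn that `tune.track.log` is deprecated in favor of `tune.report`. On newer Ray versions, the same evaluation function can be written as follows (a sketch under that assumption; the reported metric and logic are otherwise identical):
def train_evaluate_report(parameterization):
    # Identical to train_evaluate above, except that the objective is reported via tune.report.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    train_loader, valid_loader, test_loader = load_mnist(data_path='~/.data')
    net = train(net=CNN(), train_loader=train_loader, parameters=parameterization, dtype=torch.float, device=device)
    tune.report(
        mean_accuracy=evaluate(net=net, data_loader=valid_loader, dtype=torch.float, device=device)
    )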
Execute the Ax optimization and trial evaluation in RayTune using the AxSearch algorithm:
tune.run(
    train_evaluate,
    num_samples=30,
    search_alg=AxSearch(ax),  # Note that the argument here is the `AxClient`.
    verbose=0,  # Set this level to 1 to see status updates and to 2 to also see trial results.
    # To use GPU, specify: resources_per_trial={"gpu": 1}.
)
2020-07-17 16:25:22,531 INFO resource_spec.py:212 -- Starting Ray with 4.35 GiB memory available for workers and up to 2.18 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-07-17 16:25:22,755 WARNING services.py:923 -- Redis failed to start, retrying now.
2020-07-17 16:25:23,011 INFO services.py:1165 -- View the Ray dashboard at localhost:8265
[INFO 07-17 16:25:23] ax.service.ax_client: Generated new trial 0 with parameters {'lr': 0.14, 'momentum': 0.98}.
[INFO 07-17 16:25:23] ax.service.ax_client: Generated new trial 1 with parameters {'lr': 0.04, 'momentum': 0.11}.
[INFO 07-17 16:25:23] ax.service.ax_client: Generated new trial 2 with parameters {'lr': 0.3, 'momentum': 0.58}.
[INFO 07-17 16:25:23] ax.service.ax_client: Generated new trial 3 with parameters {'lr': 0.0, 'momentum': 0.84}.
[INFO 07-17 16:25:23] ax.service.ax_client: Generated new trial 4 with parameters {'lr': 0.0, 'momentum': 0.77}.
[INFO 07-17 16:25:24] ax.service.ax_client: Generated new trial 5 with parameters {'lr': 0.0, 'momentum': 0.56}.
[INFO 07-17 16:25:24] ax.service.ax_client: Generated new trial 6 with parameters {'lr': 0.0, 'momentum': 0.47}.
[INFO 07-17 16:25:24] ax.service.ax_client: Generated new trial 7 with parameters {'lr': 0.03, 'momentum': 0.69}.
[INFO 07-17 16:25:24] ax.service.ax_client: Generated new trial 8 with parameters {'lr': 0.04, 'momentum': 0.95}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 9 with parameters {'lr': 0.0, 'momentum': 0.86}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 10 with parameters {'lr': 0.01, 'momentum': 0.4}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 11 with parameters {'lr': 0.0, 'momentum': 0.48}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 12 with parameters {'lr': 0.0, 'momentum': 0.69}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 13 with parameters {'lr': 0.0, 'momentum': 0.62}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 14 with parameters {'lr': 0.0, 'momentum': 0.96}.
[INFO 07-17 16:25:25] ax.service.ax_client: Generated new trial 15 with parameters {'lr': 0.0, 'momentum': 0.17}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 16 with parameters {'lr': 0.02, 'momentum': 0.44}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 17 with parameters {'lr': 0.01, 'momentum': 0.35}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 18 with parameters {'lr': 0.0, 'momentum': 0.05}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 19 with parameters {'lr': 0.0, 'momentum': 0.57}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 20 with parameters {'lr': 0.0, 'momentum': 0.46}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 21 with parameters {'lr': 0.0, 'momentum': 0.73}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 22 with parameters {'lr': 0.0, 'momentum': 0.17}.
[INFO 07-17 16:25:26] ax.service.ax_client: Generated new trial 23 with parameters {'lr': 0.0, 'momentum': 0.96}.
[INFO 07-17 16:25:27] ax.service.ax_client: Generated new trial 24 with parameters {'lr': 0.02, 'momentum': 0.7}.
[INFO 07-17 16:25:27] ax.service.ax_client: Generated new trial 25 with parameters {'lr': 0.03, 'momentum': 0.4}.
[INFO 07-17 16:25:27] ax.service.ax_client: Generated new trial 26 with parameters {'lr': 0.0, 'momentum': 0.13}.
[INFO 07-17 16:25:27] ax.service.ax_client: Generated new trial 27 with parameters {'lr': 0.0, 'momentum': 0.81}.
[INFO 07-17 16:25:27] ax.service.ax_client: Generated new trial 28 with parameters {'lr': 0.16, 'momentum': 0.87}.
[INFO 07-17 16:25:27] ax.service.ax_client: Generated new trial 29 with parameters {'lr': 0.08, 'momentum': 0.79}.
(pid=4092, pid=4093) Downloading the MNIST train/test images and labels from http://yann.lecun.com/exdb/mnist/ to /home/travis/.data/MNIST/raw and extracting them (download progress output truncated).
(pid=4092, pid=4093) /pytorch/torch/csrc/utils/tensor_numpy.cpp:141: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program.
(pid=4092, pid=4093) Done!
[INFO 07-17 16:25:49] ax.service.ax_client: Completed trial 1 with data: {'mean_accuracy': (0.1, 0.0)}.
(pid=4093) 2020-07-17 16:25:49,299 WARNING __init__.py:20 -- tune.track.log is now deprecated. Use `tune.report` instead. This warning will throw an error in a future version of Ray.
[INFO 07-17 16:25:50] ax.service.ax_client: Completed trial 0 with data: {'mean_accuracy': (0.11, 0.0)}.
[INFO 07-17 16:26:10] ax.service.ax_client: Completed trial 2 with data: {'mean_accuracy': (0.1, 0.0)}.
[INFO 07-17 16:26:12] ax.service.ax_client: Completed trial 3 with data: {'mean_accuracy': (0.7, 0.0)}.
[INFO 07-17 16:26:33] ax.service.ax_client: Completed trial 4 with data: {'mean_accuracy': (0.93, 0.0)}.
2020-07-17 16:26:33,331 WARNING worker.py:1047 -- The actor or task with ID fffffffffffffffffd0a71290100 is pending and cannot currently be scheduled. It requires {CPU: 1.000000} for execution and {CPU: 1.000000} for placement, but this node only has remaining {CPU: 1.000000}, {node:10.30.0.51: 1.000000}, {memory: 4.345703 GiB}, {object_store_memory: 1.464844 GiB}. In total there are 0 pending tasks and 1 pending actors on this node. This is likely due to all cluster resources being claimed by actors. To resolve the issue, consider creating fewer actors or increase the resources available to this Ray cluster. You can ignore this message if this Ray cluster is expected to auto-scale.
[INFO 07-17 16:26:35] ax.service.ax_client: Completed trial 5 with data: {'mean_accuracy': (0.78, 0.0)}.
[INFO 07-17 16:26:55] ax.service.ax_client: Completed trial 6 with data: {'mean_accuracy': (0.95, 0.0)}.
[INFO 07-17 16:26:57] ax.service.ax_client: Completed trial 7 with data: {'mean_accuracy': (0.08, 0.0)}.
[INFO 07-17 16:27:17] ax.service.ax_client: Completed trial 8 with data: {'mean_accuracy': (0.1, 0.0)}.
[INFO 07-17 16:27:19] ax.service.ax_client: Completed trial 9 with data: {'mean_accuracy': (0.63, 0.0)}.
[INFO 07-17 16:27:39] ax.service.ax_client: Completed trial 10 with data: {'mean_accuracy': (0.11, 0.0)}.
[INFO 07-17 16:27:41] ax.service.ax_client: Completed trial 11 with data: {'mean_accuracy': (0.77, 0.0)}.
[INFO 07-17 16:28:01] ax.service.ax_client: Completed trial 12 with data: {'mean_accuracy': (0.89, 0.0)}.
[INFO 07-17 16:28:04] ax.service.ax_client: Completed trial 13 with data: {'mean_accuracy': (0.57, 0.0)}.
[INFO 07-17 16:28:24] ax.service.ax_client: Completed trial 14 with data: {'mean_accuracy': (0.85, 0.0)}.
[INFO 07-17 16:28:26] ax.service.ax_client: Completed trial 15 with data: {'mean_accuracy': (0.9, 0.0)}.
[INFO 07-17 16:28:46] ax.service.ax_client: Completed trial 16 with data: {'mean_accuracy': (0.11, 0.0)}.
[INFO 07-17 16:28:48] ax.service.ax_client: Completed trial 17 with data: {'mean_accuracy': (0.11, 0.0)}.
[INFO 07-17 16:29:08] ax.service.ax_client: Completed trial 18 with data: {'mean_accuracy': (0.11, 0.0)}.
[INFO 07-17 16:29:11] ax.service.ax_client: Completed trial 19 with data: {'mean_accuracy': (0.93, 0.0)}.
[INFO 07-17 16:29:31] ax.service.ax_client: Completed trial 20 with data: {'mean_accuracy': (0.96, 0.0)}.
[INFO 07-17 16:29:33] ax.service.ax_client: Completed trial 21 with data: {'mean_accuracy': (0.09, 0.0)}.
[INFO 07-17 16:29:54] ax.service.ax_client: Completed trial 22 with data: {'mean_accuracy': (0.9, 0.0)}.
[INFO 07-17 16:29:56] ax.service.ax_client: Completed trial 23 with data: {'mean_accuracy': (0.86, 0.0)}.
[INFO 07-17 16:30:16] ax.service.ax_client: Completed trial 24 with data: {'mean_accuracy': (0.12, 0.0)}.
[INFO 07-17 16:30:19] ax.service.ax_client: Completed trial 25 with data: {'mean_accuracy': (0.1, 0.0)}.
[INFO 07-17 16:30:39] ax.service.ax_client: Completed trial 26 with data: {'mean_accuracy': (0.94, 0.0)}.
[INFO 07-17 16:30:41] ax.service.ax_client: Completed trial 27 with data: {'mean_accuracy': (0.97, 0.0)}.
[INFO 07-17 16:31:01] ax.service.ax_client: Completed trial 28 with data: {'mean_accuracy': (0.1, 0.0)}.
[INFO 07-17 16:31:03] ax.service.ax_client: Completed trial 29 with data: {'mean_accuracy': (0.1, 0.0)}.
<ray.tune.analysis.experiment_analysis.ExperimentAnalysis at 0x7f7d881f8128>
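`tune.run` returns the `ExperimentAnalysis` object shown above. Conceptually, `AxSearch` drives the standard `AxClient` ask/tell loop for each Ray trial; a sequential, Ray-free sketch of that loop is shown below (here `evaluate_once` is a hypothetical variant of `train_evaluate` that returns the accuracy instead of logging it to RayTune):
for _ in range(30):
    # Ask Ax for the next candidate parameterization, evaluate it, and report the result back.
    parameters, trial_index = ax.get_next_trial()
    ax.complete_trial(trial_index=trial_index, raw_data=evaluate_once(parameters))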
best_parameters, values = ax.get_best_parameters()
best_parameters
{'lr': 0.00027254970690960385, 'momentum': 0.8102878797799349}
means, covariances = values
means
{'mean_accuracy': 0.9661666666666666}
render(
    plot_contour(
        model=ax.generation_strategy.model, param_x='lr', param_y='momentum', metric_name='mean_accuracy'
    )
)
/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.
/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-9-1ad1935854d6> in <module>
      1 render(
      2     plot_contour(
----> 3         model=ax.generation_strategy.model, param_x='lr', param_y='momentum', metric_name='mean_accuracy'
      4     )
      5 )

~/build/facebook/Ax/ax/plot/contour.py in plot_contour(model, param_x, param_y, metric_name, generator_runs_dict, relative, density, slice_values, lower_is_better, fixed_features, trial_index)
    153         generator_runs_dict=generator_runs_dict,
    154         density=density,
--> 155         slice_values=slice_values,
    156     )
    157     config = {

~/build/facebook/Ax/ax/plot/contour.py in _get_contour_predictions(model, x_param_name, y_param_name, metric, generator_runs_dict, density, slice_values, fixed_features)
     95         param_grid_obsf.append(predf)
     96
---> 97     mu, cov = model.predict(param_grid_obsf)
     98
     99     f_plt = mu[metric]

~/build/facebook/Ax/ax/modelbridge/base.py in predict(self, observation_features)
    493         # Predict in single batch.
    494         try:
--> 495             observation_data = self._batch_predict(observation_features)
    496         # Predict one by one.
    497         except (TypeError, ValueError):

~/build/facebook/Ax/ax/modelbridge/base.py in _batch_predict(self, observation_features)
    423         )
    424         # Apply terminal transform and predict
--> 425         observation_data = self._predict(observation_features)
    426
    427         # Apply reverse transforms, in reverse order

~/build/facebook/Ax/ax/modelbridge/random.py in _predict(self, observation_features)
    102         output.
    103         """
--> 104         raise NotImplementedError("RandomModelBridge does not support prediction.")
    105
    106     def _cross_validate(

NotImplementedError: RandomModelBridge does not support prediction.
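The contour plot fails here most likely because all 30 trials were generated up front, before any evaluation data came back, so the generation strategy never advanced past its initial Sobol step, and the resulting `RandomModelBridge` cannot make predictions. One possible workaround (a sketch, not part of the original tutorial) is to fit a GP model to the collected data explicitly and plot contours from that model:
from ax.modelbridge.registry import Models

# Fit a GP + EI model to the data gathered by the completed trials, then plot
# contours from that model instead of the generation strategy's current model.
gpei = Models.GPEI(experiment=ax.experiment, data=ax.experiment.fetch_data())
render(
    plot_contour(
        model=gpei, param_x='lr', param_y='momentum', metric_name='mean_accuracy'
    )
)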
# `optimization_trace_single_method` expects a 2-d array of means, because it averages means from
# multiple optimization runs, so we wrap our best-objectives array in another array.
best_objectives = np.array([[trial.objective_mean * 100 for trial in ax.experiment.trials.values()]])
best_objective_plot = optimization_trace_single_method(
    y=np.maximum.accumulate(best_objectives, axis=1),
    title="Model performance vs. # of iterations",
    ylabel="Accuracy",
)
render(best_objective_plot)