This tutorial walks through using Ax to tune two hyperparameters (learning rate and momentum) of a PyTorch CNN trained on the MNIST dataset with SGD with momentum.
import torch
import numpy as np
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
from ax.utils.notebook.plotting import render, init_notebook_plotting
from ax.utils.tutorials.cnn_utils import load_mnist, train, evaluate, CNN
init_notebook_plotting()
torch.manual_seed(12345)
dtype = torch.float
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
First, we need to load the MNIST data and partition it into training, validation, and test sets.
Note: this will download the dataset if necessary.
BATCH_SIZE = 512
train_loader, valid_loader, test_loader = load_mnist(batch_size=BATCH_SIZE)
In this tutorial, we want to optimize classification accuracy on the validation set as a function of the learning rate and momentum. The function takes in a parameterization (set of parameter values), computes the classification accuracy, and returns a dictionary of metric name ('accuracy') to a tuple with the mean and standard error.
def train_evaluate(parameterization):
    net = CNN()
    net = train(net=net, train_loader=train_loader, parameters=parameterization, dtype=dtype, device=device)
    return evaluate(
        net=net,
        data_loader=valid_loader,
        dtype=dtype,
        device=device,
    )
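To make the contract concrete: Ax expects the evaluation function to return a mapping from metric name to a `(mean, standard error)` tuple. Below is a minimal mock of that contract; `mock_train_evaluate` and its synthetic accuracy formula are illustrative inventions, not part of the tutorial's real training code.

```python
def mock_train_evaluate(parameterization):
    # Pretend larger learning rates do slightly worse -- purely illustrative,
    # standing in for a real train + validation-set evaluation.
    lr = parameterization["lr"]
    accuracy = 0.95 - 0.1 * lr
    sem = 0.001  # standard error of the mean, e.g. across validation batches
    # Ax's expected return shape: {metric_name: (mean, SEM)}
    return {"accuracy": (accuracy, sem)}

result = mock_train_evaluate({"lr": 0.001, "momentum": 0.9})
```

Returning a standard error lets Ax model observation noise explicitly; a SEM of 0.0 would tell it the measurement is noiseless.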
Here, we set the bounds on the learning rate and momentum and set the parameter space for the learning rate to be on a log scale.
best_parameters, values, experiment, model = optimize(
parameters=[
{"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
{"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
],
evaluation_function=train_evaluate,
objective_name='accuracy',
)
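Why `log_scale: True` matters: the learning-rate bounds span almost six orders of magnitude, and uniform sampling on the raw scale would almost never propose small values. A quick sketch of the difference (log-uniform sampling here is a simplified stand-in for how Ax treats a log-scaled range parameter):

```python
import numpy as np

rng = np.random.default_rng(0)
low, high = 1e-6, 0.4

# Uniform on the raw scale: mass concentrates near the upper end.
uniform = rng.uniform(low, high, size=10_000)

# Log-uniform: candidates spread evenly across orders of magnitude.
log_uniform = np.exp(rng.uniform(np.log(low), np.log(high), size=10_000))

frac_small_uniform = np.mean(uniform < 1e-3)   # tiny fraction
frac_small_log = np.mean(log_uniform < 1e-3)   # roughly half the samples
```

With raw-scale sampling, learning rates below 1e-3 (often the interesting regime) would be explored almost never.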
The log output (abridged) shows Ax inferring a float value type for both parameters, creating the search space, and choosing a Sobol+GPEI generation strategy: 5 quasi-random Sobol trials for initialization, followed by Bayesian optimization (GPEI) for the remaining trials, 20 trials in total.
We can introspect the optimal parameters and their outcomes:
best_parameters
{'lr': 0.00035571923793255284, 'momentum': 0.31631823679222393}
means, covariances = values
means, covariances
({'accuracy': 0.9255559357546199}, {'accuracy': {'accuracy': 0.0003665519994132127}})
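The second element pairs each metric with its covariance entries; for a single metric, the diagonal entry can be read as the variance of the mean estimate, so its square root gives a standard error. A small sketch using the values printed above (the interpretation of the diagonal as the variance of the mean is an assumption stated here, not something the output itself spells out):

```python
import math

# Values copied from the optimize() output above.
means = {"accuracy": 0.9255559357546199}
covariances = {"accuracy": {"accuracy": 0.0003665519994132127}}

# Diagonal covariance entry -> variance of the mean; sqrt -> standard error.
sem = math.sqrt(covariances["accuracy"]["accuracy"])
```

So the model's estimate is roughly 92.6% accuracy with a standard error near two percentage points.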
The contour plot below shows classification accuracy as a function of the two hyperparameters.
The black squares mark points that were actually evaluated; notice how they cluster in the optimal region.
render(plot_contour(model=model, param_x='lr', param_y='momentum', metric_name='accuracy'))
The trace below shows the best accuracy found so far improving as better hyperparameters are identified.
# `optimization_trace_single_method` expects a 2-d array of means, because it averages
# means from multiple optimization runs, so we wrap our best-objectives array in another array.
best_objectives = np.array([[trial.objective_mean*100 for trial in experiment.trials.values()]])
best_objective_plot = optimization_trace_single_method(
y=np.maximum.accumulate(best_objectives, axis=1),
title="Model performance vs. # of iterations",
ylabel="Classification Accuracy, %",
)
render(best_objective_plot)
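The running-best transform used above is worth seeing in isolation: `np.maximum.accumulate` along `axis=1` replaces each trial's objective with the best value observed up to and including that trial. A tiny self-contained example with made-up accuracies:

```python
import numpy as np

# One optimization run (hence the outer wrapping array), five made-up trials.
objectives = np.array([[91.2, 90.5, 92.8, 92.1, 93.4]])

# Each entry becomes the best objective seen so far, giving a monotone trace.
running_best = np.maximum.accumulate(objectives, axis=1)
# running_best -> [[91.2, 91.2, 92.8, 92.8, 93.4]]
```

This is why the plotted trace never decreases even though individual trials can do worse than earlier ones.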
Note that the resulting accuracy on the test set may not exactly match the maximum accuracy achieved on the validation set during optimization.
data = experiment.fetch_data()
df = data.df
best_arm_name = df.arm_name[df['mean'] == df['mean'].max()].values[0]
best_arm = experiment.arms_by_name[best_arm_name]
best_arm
Arm(name='4_0', parameters={'lr': 0.0003995053549220097, 'momentum': 0.4136803150177002})
combined_train_valid_set = torch.utils.data.ConcatDataset([
train_loader.dataset.dataset,
valid_loader.dataset.dataset,
])
combined_train_valid_loader = torch.utils.data.DataLoader(
combined_train_valid_set,
batch_size=BATCH_SIZE,
shuffle=True,
)
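`ConcatDataset` simply chains datasets end to end: indices run through the first dataset, then continue into the second. As a pure-Python illustration of that indexing (the `TinyConcat` class below is a hypothetical stand-in written for this sketch, not PyTorch's implementation):

```python
class TinyConcat:
    """Minimal illustration of ConcatDataset-style indexing."""

    def __init__(self, datasets):
        self.datasets = datasets

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, idx):
        # Walk the datasets, subtracting each one's length until idx lands.
        for d in self.datasets:
            if idx < len(d):
                return d[idx]
            idx -= len(d)
        raise IndexError(idx)

combined = TinyConcat([["a", "b"], ["c"]])
```

Here index 2 falls past the two-element first dataset and resolves to the first element of the second, mirroring how the combined train+validation loader sees one contiguous dataset.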
net = train(
net=CNN(),
train_loader=combined_train_valid_loader,
parameters=best_arm.parameters,
dtype=dtype,
device=device,
)
test_accuracy = evaluate(
net=net,
data_loader=test_loader,
dtype=dtype,
device=device,
)
print(f"Classification Accuracy (test set): {round(test_accuracy*100, 2)}%")
Classification Accuracy (test set): 97.72%
Total runtime of script: 2 minutes, 53.77 seconds.