Welcome to Ax Tutorials
Here you can learn about the structure and applications of Ax from examples.
Our three API tutorials (Loop, Service, and Developer) are a good place to start. Each tutorial showcases optimization on a constrained Hartmann6 problem, with the Loop API being the simplest to use and the Developer API the most customizable.
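As a quick taste of the simplest of the three, below is a minimal sketch of the Loop API on the constrained Hartmann6 problem. This is a sketch in the spirit of the tutorial rather than a verbatim excerpt; argument names follow the classic Ax interface and may differ slightly across versions.

```python
import numpy as np

from ax import optimize
from ax.utils.measurement.synthetic_functions import hartmann6


def evaluate(parameterization):
    # Map the parameterization dict to the 6-dimensional input of Hartmann6.
    x = np.array([parameterization[f"x{i}"] for i in range(1, 7)])
    # Each metric maps to a (mean, SEM) tuple; an SEM of 0.0 means noiseless.
    return {
        "hartmann6": (hartmann6(x), 0.0),
        "l2norm": (float(np.sqrt((x**2).sum())), 0.0),
    }


best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]} for i in range(1, 7)
    ],
    evaluation_function=evaluate,
    objective_name="hartmann6",
    minimize=True,  # Hartmann6 is a minimization problem
    outcome_constraints=["l2norm <= 1.25"],  # the constraint used in the tutorials
    total_trials=30,
)
```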
Further, we explore the different components available in Ax in more detail. The components covered below serve to set up an experiment, visualize its results, configure an optimization algorithm, run an entire experiment in a managed closed loop, and combine BoTorch components in Ax in a modular way.
- Visualizations illustrates the different plots available for viewing and understanding your results (a minimal plotting sketch follows this list).
- GenerationStrategy steps through setting up a way to specify the optimization algorithm (or several to chain in sequence); see the sketch after this list. A `GenerationStrategy` is an important component of the Service API and the `Scheduler`.
- Scheduler demonstrates an example of a managed and configurable closed-loop optimization, conducted asynchronously (see the sketch after this list). `Scheduler` is a manager abstraction in Ax that deploys trials, polls them, and uses their results to produce more trials.
- Modular `BoTorchModel` walks through a new beta feature, an improved interface between Ax and BoTorch, which allows combining arbitrary BoTorch components like `AcquisitionFunction`, `Model`, and `AcquisitionObjective` into a single `Model` in Ax (sketched below).
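As referenced above, here is a minimal plotting sketch. It assumes a fitted `model`, such as the one returned by the Loop API sketch earlier; `plot_contour` and `render` are part of Ax's plotting utilities, though exact signatures can vary by version.

```python
from ax.plot.contour import plot_contour
from ax.utils.notebook.plotting import init_notebook_plotting, render

init_notebook_plotting()  # set up Plotly rendering inside a notebook
# Contour of the modeled hartmann6 objective over two of the six inputs;
# `model` is assumed to come from a completed run (see the Loop API sketch).
render(plot_contour(model=model, param_x="x1", param_y="x2", metric_name="hartmann6"))
```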
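A `GenerationStrategy` chains model-based generation steps. A common pattern, sketched here with names from the classic Ax model registry (verify against your installed version), is quasi-random initialization followed by Bayesian optimization:

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

generation_strategy = GenerationStrategy(
    steps=[
        # Quasi-random Sobol exploration for the first 5 trials...
        GenerationStep(model=Models.SOBOL, num_trials=5),
        # ...then GP-based Bayesian optimization for all remaining trials.
        GenerationStep(model=Models.GPEI, num_trials=-1),  # -1 means "no limit"
    ]
)
```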
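The `Scheduler` pattern is sketched below using a toy experiment from Ax's test stubs, which ships with a synthetic runner, so trials can be deployed and polled without any external infrastructure. Real deployments implement their own `Runner` and `Metric` subclasses, as the tutorial shows; treat this as a sketch of the pattern rather than a drop-in recipe.

```python
from ax.modelbridge.dispatch_utils import choose_generation_strategy
from ax.service.scheduler import Scheduler, SchedulerOptions
from ax.utils.testing.core_stubs import get_branin_experiment

experiment = get_branin_experiment()  # toy experiment with a synthetic runner attached
scheduler = Scheduler(
    experiment=experiment,
    generation_strategy=choose_generation_strategy(search_space=experiment.search_space),
    options=SchedulerOptions(total_trials=10),
)
# Deploys trials, polls their status, and generates new ones until the budget is spent.
scheduler.run_all_trials()
```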
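The modular interface is sketched below: a BoTorch `Model` class and `AcquisitionFunction` class are plugged into a single Ax model. This assumes an existing Ax `experiment` with data attached; class paths follow the beta interface and may move between versions.

```python
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.models.gp_regression import SingleTaskGP

model_bridge = Models.BOTORCH_MODULAR(
    experiment=experiment,                    # an existing Ax experiment (assumption)
    data=experiment.fetch_data(),             # ...with data attached
    surrogate=Surrogate(SingleTaskGP),        # any BoTorch Model class
    botorch_acqf_class=qExpectedImprovement,  # any BoTorch AcquisitionFunction class
)
generator_run = model_bridge.gen(n=1)  # generate the next candidate arm
```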
Our other Bayesian Optimization tutorials include:
- Hyperparameter Optimization for PyTorch provides an example of hyperparameter optimization with Ax and integration with an external ML library.
- Hyperparameter Optimization on SLURM via SubmitIt shows how to use the AxClient to schedule jobs and tune hyperparameters on a Slurm cluster (the basic AxClient loop is sketched after this list).
- Hyperparameter Optimization via Raytune provides an example of parallelized hyperparameter optimization using Ax + Raytune.
- Multi-Task Modeling illustrates multi-task Bayesian Optimization on a constrained synthetic Hartmann6 problem.
- Multi-Objective Optimization demonstrates Multi-Objective Bayesian Optimization on a synthetic Branin-Currin test function.
- Trial-Level Early Stopping shows how to use trial-level early stopping on an ML training job to save resources and iterate faster.
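For reference, the basic AxClient loop mentioned in the SubmitIt tutorial looks roughly like this (a sketch against the classic `AxClient` interface; newer versions replace `objective_name`/`minimize` with an `objectives` argument). Here the evaluation runs inline, but the same loop works when trials are dispatched to a cluster:

```python
import numpy as np

from ax.service.ax_client import AxClient
from ax.utils.measurement.synthetic_functions import hartmann6

ax_client = AxClient()
ax_client.create_experiment(
    name="hartmann6_service",
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]} for i in range(1, 7)
    ],
    objective_name="hartmann6",
    minimize=True,
)

for _ in range(15):
    parameters, trial_index = ax_client.get_next_trial()
    # In the SubmitIt tutorial this evaluation would run as a Slurm job;
    # here we evaluate inline. raw_data may be a float or a (mean, SEM) tuple.
    x = np.array([parameters[f"x{i}"] for i in range(1, 7)])
    ax_client.complete_trial(trial_index=trial_index, raw_data=float(hartmann6(x)))

best_parameters, values = ax_client.get_best_parameters()
```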
For experiments run in real-world settings, refer to our field experiments tutorials:
- Bandit Optimization shows how Thompson Sampling can be used to intelligently reallocate resources to well-performing configurations in real time.
- Human-in-the-Loop Optimization walks through manually influencing the course of optimization in real time.