
Platform for AI: AutoML

Last Updated: Mar 27, 2025

Platform for AI (PAI) provides AutoML to help you search for optimal hyperparameter combinations based on specific policies. You can use AutoML to improve the efficiency of model tuning.

Concepts

  • Hyperparameter: an external parameter used to train a model. You configure hyperparameters before training starts, and they remain unchanged during training. In contrast, the parameters of a model are continually updated and optimized during the machine learning process.

  • Hyperparameter optimization (HPO): a process for manual or automatic hyperparameter optimization. In this topic, HPO refers to the service provided by AutoML that automatically searches and fine-tunes hyperparameters. HPO can help you obtain optimal hyperparameters and improve the model performance in an efficient manner. HPO also allows algorithm developers to focus on modeling.

  • Search space: a range of possible values of hyperparameter combinations. AutoML searches for the optimal hyperparameter combination within this range.

  • Experiment: an experiment created to search for the optimal hyperparameter combination of a model in the search space.

  • Trial: a single run that generates, trains, and evaluates a model by using a specific hyperparameter combination. An experiment runs multiple trials and compares their results to find the optimal hyperparameter combination. For more information, see How AutoML works.

  • Job type: the resources and environment that are used for model training in a trial. Valid values: Deep Learning Containers (DLC) and MaxCompute.

Background information

In machine learning, hyperparameters are a set of parameters that are used to train models. You must configure the hyperparameters before machine learning. After you configure the hyperparameters of a model, the hyperparameters remain unchanged during model training.

HPO is the process of finding the optimal hyperparameters. If a model has multiple hyperparameters, they can be viewed together as a multi-dimensional vector. HPO searches the range of possible vector values for the specific value that yields the best model performance, such as the minimum value of the loss function.

For example, a model has two hyperparameters, A and B. Possible values for A are a, b, and c, and possible values for B are d and e. The model therefore has 3 × 2 = 6 hyperparameter combinations. HPO finds the combination of A and B that achieves the optimal model performance. To obtain it, train the model on the same dataset once with each of the six combinations, and then compare the resulting model performance.
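The exhaustive comparison described above can be sketched as a simple grid search. This is an illustrative example only, not the PAI AutoML API: the `train_and_evaluate` function and its hard-coded losses are hypothetical stand-ins for real trials.

```python
import itertools

def train_and_evaluate(a, b):
    # Hypothetical stand-in for one trial: a real trial would train a
    # model on the dataset and return its validation loss. The scores
    # below are invented for illustration.
    scores = {("a", "d"): 0.42, ("a", "e"): 0.35,
              ("b", "d"): 0.51, ("b", "e"): 0.28,
              ("c", "d"): 0.47, ("c", "e"): 0.33}
    return scores[(a, b)]

# Search space: A has 3 candidate values and B has 2, so there are
# 3 x 2 = 6 hyperparameter combinations to compare.
search_space = {"A": ["a", "b", "c"], "B": ["d", "e"]}

# Run one trial per combination and keep the one with the lowest loss.
best = min(itertools.product(search_space["A"], search_space["B"]),
           key=lambda combo: train_and_evaluate(*combo))
# best is ("b", "e"), the combination with the lowest loss above
```

Exhaustive search like this becomes infeasible as the number of hyperparameters grows, which is why AutoML applies smarter search policies instead of enumerating every combination.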

HPO in AutoML

Hyperparameter fine-tuning is complex because a model can have a large number of hyperparameters with various data types and value ranges. For example, some hyperparameters of a model may be integers while others are floating-point numbers. Manually tuning such a model requires a large amount of computing resources, so an automated system is needed to complete the task. The HPO feature of AutoML can automatically fine-tune these hyperparameters for you.

You can use AutoML to fine-tune hyperparameters in a simple, efficient, and accurate manner. The following list describes the benefits of AutoML:

  • Simplified fine-tuning: AutoML greatly simplifies the process of hyperparameter fine-tuning and saves time by using automated tools.

  • Improved model quality: AutoML integrates multiple algorithms of PAI to quickly find the optimal hyperparameter combination. This helps you train models in a more accurate and efficient manner.

  • Reduced computing resource usage: AutoML evaluates model performance during training to decide whether to terminate the current trial early and move on to another hyperparameter combination. This lets you obtain the optimal hyperparameter combination without fully evaluating every candidate, which saves computing resources.

  • Flexible use of computing power: AutoML allows you to use DLC and MaxCompute resources in a convenient and flexible manner.
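The early-termination idea in the third point can be sketched as follows. This is an illustrative example only, not the PAI implementation: `run_trial`, `simulated_epoch_loss`, and the `patience` threshold are hypothetical names introduced for the sketch.

```python
def simulated_epoch_loss(params, epoch):
    # Hypothetical stand-in for one training epoch: a poor
    # hyperparameter combination plateaus at `plateau_epoch`.
    return max(1.0 - 0.1 * min(epoch, params["plateau_epoch"]), 0.0)

def run_trial(params, max_epochs=10, patience=3):
    """Run one trial with early termination: stop once the loss has not
    improved for `patience` consecutive epochs, freeing resources for
    other hyperparameter combinations."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        loss = simulated_epoch_loss(params, epoch)
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # terminate this trial early
    return best_loss

# A combination that plateaus after 2 epochs is terminated after
# `patience` stagnant epochs instead of running all 10:
loss = run_trial({"plateau_epoch": 2})
```

The same principle underlies common HPO pruning strategies: trials that are clearly underperforming are cut short so that the saved compute can be spent on more promising combinations.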

Scenarios

AutoML is suitable for all hyperparameter fine-tuning scenarios of machine learning. The following list provides common scenarios of machine learning:

  • Binary classification tasks, such as determining whether a user is a paying user.

  • Regression tasks, such as estimating the amount that a user will pay within seven days.

  • Clustering tasks, such as determining the number of branches of a cosmetic brand in a city.

  • Recommendation tasks, such as fine-tuning ranking and retrieval models, or improving the area under the curve (AUC) metric.

  • Deep learning tasks, such as improving the accuracy of image multi-classification and video multi-classification.

Reference

  • How AutoML works

    (Recommended) This topic describes how AutoML works and the relationship between experiments, trials, and training jobs. This helps you become familiar with the concepts and facilitates configuration.

  • Create an experiment

    This topic describes how to create an experiment in the PAI console and how to configure key parameters.

  • AutoML use cases

    This topic provides use cases of how to use AutoML to fine-tune hyperparameters.