Machine Learning - Gradient Boosting

A Gradient Boosting Machine (GBM) is a powerful machine learning technique that is widely used for building predictive models. It is an ensemble method that combines the predictions of multiple weaker models to create a stronger and more accurate model.

GBM is a popular choice for a wide range of applications, including regression, classification, and ranking problems. Let's look at how GBM works and how it can be used in machine learning.

What is a Gradient Boosting Machine (GBM)?

GBM is an iterative machine learning algorithm that combines the predictions of multiple decision trees to make a final prediction.

The algorithm works by training a sequence of decision trees, each of which is designed to correct the errors of the previous tree.

In each iteration, the algorithm identifies the samples in the dataset that are most difficult to predict and focuses on improving the model’s performance on these samples.

This is achieved by fitting a new decision tree that is optimized to reduce the errors on the difficult samples. The process continues until a specified stopping criterion is met, such as reaching a certain level of accuracy or a maximum number of iterations.

How Does a Gradient Boosting Machine Work?

The basic steps involved in training a GBM model are as follows −

  1. Initialize the model − The algorithm starts by creating a simple model, such as a single decision tree, to serve as the initial model.

  2. Calculate residuals − The initial model is used to make predictions on the training data, and the residuals are calculated as the differences between the predicted values and the actual values.

  3. Train a new model − A new decision tree is trained on the residuals, with the goal of minimizing the errors on the difficult samples.

  4. Update the model − The predictions of the new model are added to the predictions of the previous model, and the residuals are recalculated based on the updated predictions.

  5. Repeat − Steps 3 and 4 are repeated until a specified stopping criterion is met, as sketched in the code below.
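To make these steps concrete, below is a minimal from-scratch sketch for a squared-error regression problem, where the negative gradient is simply the residual. The function names boost_fit and boost_predict are illustrative, not part of any library; production implementations such as Sklearn's add many refinements.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_fit(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
   # Step 1: initialize the model with a constant prediction (the mean of y)
   f0 = np.mean(y)
   pred = np.full(len(y), f0)
   trees = []
   for _ in range(n_rounds):
      # Step 2: residuals are the actual values minus the current predictions
      residuals = y - pred
      # Step 3: fit a new shallow tree to the residuals
      tree = DecisionTreeRegressor(max_depth=max_depth)
      tree.fit(X, residuals)
      # Step 4: add the new tree's predictions, shrunk by the learning rate
      pred = pred + learning_rate * tree.predict(X)
      trees.append(tree)
   # Step 5: a fixed number of rounds is used here; real implementations
   # may also stop early once a validation criterion is met
   return f0, trees

def boost_predict(X, f0, trees, learning_rate=0.1):
   pred = np.full(X.shape[0], f0)
   for tree in trees:
      pred = pred + learning_rate * tree.predict(X)
   return pred

The learning rate shrinks each tree's contribution, which is the same role the learning_rate parameter plays in Sklearn's gradient boosting estimators.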

GBM can be further improved by introducing regularization techniques, such as L1 and L2 regularization, to prevent overfitting. Additionally, GBM can be extended to handle categorical variables, missing data, and multi-class classification problems.
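For example, Sklearn's GradientBoostingClassifier does not expose L1 or L2 penalties directly, but the histogram-based variant HistGradientBoostingClassifier provides an l2_regularization parameter and handles missing values (NaN) natively. A minimal sketch, with arbitrary parameter values −

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Load data and blank out ~5% of entries to show native NaN handling
data = load_breast_cancer()
X, y = data.data.copy(), data.target
rng = np.random.RandomState(42)
X[rng.rand(*X.shape) < 0.05] = np.nan

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# l2_regularization penalizes leaf values to curb overfitting; the value
# 1.0 is an arbitrary illustration and should be tuned per dataset
model = HistGradientBoostingClassifier(max_iter=100, learning_rate=0.1, l2_regularization=1.0)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))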

Example

Here is an example of implementing GBM using the Sklearn breast cancer dataset −

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Load the breast cancer dataset
data = load_breast_cancer()
X = data.data
y = data.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model using GradientBoostingClassifier
model = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

# Make predictions on the testing set
y_pred = model.predict(X_test)

# Evaluate the model's accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

In this example, we load the breast cancer dataset using Sklearn’s load_breast_cancer function and split it into training and testing sets. We then define the parameters for the GBM model using GradientBoostingClassifier, including the number of estimators (i.e., the number of decision trees), the maximum depth of each decision tree, and the learning rate.

We train the GBM model using the fit method and make predictions on the testing set using the predict method. Finally, we evaluate the model’s accuracy using the accuracy_score function from Sklearn’s metrics module.

Output

When you execute this code, it will produce the following output −

Accuracy: 0.956140350877193

Advantages of Using Gradient Boosting Machines

There are several advantages to using GBM in machine learning −

  1. High accuracy − GBM is known for its high accuracy, as it combines the predictions of multiple weaker models to create a stronger and more accurate model.

  2. Robustness − GBM is robust to outliers and noisy data, as it focuses on improving the model’s performance on the most difficult samples.

  3. Flexibility − GBM can be used for a wide range of applications, including regression, classification, and ranking problems.

  4. Interpretability − GBM provides insights into the importance of different features in making predictions, which can be useful for understanding the underlying factors driving the predictions (see the feature-importance sketch after this list).

  5. Scalability − GBM can handle large datasets and can be parallelized to accelerate the training process.
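As an illustration of the interpretability point above, a fitted GradientBoostingClassifier exposes a feature_importances_ attribute. The following sketch refits a small model on the breast cancer dataset and prints the five most important features −

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Fit a small GBM and rank features by impurity-based importance
data = load_breast_cancer()
model = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(data.data, data.target)

importances = model.feature_importances_
for i in np.argsort(importances)[::-1][:5]:
   print(f"{data.feature_names[i]}: {importances[i]:.3f}")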

Limitations of Gradient Boosting Machines

There are also some limitations to using GBM in machine learning −

  1. Training time − GBM can be computationally expensive and may require a significant amount of training time, especially when working with large datasets.

  2. Hyperparameter tuning − GBM requires careful tuning of hyperparameters, such as the learning rate, number of trees, and maximum depth, to achieve optimal performance (a grid-search sketch follows this list).

  3. Black box model − GBM can be difficult to interpret, as the final model is a combination of multiple decision trees and may not provide clear insights into the underlying factors driving the predictions.
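To illustrate the tuning point, a common approach is a cross-validated grid search with Sklearn's GridSearchCV. The grid values below are arbitrary starting points, not tuned recommendations −

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

data = load_breast_cancer()

# Candidate values are illustrative; realistic grids depend on the dataset
param_grid = {
   "learning_rate": [0.01, 0.1],
   "n_estimators": [100, 200],
   "max_depth": [2, 3],
}

# 5-fold cross-validation over all 8 parameter combinations
search = GridSearchCV(GradientBoostingClassifier(), param_grid, cv=5, scoring="accuracy")
search.fit(data.data, data.target)
print("Best parameters:", search.best_params_)
print("Best CV accuracy:", search.best_score_)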