Machine Learning - Concise Tutorial

Machine Learning - Bias and Variance

Bias and variance are two important concepts in machine learning that describe the sources of error in a model’s predictions. Bias refers to the error that results from oversimplifying the underlying relationship between the input features and the output variable, while variance refers to the error that results from being too sensitive to fluctuations in the training data.

In machine learning, we strive to minimize both bias and variance in order to build a model that can accurately predict on unseen data. A model with high bias may be too simplistic and underfit the training data, while a model with high variance may overfit the training data and fail to generalize to new data.
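The trade-off described above can be stated precisely. For squared loss, the expected prediction error at a point x decomposes into three terms (a standard result, stated here for reference):

```latex
\mathbb{E}\left[(y - \hat{f}(x))^2\right]
  = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

Minimizing one term in isolation tends to inflate the other: a more flexible model shrinks the bias term but grows the variance term, which is exactly the pattern the examples below exhibit.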

Example

Below is an implementation example in Python that illustrates how bias and variance can be analyzed using the Boston Housing dataset:

import numpy as np
from sklearn.datasets import load_boston  # removed in scikit-learn 1.2; requires scikit-learn < 1.2
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the Boston Housing dataset
boston = load_boston()
X = boston.data
y = boston.target

# Hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
   X, y, test_size=0.2, random_state=42)

# Fit a plain linear regression model (prone to high bias if the
# true relationship is non-linear)
lr = LinearRegression()
lr.fit(X_train, y_train)

# Error on the data the model was fit on (mainly reflects bias)
train_preds = lr.predict(X_train)
train_mse = mean_squared_error(y_train, train_preds)
print("Training MSE:", train_mse)

# Error on held-out data (reflects bias plus variance)
test_preds = lr.predict(X_test)
test_mse = mean_squared_error(y_test, test_preds)
print("Testing MSE:", test_mse)

Output

The output shows the training and testing mean squared errors (MSE) of the linear regression model. The training MSE is 21.64 and the testing MSE is 24.29, indicating that the model has a moderate level of bias and variance.

Training MSE: 21.641412753226312
Testing MSE: 24.291119474973456

Reducing Bias and Variance

To reduce bias, we can use more complex models that can capture non-linear relationships in the data.

Example

Let’s try a polynomial regression model:

from sklearn.preprocessing import PolynomialFeatures

# Expand the features with all degree-2 terms; fit the transformer on
# the training set only, then apply the same transform to the test set
poly = PolynomialFeatures(degree=2)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.transform(X_test)

pr = LinearRegression()
pr.fit(X_train_poly, y_train)

train_preds = pr.predict(X_train_poly)
train_mse = mean_squared_error(y_train, train_preds)
print("Training MSE:", train_mse)

test_preds = pr.predict(X_test_poly)
test_mse = mean_squared_error(y_test, test_preds)
print("Testing MSE:", test_mse)

The output shows the training and testing MSE of the polynomial regression model with degree=2. The training MSE is 5.31 and the testing MSE is 14.18, indicating that the model has a lower bias but higher variance compared to the linear regression model.

Training MSE: 5.31446956670908
Testing MSE: 14.183558207567042
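One way to see why the degree-2 model has higher variance is to count how many parameters it must estimate: `PolynomialFeatures` expands the 13 original columns into far more. The sketch below uses an arbitrary placeholder matrix with the same 13-column width as the Boston data (the values themselves do not matter here):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Placeholder matrix: same column count (13) as the Boston data,
# but arbitrary values — only the shape matters for this check
X_demo = np.random.RandomState(0).rand(5, 13)

poly = PolynomialFeatures(degree=2)
X_demo_poly = poly.fit_transform(X_demo)

# 1 bias column + 13 linear terms + 91 squared/pairwise terms = 105
print(X_demo_poly.shape[1])
```

With 105 coefficients instead of 13, the model has far more freedom to chase noise in the training set, which is the variance we see in the widened train/test gap.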

Example

To reduce variance, we can use regularization techniques such as ridge regression or lasso regression. In the following example, we will be using ridge regression:

from sklearn.linear_model import Ridge

# L2 regularization; larger alpha shrinks the coefficients more strongly
ridge = Ridge(alpha=1)
ridge.fit(X_train_poly, y_train)

train_preds = ridge.predict(X_train_poly)
train_mse = mean_squared_error(y_train, train_preds)
print("Training MSE:", train_mse)

test_preds = ridge.predict(X_test_poly)
test_mse = mean_squared_error(y_test, test_preds)
print("Testing MSE:", test_mse)

The output shows the training and testing MSE of the ridge regression model with alpha=1. The training MSE is 9.03 and the testing MSE is 13.88, indicating that, compared to the unregularized polynomial regression model, this model has lower variance but slightly higher bias.

Training MSE: 9.03220937860839
Testing MSE: 13.882093755326755
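Lasso regression, the other regularizer mentioned above, works similarly but uses an L1 penalty, which drives many coefficients to exactly zero. Since the variables from the earlier snippets depend on the Boston data, the sketch below uses a small synthetic regression problem instead; the dataset, alpha value, and sizes are illustrative assumptions, not from the original example:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the Boston data (illustrative only)
X, y = make_regression(n_samples=200, n_features=13, noise=10.0,
                       random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

poly = PolynomialFeatures(degree=2)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.transform(X_test)

# L1 regularization zeroes out many of the polynomial coefficients,
# reducing variance by effectively removing features
lasso = Lasso(alpha=1.0, max_iter=10000)
lasso.fit(X_train_poly, y_train)

print("Training MSE:", mean_squared_error(y_train, lasso.predict(X_train_poly)))
print("Testing MSE:", mean_squared_error(y_test, lasso.predict(X_test_poly)))
print("Zeroed coefficients:", int(np.sum(lasso.coef_ == 0)))
```

The count of zeroed coefficients is what distinguishes lasso from ridge in practice: ridge shrinks all coefficients toward zero, while lasso performs feature selection as a side effect.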

Example

We can further tune the hyperparameter alpha to find the optimal balance between bias and variance. Let’s see an example:

from sklearn.model_selection import GridSearchCV

# Search alpha over 0.001, 0.01, ..., 1000 with 5-fold cross-validation
param_grid = {'alpha': np.logspace(-3, 3, 7)}
ridge_cv = GridSearchCV(Ridge(), param_grid, cv=5)
ridge_cv.fit(X_train_poly, y_train)

train_preds = ridge_cv.predict(X_train_poly)
train_mse = mean_squared_error(y_train, train_preds)
print("Training MSE:", train_mse)

test_preds = ridge_cv.predict(X_test_poly)
test_mse = mean_squared_error(y_test, test_preds)
print("Testing MSE:", test_mse)

The output shows the training and testing MSE of the ridge regression model with the optimal alpha value.

Training MSE: 8.326082686584716
Testing MSE: 12.873907256619141

The training MSE is 8.32 and the testing MSE is 12.87, indicating that the model has a good balance between bias and variance.
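After the search finishes, the winning alpha itself is worth inspecting: the fitted `GridSearchCV` object exposes it via `best_params_` and the corresponding mean cross-validated score via `best_score_`. A self-contained sketch on synthetic data follows; the dataset and grid here are placeholders rather than the Boston setup above:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Easy synthetic problem (illustrative only)
X, y = make_regression(n_samples=150, n_features=10, noise=5.0,
                       random_state=0)

param_grid = {'alpha': np.logspace(-3, 3, 7)}  # 0.001, 0.01, ..., 1000
ridge_cv = GridSearchCV(Ridge(), param_grid, cv=5)
ridge_cv.fit(X, y)

# Which alpha won, and its mean cross-validated R^2 score
print("Best alpha:", ridge_cv.best_params_['alpha'])
print("Best CV score:", ridge_cv.best_score_)
```

Checking the chosen alpha also guards against a poorly specified grid: if the best value sits at either end of the range, the grid should be widened and the search rerun.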