Machine Learning - Principal Component Analysis

Principal Component Analysis (PCA) is a popular unsupervised dimensionality reduction technique in machine learning used to transform high-dimensional data into a lower-dimensional representation. PCA is used to identify patterns and structure in data by discovering the underlying relationships between variables. It is commonly used in applications such as image processing, data compression, and data visualization.

PCA works by identifying the principal components (PCs) of the data, which are linear combinations of the original variables that capture the most variation in the data. The first principal component accounts for the most variance in the data, followed by the second principal component, and so on. By reducing the dimensionality of the data to only the most significant PCs, PCA can simplify the problem and improve the computational efficiency of downstream machine learning algorithms.

The steps involved in PCA are as follows −

  1. Standardize the data − PCA requires that the data be standardized to have zero mean and unit variance.

  2. Compute the covariance matrix − PCA computes the covariance matrix of the standardized data.

  3. Compute the eigenvectors and eigenvalues of the covariance matrix − PCA then computes the eigenvectors and eigenvalues of the covariance matrix.

  4. Select the principal components − PCA selects the principal components based on their corresponding eigenvalues, which indicate the amount of variation in the data explained by each component.

  5. Project the data onto the new feature space − PCA projects the data onto the new feature space defined by the selected principal components.
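The five steps above can be sketched directly with NumPy, without scikit-learn. The toy data values below are purely illustrative, and this is a minimal sketch rather than a production implementation:

```python
import numpy as np

# Toy data: 6 samples, 3 features (illustrative values only)
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.9],
              [2.2, 2.9, 0.8],
              [1.9, 2.2, 1.1],
              [3.1, 3.0, 0.4],
              [2.3, 2.7, 0.9]])

# Step 1: standardize to zero mean and unit variance
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Step 2: compute the covariance matrix of the standardized data
cov = np.cov(X_std, rowvar=False)

# Step 3: eigenvectors and eigenvalues (eigh is suited to symmetric matrices)
eigvals, eigvecs = np.linalg.eigh(cov)

# Step 4: order components by descending eigenvalue and keep the top 2
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
components = eigvecs[:, :2]

# Step 5: project the data onto the new 2-D feature space
X_proj = X_std @ components

print('Explained variance ratio:', eigvals[:2] / eigvals.sum())
print('Projected shape:', X_proj.shape)
```

The eigenvalue ratios printed at the end play the same role as scikit-learn's `explained_variance_ratio_` in the example that follows.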

Example

Here is an example of how you can implement PCA in Python using the scikit-learn library −

# Import the necessary libraries
import numpy as np
from sklearn.decomposition import PCA

# Load the iris dataset
from sklearn.datasets import load_iris
iris = load_iris()

# Define the predictor variables (X) and the target variable (y)
X = iris.data
y = iris.target

# Standardize the data
X_standardized = (X - np.mean(X, axis=0)) / np.std(X, axis=0)

# Create a PCA object and fit the data
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_standardized)

# Print the explained variance ratio of the selected components
print('Explained variance ratio:', pca.explained_variance_ratio_)

# Plot the transformed data
import matplotlib.pyplot as plt
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()

In this example, we load the iris dataset, standardize the data, and create a PCA object with two components. We then fit the PCA object to the standardized data and project the data onto the first two principal components. We print the explained variance ratio of the selected components and plot the transformed data using the first two principal components as the x and y axes.

Output

When you execute this code, it will produce the following plot as the output −

(Scatter plot of the iris samples on PC1 vs. PC2, colored by species.)
Explained variance ratio: [0.72962445 0.22850762]
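To see how each principal component relates to the original features, you can inspect the fitted model's `components_` attribute, whose rows hold the loadings of each PC on the original features. The sketch below repeats the iris fit from the example above so that it runs on its own:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
X = iris.data

# Standardize the data as in the example above
X_standardized = (X - np.mean(X, axis=0)) / np.std(X, axis=0)

pca = PCA(n_components=2).fit(X_standardized)

# Each row of components_ holds one PC's loadings on the original features
for i, pc in enumerate(pca.components_, start=1):
    pairs = ', '.join(f'{name}: {w:+.2f}'
                      for name, w in zip(iris.feature_names, pc))
    print(f'PC{i} loadings -> {pairs}')
```

A large absolute loading means that feature contributes strongly to that component, which is often the easiest way to give the PCs a rough interpretation.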

Advantages of PCA

Following are the advantages of using Principal Component Analysis −

  1. Reduces dimensionality − PCA is particularly useful for high-dimensional datasets because it can reduce the number of features while retaining most of the original variability in the data.

  2. Removes correlated features − PCA can identify and remove correlated features, which can help improve the performance of machine learning models.

  3. Improves interpretability − The reduced number of features can make it easier to interpret and understand the data.

  4. Reduces overfitting − By reducing the dimensionality of the data, PCA can reduce overfitting and improve the generalizability of machine learning models.

  5. Speeds up computation − With fewer features, the computation required to train machine learning models is faster.
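In practice, the dimensionality reduction and speed-up advantages are usually realized by chaining PCA with standardization and a downstream model. The sketch below uses a scikit-learn `Pipeline` on the iris data; the choice of logistic regression as the classifier is illustrative, not prescribed by the text:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

iris = load_iris()

# Standardize, reduce to 2 components, then classify
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=2)),
    ('clf', LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipe, iris.data, iris.target, cv=5)
print('Mean CV accuracy with 2 PCs:', round(scores.mean(), 3))
```

Putting PCA inside the pipeline ensures the components are fitted only on each training fold, avoiding leakage into the validation folds.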

Disadvantages of PCA

Following are the disadvantages of using Principal Component Analysis −

  1. Information loss − PCA reduces the dimensionality of the data by projecting it onto a lower-dimensional space, which may lead to some loss of information.

  2. Can be sensitive to outliers − PCA can be sensitive to outliers, which can have a significant impact on the resulting principal components.

  3. Interpretability may be reduced − Although PCA can improve interpretability by reducing the number of features, the resulting principal components may be more difficult to interpret than the original features.

  4. Assumes linearity − PCA assumes that the relationships between the features are linear, which may not always be the case.

  5. Requires standardization − PCA requires that the data be standardized, which may not always be possible or appropriate.
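The information-loss trade-off in point 1 can be measured directly: project the data down and back with `inverse_transform`, and watch the reconstruction error shrink as more components are kept. This is a sketch on the iris data:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)

errors = []
for k in range(1, 5):
    pca = PCA(n_components=k)
    # Project down to k components, then map back to the original space
    X_back = pca.inverse_transform(pca.fit_transform(X))
    err = np.mean((X - X_back) ** 2)  # mean squared reconstruction error
    errors.append(err)
    print(f'{k} component(s): reconstruction MSE = {err:.4f}')
```

With all four components retained, the reconstruction is essentially exact; each component dropped trades some reconstruction fidelity for a smaller representation.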