Machine Learning - Perceptron
The Perceptron is one of the oldest and simplest neural network architectures. It was invented by Frank Rosenblatt in the 1950s. The Perceptron algorithm is a linear classifier that classifies an input into one of two possible output categories. It is a supervised learning algorithm that trains the model on labeled training data. The Perceptron algorithm is based on a threshold function that takes the weighted sum of the inputs and applies a threshold to generate a binary output.
Architecture of Perceptron
A single-layer Perceptron consists of an input layer, a weight layer, and an output layer. Each node in the input layer is connected to each node in the weight layer, with a weight assigned to each connection. Each node in the weight layer computes a weighted sum of its inputs and applies a threshold function to generate the output.
The threshold function in the Perceptron is the Heaviside step function, which returns a binary value of 1 if the input is greater than or equal to zero, and 0 otherwise. The output of each node in the weight layer is determined by −
y=\left\{\begin{matrix} 1; & if\: w_{0}+w_{1}x_{1}+w_{2}x_{2}+\cdots +w_{n}x_{n}\geq 0 \\ 0; & otherwise \\ \end{matrix}\right.
Where "y" is the output; x1, x2, …, xn are the input features; and w0, w1, w2, …, wn are the corresponding weights, with w0 acting as the bias term. The ≥ 0 condition is what implements the Heaviside step function.
Training of Perceptron
The training process of the Perceptron algorithm involves iteratively updating the weights until the model converges to a set of weights that correctly classifies all training examples (convergence is guaranteed only when the training data is linearly separable). Initially, the weights are set to random values or zeros. For each training example, the predicted output is compared to the actual output, and the weights are updated accordingly to reduce the error.
The weight update rule in the Perceptron is as follows −
w_{i}=w_{i}+\alpha \times \left ( y-y' \right )\times x_{i}
Where wi is the weight of the i-th feature, $\alpha$ is the learning rate, y is the actual output, y′ is the predicted output, and xi is the i-th input feature.
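To make the update rule concrete, here is a minimal sketch of a single update step in Python; the weight, input, and output values are assumed purely for illustration −

import numpy as np

alpha = 0.1                        # learning rate
w = np.array([0.2, -0.1])          # current weights w1, w2
x = np.array([1.0, 0.0])           # input features x1, x2
y_actual = 1                       # actual output y
y_pred = 0                         # predicted output y'

# Apply w_i = w_i + alpha * (y - y') * x_i to all weights at once
w = w + alpha * (y_actual - y_pred) * x
print(w)   # [ 0.3 -0.1]: only the weight of the non-zero input changes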
Implementation of Perceptron in Python
The Perceptron algorithm can be implemented in Python using the scikit-learn library. The scikit-learn library provides a Perceptron class that can be used for binary classification problems (and, through a one-vs-rest scheme, for multi-class problems such as the iris dataset used below).
Here is an example of implementing the Perceptron algorithm in Python using scikit-learn −
Example
from sklearn.linear_model import Perceptron
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load the iris dataset
iris = load_iris()
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=0)
# Create a Perceptron object with a learning rate of 0.1
perceptron = Perceptron(eta0=0.1)
# Train the Perceptron on the training data
perceptron.fit(X_train, y_train)
# Use the trained Perceptron to make predictions on the testing data
y_pred = perceptron.predict(X_test)
# Evaluate the accuracy of the Perceptron
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
When you execute this code, it will produce output similar to the following (the exact accuracy may vary slightly between runs, because the training data is shuffled without a fixed seed) −
Accuracy: 0.8
Once the perceptron is trained, it can be used to make predictions on new input data. Given a set of input values, the perceptron computes a weighted sum of the inputs and applies an activation function to the sum to obtain the output value. This output value can then be interpreted as a prediction for the corresponding input.
Role of Step Functions in the Training of Perceptrons
The activation function used in a perceptron can vary, but a common choice is the step function. The step function used here returns 1 if the input is greater than or equal to zero, and 0 otherwise, consistent with the Heaviside step function defined earlier. This function is useful because it provides a binary output, which can be interpreted as a prediction for a binary classification problem.
Here is an example implementation of a perceptron in Python using the step function as the activation function −
import numpy as np

class Perceptron:
    def __init__(self, learning_rate=0.1, epochs=100):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = None
        self.bias = None

    def step_function(self, x):
        return np.where(x >= 0, 1, 0)

    def fit(self, X, y):
        n_samples, n_features = X.shape
        # initialize weights and bias to 0
        self.weights = np.zeros(n_features)
        self.bias = 0
        # iterate over epochs and update weights and bias
        for _ in range(self.epochs):
            for i in range(n_samples):
                linear_output = np.dot(self.weights, X[i]) + self.bias
                y_pred = self.step_function(linear_output)
                # update weights and bias based on error
                update = self.learning_rate * (y[i] - y_pred)
                self.weights += update * X[i]
                self.bias += update

    def predict(self, X):
        linear_output = np.dot(X, self.weights) + self.bias
        y_pred = self.step_function(linear_output)
        return y_pred
In this implementation, the Perceptron class takes two parameters: learning_rate and epochs. The fit method trains the perceptron on the input data X and the corresponding target values y. The predict method takes an input data array and returns the predicted output values.
To use this implementation, we can create an instance of the Perceptron class and call the fit method to train the model −
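# Training data for the logical AND function: the output is 1 only when both inputs are 1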
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
perceptron = Perceptron(learning_rate=0.1, epochs=10)
perceptron.fit(X, y)
Once the model is trained, we can make predictions on new input data using the predict method −
test_data = np.array([[1, 1], [0, 1]])
predictions = perceptron.predict(test_data)
print(predictions)
The output of this code is [1 0], which are the predicted values for the input data [[1, 1], [0, 1]]. Because the perceptron was trained on the truth table of the logical AND function, it predicts 1 for [1, 1] and 0 for [0, 1].