Python Deep Learning Tutorial

Python Deep Learning - Fundamentals

In this chapter, we will look into the fundamentals of Python Deep Learning.

Deep learning models/algorithms

Let us now learn about the different deep learning models/algorithms.

Some of the popular models within deep learning are as follows −

  1. Convolutional neural networks

  2. Recurrent neural networks

  3. Deep belief networks

  4. Generative adversarial networks

  5. Auto-encoders and so on

The inputs and outputs are represented as vectors or tensors. For example, a neural network may take an input in which the individual pixel RGB values of an image are represented as a vector.
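As a minimal sketch of this idea, a tiny RGB image can be flattened into a single input vector (the image data here is illustrative, not from the original text):

```python
# A tiny 2x2 "image": each pixel holds (R, G, B) values in [0, 255].
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# Flatten the pixel grid into one input vector, scaling each channel
# to [0, 1] as is commonly done before feeding it to a neural network.
input_vector = [
    channel / 255.0
    for row in image
    for pixel in row
    for channel in pixel
]

print(len(input_vector))  # 2 pixels x 2 pixels x 3 channels = 12 values
```

A real pipeline would typically use NumPy arrays for this, but the flattening step is the same.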

The layers of neurons that lie between the input layer and the output layer are called hidden layers. This is where most of the work happens when the neural net tries to solve problems. Taking a closer look at the hidden layers can reveal a lot about the features the network has learned to extract from the data.

Different architectures of neural networks are formed by choosing which neurons to connect to the other neurons in the next layer.
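One way to picture this choice of connections is as a set of edges between consecutive layers. The sketch below (with illustrative neuron names) contrasts a fully connected layer with a sparser one:

```python
# Layer sizes: 3 neurons in one layer, 2 neurons in the next layer.
prev_layer = ["i0", "i1", "i2"]
next_layer = ["h0", "h1"]

# Fully connected: every neuron feeds every neuron in the next layer.
fully_connected = {h: list(prev_layer) for h in next_layer}

# A sparser architecture: each next-layer neuron sees only some inputs.
sparse = {"h0": ["i0", "i1"], "h1": ["i1", "i2"]}

print(fully_connected["h0"])  # ['i0', 'i1', 'i2']
print(sparse["h0"])           # ['i0', 'i1']
```

Convolutional networks, for instance, are essentially the sparse pattern applied systematically: each neuron connects only to a local patch of the previous layer.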

Pseudocode for calculating output

Following is the pseudocode for calculating the output of a forward-propagating neural network −

  # node[] := array of topologically sorted nodes

  # An edge from a to b means a is to the left of b

  # If the neural network has R inputs and S outputs,

  # then the first R nodes are input nodes and the last S nodes are output nodes.

  # incoming[x] := nodes with edges into node x

  # weights[x] := weights of the incoming edges to node x

For each neuron x, from left to right −

  if x ≤ R:
      do nothing  # it's an input node; its value is already in output[x]
  else:
      inputs[x] = [output[i] for i in incoming[x]]
      weighted_sum = dot_product(weights[x], inputs[x])
      output[x] = activation_function(weighted_sum)
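The pseudocode above can be turned into a small runnable sketch. The network below (two inputs, one hidden node, one output node), the weight values, and the choice of a sigmoid activation are all illustrative assumptions, not part of the original text:

```python
import math


def sigmoid(z):
    # A common activation function; any other could be substituted.
    return 1.0 / (1.0 + math.exp(-z))


def forward(num_inputs, incoming, weights, input_values, activation=sigmoid):
    """Compute outputs of a feed-forward net over topologically sorted nodes.

    Nodes are numbered 0..N-1; the first num_inputs nodes are input nodes.
    incoming[x] lists the nodes feeding node x; weights[x] their edge weights.
    """
    num_nodes = num_inputs + len(incoming)
    output = {}
    for x in range(num_nodes):
        if x < num_inputs:
            output[x] = input_values[x]  # input node: pass its value through
        else:
            inputs = [output[i] for i in incoming[x]]
            weighted_sum = sum(w * v for w, v in zip(weights[x], inputs))
            output[x] = activation(weighted_sum)
    return output


# Tiny net: inputs are nodes 0 and 1, node 2 is hidden, node 3 is the output.
incoming = {2: [0, 1], 3: [2]}
weights = {2: [0.5, -0.5], 3: [1.0]}
result = forward(2, incoming, weights, input_values=[1.0, 0.0])
print(result[3])
```

Because the nodes are processed in topological order, every value in `incoming[x]` has already been computed by the time node x is reached, exactly as the pseudocode requires.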