TensorFlow Quick Guide
Recommendations for Neural Network Training
In this chapter, we will understand the various aspects of neural network training that can be implemented using the TensorFlow framework.
The following recommendations can be evaluated −
Back Propagation
Back propagation is a simple method for computing partial derivatives: it applies the chain rule to the composition of functions that makes up the network, which is why it is so well suited to neural nets.
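The sketch below is a minimal illustration using TensorFlow 2's tf.GradientTape: a forward pass is recorded and then back-propagated to obtain the partial derivatives of a mean squared error loss with respect to the model variables. The toy linear data and variable names are illustrative assumptions, not part of the original tutorial.

```python
import tensorflow as tf

# Toy linear data: y = 3x + 2 plus noise (illustrative values).
x = tf.random.normal([64, 1])
y = 3.0 * x + 2.0 + 0.1 * tf.random.normal([64, 1])

w = tf.Variable(tf.random.normal([1, 1]))
b = tf.Variable(tf.zeros([1]))

with tf.GradientTape() as tape:
    y_pred = tf.matmul(x, w) + b                   # forward pass
    loss = tf.reduce_mean(tf.square(y_pred - y))   # mean squared error

# Back propagation: reverse-mode differentiation of the recorded
# operations yields d(loss)/dw and d(loss)/db in a single call.
dw, db = tape.gradient(loss, [w, b])
print(dw.numpy(), db.numpy())
```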
Stochastic Gradient Descent
In gradient descent, a batch is the set of examples used to calculate the gradient in a single iteration. So far it has been assumed that the batch is the entire data set, but at Google scale data sets often contain billions or even hundreds of billions of examples, which makes full-batch gradients impractical. Stochastic gradient descent instead uses a small, randomly chosen batch (in the extreme, a single example) per iteration.
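As a hedged sketch of this idea, the snippet below trains on a small synthetic regression problem: rather than computing one gradient over all examples, it draws shuffled mini-batches of 32 examples and applies tf.keras.optimizers.SGD to each batch. The data shapes, batch size, and learning rate are illustrative choices, not prescriptions.

```python
import tensorflow as tf

# Synthetic regression data (illustrative).
x = tf.random.normal([1000, 4])
true_w = tf.constant([[1.0], [-2.0], [0.5], [3.0]])
y = tf.matmul(x, true_w) + 0.1 * tf.random.normal([1000, 1])

# Instead of one gradient over all 1000 examples, draw shuffled
# mini-batches of 32 examples per iteration.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1000).batch(32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

for epoch in range(5):
    for batch_x, batch_y in dataset:
        with tf.GradientTape() as tape:
            loss = loss_fn(batch_y, model(batch_x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch}: last batch loss {loss.numpy():.4f}")
```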
Learning Rate Decay
Adapting the learning rate is one of the most important aspects of gradient descent optimization, and it is crucial for TensorFlow implementations. A common strategy is to decay the learning rate as training progresses.
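A minimal sketch of learning rate decay in TensorFlow is shown below, using tf.keras.optimizers.schedules.ExponentialDecay attached to an SGD optimizer. The initial rate, decay steps, and decay rate are illustrative values only.

```python
import tensorflow as tf

# Exponential decay schedule: start at 0.1 and multiply by 0.96
# every 1000 steps (all values here are illustrative).
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=1000,
    decay_rate=0.96,
    staircase=True)

# The schedule can be passed directly to an optimizer; the effective
# learning rate then shrinks as the training step advances.
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)

for step in (0, 1000, 5000):
    print(step, float(schedule(step)))
```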
Dropout
Deep neural nets with a large number of parameters form powerful machine learning systems. However, overfitting is a serious problem in such networks. Dropout addresses this by randomly dropping units and their connections during training.
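One possible sketch of how dropout is typically applied in TensorFlow is shown below, inserting tf.keras.layers.Dropout between the dense layers of a small classifier. The layer sizes and the 0.5 drop rate are illustrative assumptions rather than recommended settings.

```python
import tensorflow as tf

# Small classifier with dropout between the dense layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # randomly zeroes 50% of activations while training
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dropout is applied only when training=True; at inference time the
# layer is a no-op and the full network is used.
x = tf.random.normal([8, 20])
print(model(x, training=True).shape)
```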