TensorFlow - Optimizers
Optimizers are extended classes that include additional information for training a specific model. An optimizer class is initialized with the given parameters, but it is important to remember that no Tensor is needed. Optimizers are used to improve the speed and performance of training a specific model.
The basic optimizer of TensorFlow is −
tf.train.Optimizer
This class is defined in tensorflow/python/training/optimizer.py.
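Training code rarely uses tf.train.Optimizer directly; instead it instantiates one of its subclasses and calls minimize. Below is a minimal sketch using tf.train.GradientDescentOptimizer (the toy variable and target value are illustrative assumptions, not part of this tutorial) −

import tensorflow as tf

# Toy loss: drive a single variable toward 3.0 (assumed example values).
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)

# minimize() computes the gradients and returns an op that applies the update.
train_op = tf.train.GradientDescentOptimizer(learning_rate = 0.1).minimize(loss)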
Following are some optimizers in TensorFlow (a short sketch using a few of them follows the list) −
- Stochastic Gradient Descent
- Stochastic Gradient Descent with gradient clipping
- Momentum
- Nesterov momentum
- Adagrad
- Adadelta
- RMSProp
- Adam
- Adamax
- SMORMS3
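Most of these map directly to classes under tf.train: tf.train.MomentumOptimizer (which also covers Nesterov momentum via use_nesterov = True), tf.train.AdagradOptimizer, tf.train.AdadeltaOptimizer, tf.train.RMSPropOptimizer and tf.train.AdamOptimizer; Adamax and SMORMS3 have no tf.train class. Gradient clipping is not a separate optimizer either; one common sketch (the clipping range of [-1, 1] and the toy loss are assumed examples, not from this tutorial) combines compute_gradients, tf.clip_by_value and apply_gradients −

import tensorflow as tf

# Toy loss on a single variable (illustrative values).
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)

# Nesterov momentum via the Momentum optimizer.
optimizer = tf.train.MomentumOptimizer(
    learning_rate = 0.01, momentum = 0.9, use_nesterov = True)

# Clip each gradient into [-1, 1] before applying the update.
grads_and_vars = optimizer.compute_gradients(loss)
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v)
           for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped)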
We will focus on Stochastic Gradient Descent. An illustration of creating an optimizer for it is shown below −
import numpy as np
import tensorflow as tf

# Plain SGD: return the list of assign ops that step each
# parameter against its gradient, scaled by the learning rate.
def sgd(cost, params, lr = np.float32(0.01)):
    g_params = tf.gradients(cost, params)
    updates = []
    for param, g_param in zip(params, g_params):
        updates.append(param.assign(param - lr * g_param))
    return updates
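A minimal usage sketch, continuing from the imports and the sgd definition above (the constants and learning rate are assumed example values): fit w so that w * x matches y −

x = tf.constant(2.0)
y = tf.constant(10.0)
w = tf.Variable(1.0)
cost = tf.square(w * x - y)   # minimized at w = 5.0

updates = sgd(cost, [w], lr = np.float32(0.1))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(updates)     # apply one SGD step per iteration
    print(sess.run(w))        # approaches 5.0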
The basic parameters are defined within this function. In the subsequent chapter, we will focus on Gradient Descent Optimization with an implementation of optimizers.