PyBrain - Layers
Layers are basically sets of functions that are applied on the hidden layers of a network.
We will go through the following details about layers in this chapter −
- Understanding Layers
- Creating a Layer in Pybrain
Understanding Layers
We have seen examples earlier where we have used layers as follows −
- TanhLayer
- SoftmaxLayer
Example using TanhLayer
Below is one example where we have used TanhLayer for building a network −
testnetwork.py
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import TanhLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)
# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)
# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)
# Add input and target values to dataset
# Values for NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))
# Add input and target values to dataset
# Values for NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))
# Train the network with the norgate dataset.
trainer = BackpropTrainer(nn, norgate)
# Run the training loop 1000 times.
for epoch in range(1000):
    trainer.train()
# Test the trained network on the nortrain dataset.
trainer.testOnData(dataset=nortrain, verbose=True)
Output
The output for the above code is as follows −
C:\pybrain\pybrain\src>python testnetwork.py
Testing on data:
('out: ', '[0.887 ]')
('correct:', '[1 ]')
error: 0.00637334
('out: ', '[0.149 ]')
('correct:', '[0 ]')
error: 0.01110338
('out: ', '[0.102 ]')
('correct:', '[0 ]')
error: 0.00522736
('out: ', '[-0.163]')
('correct:', '[0 ]')
error: 0.01328650
('All errors:', [0.006373344564625953, 0.01110338071737218,
0.005227359234093431, 0.01328649974219942])
('Average error:', 0.008997646064572746)
('Max error:', 0.01328649974219942, 'Median error:', 0.01110338071737218)
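As a quick, optional check (a hedged sketch that is not part of the original script), you can also query the trained network directly; activate() runs a single forward pass and returns the network's output −
print(nn.activate((0, 0)))   # should be close to 1 for NOR
print(nn.activate((1, 1)))   # should be close to 0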
Example using SoftmaxLayer
Below is one example where we have used SoftmaxLayer for building a network −
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import SoftmaxLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=SoftmaxLayer)
# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)
# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)
# Add input and target values to dataset
# Values for NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))
# Add input and target values to dataset
# Values for NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))
# Train the network with the norgate dataset.
trainer = BackpropTrainer(nn, norgate)
# Run the training loop 1000 times.
for epoch in range(1000):
    trainer.train()
# Test the trained network on the nortrain dataset.
trainer.testOnData(dataset=nortrain, verbose=True)
Output
The output is as follows −
C:\pybrain\pybrain\src>python example16.py
Testing on data:
('out: ', '[0.918 ]')
('correct:', '[1 ]')
error: 0.00333524
('out: ', '[0.082 ]')
('correct:', '[0 ]')
error: 0.00333484
('out: ', '[0.078 ]')
('correct:', '[0 ]')
error: 0.00303433
('out: ', '[-0.082]')
('correct:', '[0 ]')
error: 0.00340005
('All errors:', [0.0033352368788838365, 0.003334842961037291,
0.003034328685718761, 0.0034000458892589056])
('Average error:', 0.0032761136037246985)
('Max error:', 0.0034000458892589056, 'Median error:', 0.0033352368788838365)
Creating a Layer in Pybrain
In Pybrain, you can create your own layer as follows −
To create a layer, you need to use the NeuronLayer class as the base class for all types of layers.
Example
from pybrain.structure.modules.neuronlayer import NeuronLayer

class LinearLayer(NeuronLayer):
    def _forwardImplementation(self, inbuf, outbuf):
        # The identity layer simply copies its input to its output.
        outbuf[:] = inbuf
    def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
        # The derivative of the identity is 1, so the error passes through unchanged.
        inerr[:] = outerr
To create a layer, we need to implement two methods: _forwardImplementation() and _backwardImplementation().
The _forwardImplementation() method takes in 2 arguments, inbuf and outbuf, which are Scipy arrays; their sizes depend on the layer's input and output dimensions.
The _backwardImplementation() method is used to calculate the derivative of the output with respect to the given input.
So to implement a layer in Pybrain, this is the skeleton of the layer class −
from pybrain.structure.modules.neuronlayer import NeuronLayer

class NewLayer(NeuronLayer):
    def _forwardImplementation(self, inbuf, outbuf):
        pass
    def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
        pass
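For an elementwise activation f, the backward pass is just the chain rule: the incoming error outerr is multiplied by the derivative f'(inbuf). As an illustrative sketch (not from the original tutorial), a tanh layer could fill in the skeleton like this −
import numpy as np
from pybrain.structure.modules.neuronlayer import NeuronLayer

class MyTanhLayer(NeuronLayer):
    def _forwardImplementation(self, inbuf, outbuf):
        # Forward pass: apply tanh elementwise.
        outbuf[:] = np.tanh(inbuf)
    def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
        # d/dx tanh(x) = 1 - tanh(x)**2; tanh(inbuf) is already stored in outbuf.
        inerr[:] = (1 - outbuf**2) * outerr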
In case you want to implement a quadratic polynomial function as a layer, you can do so as follows.
Consider the polynomial function −
f(x) = 3x²
The derivative of the above polynomial function is −
f'(x) = 6x
The final layer class for the above polynomial function is as follows −
testlayer.py
from pybrain.structure.modules.neuronlayer import NeuronLayer

class PolynomialLayer(NeuronLayer):
    def _forwardImplementation(self, inbuf, outbuf):
        # Forward pass: f(x) = 3x²
        outbuf[:] = 3 * inbuf**2
    def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
        # Backward pass: chain rule with f'(x) = 6x
        inerr[:] = 6 * inbuf * outerr
Now let us make use of the layer created as shown below −
testlayer1.py
from testlayer import PolynomialLayer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tests.helpers import gradientCheck
n = buildNetwork(2, 3, 1, hiddenclass=PolynomialLayer)
n.randomize()
gradientCheck(n)
gradientCheck() tests whether the layer is working fine or not. We need to pass the network where the layer is used to gradientCheck(n). It will give the output "Perfect Gradient" if the layer is working fine.
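Once the gradient check passes, the custom layer can be used like any other. As a small usage sketch (continuing from testlayer1.py above), activate() runs a single forward pass through the network −
# Forward pass through the network built with PolynomialLayer.
print(n.activate((1, 2)))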