newelm
Create an Elman backpropagation network
Syntax

net = newelm(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description

newelm(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes several arguments:

PR  - R x 2 matrix of min and max values for R input elements.
Si  - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'traingdx'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF  - Performance function, default = 'mse'.

and returns an Elman network.
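For instance, here is a minimal sketch with every argument spelled out (the input range and layer sizes are illustrative, not prescribed by newelm):

% Two-layer Elman network: one input element ranging from -1 to 1,
% 8 tansig neurons in the recurrent layer, 1 purelin output neuron.
% BTF, BLF, and PF are given their default values explicitly.
net = newelm([-1 1],[8 1],{'tansig','purelin'},'traingdx','learngdm','mse');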
BTF can be any of the backprop training functions such as trainlm, trainbfg, trainrp, traingd, etc.
WARNING: trainlm is often used because it is very fast, but it requires a lot of memory to run. If you get an "out-of-memory" error when training, try one of the following:

1. Slow trainlm training, but reduce memory requirements, by setting net.trainParam.mem_reduc to 2 or more. (See trainlm.)
2. Use trainbfg, which is slower but more memory-efficient than trainlm.
3. Use trainrp, which is slower but more memory-efficient than trainbfg.
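As a sketch of these workarounds (the network sizes here are illustrative):

% Keep trainlm but trade speed for memory
net = newelm([0 1],[10 1],{'tansig','logsig'},'trainlm');
net.trainParam.mem_reduc = 2;

% Or pick a more memory-efficient training function outright
net = newelm([0 1],[10 1],{'tansig','logsig'},'trainrp');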
BLF can be either of the backpropagation learning functions learngd or learngdm.
The performance function can be any of the differentiable performance functions such as mse or msereg.
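For example, a sketch combining a non-default learning and performance function (again with illustrative layer sizes):

% Same architecture, but adapt weights with learngd and score
% performance with msereg (mse regularized toward small weights)
net = newelm([0 1],[10 1],{'tansig','logsig'},'traingdx','learngd','msereg');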
Examples

Here is a series of Boolean inputs P, and another sequence T, which is 1 wherever P has had two 1's in a row.

P = round(rand(1,20));
T = [0 (P(1:end-1)+P(2:end) == 2)];

We would like the network to recognize whenever two 1's occur in a row. First we arrange these values as sequences.

Pseq = con2seq(P);
Tseq = con2seq(T);

Next we create an Elman network whose input varies from 0 to 1, and has ten hidden neurons and 1 output.

net = newelm([0 1],[10 1],{'tansig','logsig'});

Then we train the network with a mean squared error goal of 0.1, and simulate it.

net.trainParam.goal = 0.1;
net = train(net,Pseq,Tseq);
Y = sim(net,Pseq)
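To compare the simulated output against the target, one sketch is to convert the output sequence back to concatenated form with seq2con (the inverse of con2seq):

% seq2con returns a cell array; Yc{1} is the 1x20 output matrix.
Yc = seq2con(Y);
round(Yc{1})   % should match T wherever training succeeded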
Algorithm

Elman networks consist of Nl layers using the dotprod weight function, netsum net input function, and the specified transfer functions.
The first layer has weights coming from the input. Each subsequent layer has a weight coming from the previous layer. All layers except the last have a recurrent weight. All layers have biases. The last layer is the network output.
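This connectivity can be inspected on the network object; as a sketch, for the two-layer example network above:

net.inputConnect   % [1; 0]     only layer 1 receives the input
net.layerConnect   % [1 0; 1 0] layer 1 is recurrent; layer 2 takes layer 1
net.biasConnect    % [1; 1]     both layers have biases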
Each layer's weights and biases are initialized with initnw.
Adaptation is done with adaptwb, which updates weights with the specified learning function. Training is done with the specified training function. Performance is measured according to the specified performance function.
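A sketch of incremental adaptation on the example sequences (net, Pseq, and Tseq as above):

% One pass of incremental updates; weights change after each time step
% using the network's learning function (learngdm by default).
net.adaptParam.passes = 1;
[net,Y,E] = adapt(net,Pseq,Tseq);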
See Also

newff, newcf, sim, init, adapt, train