newlin
net = newlin(PR,S,ID,LR)
net = newlin
Linear layers are often used as adaptive filters for signal processing and prediction.
newlin(PR,S,ID,LR) takes these arguments,

  PR - R x 2 matrix of min and max values for R input elements.
  S  - Number of elements in the output vector.
  ID - Input delay vector, default = [0].
  LR - Learning rate, default = 0.01.

and returns a new linear layer.
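For a layer with more than one input element, PR has one [min max] row per element. The following sketch (sizes and values chosen only for illustration) builds such a layer and simulates one input vector:

PR = [-1 1; 0 5];              % R x 2 range matrix for R = 2 input elements
net = newlin(PR,3,[0],0.01);   % S = 3 neurons, ID = [0], LR = 0.01
p = [0.5; 2];                  % one input vector with two elements
a = sim(net,p)                 % 3x1 output (all zeros until the layer is adapted or trained)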
newlin(PR,S,0,P) takes an alternate argument,

  P - Matrix of input vectors.

and returns a linear layer with the maximum stable learning rate for learning with inputs P.
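The following sketch of the alternate form is illustrative only; the data values are made up, and inspecting net.inputWeights{1,1}.learnParam for the chosen rate is an assumption about where that value is stored:

P = [1 -1 2 0 1 -1];             % matrix of input vectors (R = 1, Q = 6)
net = newlin([-2 2],1,0,P);      % the LR argument is replaced by P; ID must be 0
net.inputWeights{1,1}.learnParam % assumed location of the learning rate used by learnwh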
Call newlin without input arguments to define the network's attributes in a dialog window.
This code creates a single-input (range of [-1 1]) linear layer with one neuron, input delays of 0 and 1, and a learning rate of 0.01. It is simulated for an input sequence P1.
net = newlin([-1 1],1,[0 1],0.01);
P1 = {0 -1 1 1 0 -1 1 0 0 1};
Y = sim(net,P1)

Here targets T1 are defined and the layer adapts to them. (Since this is the first call to adapt, the default input delay conditions are used.)
T1 = {0 -1 0 2 1 -1 0 1 0 1};
[net,Y,E,Pf] = adapt(net,P1,T1); Y

Here the linear layer continues to adapt for a new sequence using the previous final conditions Pf as initial conditions.
P2 = {1 0 -1 -1 1 1 1 0 -1};
T2 = {2 1 -1 -2 0 2 2 1 0};
[net,Y,E,Pf] = adapt(net,P2,T2,Pf); Y

Here we initialize the layer's weights and biases to new values.
net = init(net);

Here we train the newly initialized layer on the entire sequence for 200 epochs to an error goal of 0.1.
P3 = [P1 P2];
T3 = [T1 T2];
net.trainParam.epochs = 200;
net.trainParam.goal = 0.1;
net = train(net,P3,T3);
Y = sim(net,[P1 P2])

Linear layers consist of a single layer with the dotprod weight function, netsum net input function, and purelin transfer function. The layer has a weight from the input and a bias. Weights and biases are initialized with initzero. Adaption and training are done with adaptwb and trainwb, which both update weight and bias values with learnwh. Performance is measured with mse.
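As a sketch of what this wiring computes, a linear layer with no input delays gives purelin(W*p + b), which equals W*p + b. The weight and bias values below are set by hand purely for illustration:

net = newlin([-1 1; -2 2],1,0,0.01);          % two input elements, one neuron, no delays
net.IW{1,1} = [2 -1];                         % input weight, set by hand for this sketch
net.b{1} = 0.5;                               % bias, set by hand for this sketch
p = [0.5; -1];
a_manual = purelin(net.IW{1,1}*p + net.b{1})  % dotprod, then netsum, then purelin
a_sim = sim(net,p)                            % matches a_manual (2.5)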
See also: newlind, sim, init, adapt, train