Neural Network Toolbox

learnwh

Widrow-Hoff weight/bias learning function

Syntax
[dW,LS] = learnwh(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learnwh(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnwh(code)
Description

learnwh is the Widrow-Hoff weight/bias learning function, also known as the delta or least mean squared (LMS) rule.
learnwh(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
  W  - S x R weight matrix (or S x 1 bias vector).
  P  - R x Q input vectors (or ones(1,Q)).
  Z  - S x Q weighted input vectors.
  N  - S x Q net input vectors.
  A  - S x Q output vectors.
  T  - S x Q layer target vectors.
  E  - S x Q layer error vectors.
  gW - S x R weight gradient with respect to performance.
  gA - S x Q output gradient with respect to performance.
  D  - S x S neuron distances.
  LP - Learning parameters, none, LP = [].
  LS - Learning state, initially should be = [].
and returns,
  dW - S x R weight (or bias) change matrix.
  LS - New learning state.
Learning occurs according to learnwh's learning parameter, shown here with its default value.
  LP.lr - 0.01 - Learning rate.
learnwh(code) returns useful information for each code string:
  'pnames'    - Names of learning parameters.
  'pdefaults' - Default learning parameters.
  'needg'     - Returns 1 if this function uses gW or gA.
Examples

Here we define a random input P and error E to a layer with a 2-element input and 3 neurons. We also define the learning rate LR learning parameter.

p = rand(2,1);
e = rand(3,1);
lp.lr = 0.5;

Since learnwh only needs these values to calculate a weight change (see algorithm below), we will use them to do so.

dW = learnwh([],p,[],[],[],[],e,[],[],[],lp,[])

Network Use

You can create a standard network that uses learnwh with newlin.
To prepare the weights and the bias of layer i of a custom network to learn with learnwh:

1. Set net.trainFcn to 'trainwb'. (net.trainParam will automatically become trainwb's default parameters.)
2. Set net.adaptFcn to 'adaptwb'. (net.adaptParam will automatically become adaptwb's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnwh'. Set each net.layerWeights{i,j}.learnFcn to 'learnwh'. Set net.biases{i}.learnFcn to 'learnwh'. (Each weight and bias learning parameter will automatically be set to learnwh's default parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (net.adaptParam) properties to desired values.
2. Call train (adapt).

See newlin for adaption and training examples.
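As a minimal sketch of the standard-network route (assuming newlin's defaults in this toolbox version, under which the linear layer's weights and bias already use learnwh), only the learning parameters then need adjusting:

```matlab
% Create a linear layer with one 2-element input (each element
% ranging over [-1 1]) and one neuron; newlin assigns learnwh
% as the weight and bias learning function by default.
net = newlin([-1 1; -1 1],1);
net.inputWeights{1,1}.learnFcn               % should display 'learnwh'
net.biases{1}.learnFcn                       % should display 'learnwh'
net.inputWeights{1,1}.learnParam.lr = 0.5;   % raise lr from its 0.01 default
```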
Algorithm

learnwh calculates the weight change dW for a given neuron from the neuron's input P and error E, and the weight (or bias) learning rate LR, according to the Widrow-Hoff learning rule:

dw = lr*e*pn'
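The rule above is a scaled outer product of the error and input vectors, so a weight change can be checked by hand. A minimal sketch with made-up values (chosen for illustration, not taken from the toolbox documentation):

```matlab
% Widrow-Hoff rule: dW = lr * e * p', an S x R matrix
lr = 0.5;
p  = [1; -2];        % R = 2 input elements
e  = [0.5; 1; -1];   % S = 3 neuron errors
dW = lr * e * p'     % dW(i,j) = lr * e(i) * p(j)
% e.g. dW(1,1) = 0.5 * 0.5 * 1 = 0.25
```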
See Also

newlin, adaptwb, trainwb, adapt, train
References

Widrow, B., and M. E. Hoff, "Adaptive switching circuits," 1960 IRE WESCON Convention Record, New York: IRE, pp. 96-104, 1960.

Widrow, B., and S. D. Sterns, Adaptive Signal Processing, New York: Prentice-Hall, 1985.