learnlv2

LVQ2 weight learning function

Syntax
[dW,LS] = learnlv2(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnlv2(code)
Description

learnlv2 is the LVQ2 weight learning function.
learnlv2(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

  W  - S x R weight matrix (or S x 1 bias vector).
  P  - R x Q input vectors (or ones(1,Q)).
  Z  - S x Q weighted input vectors.
  N  - S x Q net input vectors.
  A  - S x Q output vectors.
  T  - S x Q layer target vectors.
  E  - S x Q layer error vectors.
  gW - S x R weight gradient with respect to performance.
  gA - S x Q output gradient with respect to performance.
  D  - S x S neuron distances.
  LP - Learning parameters, none, LP = [].
  LS - Learning state, initially should be = [].

and returns,

  dW - S x R weight (or bias) change matrix.
  LS - New learning state.
Learning occurs according to learnlv2's learning parameter, shown here with its default value.

  LP.lr - 0.01 - Learning rate.
learnlv2(code) returns useful information for each code string:

  'pnames'    - Names of learning parameters.
  'pdefaults' - Default learning parameters.
  'needg'     - Returns 1 if this function uses gW or gA.
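For example, you can query learnlv2 directly with these code strings (the comments reflect the parameters described above):

  pnames = learnlv2('pnames')        % names of learning parameters
  pdefaults = learnlv2('pdefaults')  % default learning parameters
  needg = learnlv2('needg')          % 1, since learnlv2 uses gA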
Examples

Here we define a sample input P, output A, weight matrix W, and output gradient gA for a layer with a 2-element input and 3 neurons. We also define the learning rate LR.
  p = rand(2,1);
  w = rand(3,2);
  n = negdist(w,p);
  a = compet(n);
  gA = [-1;1;1];
  lp.lr = 0.5;

Since learnlv2 only needs these values to calculate a weight change (see algorithm below), we will use them to do so.
  dW = learnlv2(w,p,[],n,a,[],[],[],gA,[],lp,[])

Network Use

You can create a standard network that uses learnlv2 with newlvq.
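For example, here is a minimal sketch, assuming newlvq's net = newlvq(PR,S1,PC,LR,LF) calling form, where PR is the matrix of input ranges, S1 the number of hidden neurons, PC the class percentages, LR the learning rate, and LF the learning function:

  P = [0 1 2 3; 0 1 1 0];                             % sample 2-element inputs
  net = newlvq(minmax(P),4,[.5 .5],0.01,'learnlv2');  % weights learn with learnlv2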
To prepare the weights of layer i of a custom network to learn with learnlv2 (a combined code sketch follows these steps):

1. Set net.trainFcn to 'trainwb1'. (net.trainParam will automatically become trainwb1's default parameters.)

2. Set net.adaptFcn to 'adaptwb'. (net.adaptParam will automatically become adaptwb's default parameters.)

3. Set each net.inputWeights{i,j}.learnFcn to 'learnlv2'. Set each net.layerWeights{i,j}.learnFcn to 'learnlv2'. (Each weight learning parameter property will automatically be set to learnlv2's default parameters.)

To train the network (or enable it to adapt):

1. Set the net.trainParam (or net.adaptParam) properties as desired.

2. Call train (or adapt).
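Putting these steps together, a minimal sketch (the layer index i = 1, source index j = 1, and the epochs setting are illustrative assumptions about an existing custom network net):

  net.trainFcn = 'trainwb1';                    % step 1
  net.adaptFcn = 'adaptwb';                     % step 2
  net.inputWeights{1,1}.learnFcn = 'learnlv2';  % step 3, for each input weight
  net.layerWeights{1,1}.learnFcn = 'learnlv2';  %         and each layer weight
  net.trainParam.epochs = 100;                  % set properties as desired
  net = train(net,P,T);                         % or: net = adapt(net,P,T)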
Algorithm

learnlv2 calculates the weight change dW for a given neuron from the neuron's input P, output A, output gradient gA, and learning rate LR according to the LVQ2 rule, given i, the index of the neuron whose output a(i) is 1:

  dw(i,:) = +lr*(p-w(i,:))  if gA(i) = 0
          = -lr*(p-w(i,:))  if gA(i) = -1

If gA(i) is -1, then the index j is found of the neuron with the greatest net input n(k), among the neurons whose gA(k) is 1. This neuron's weights are updated as follows:

  dw(j,:) = +lr*(p-w(j,:))
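Written out in plain MATLAB, the rule above amounts to the following sketch (an illustration of the update, not the toolbox source; w is S x R, p is R x 1, and n, a, gA are S x 1, as in the example above):

  dw = zeros(size(w));
  i = find(a == 1);              % index of the winning neuron
  if gA(i) == 0
    dw(i,:) = +lr*(p'-w(i,:));   % correctly classified: move toward p
  elseif gA(i) == -1
    dw(i,:) = -lr*(p'-w(i,:));   % misclassified: move away from p
    k = find(gA == 1);           % neurons of the correct class
    [nmax,m] = max(n(k));        % greatest net input among them
    j = k(m);
    dw(j,:) = +lr*(p'-w(j,:));   % move that neuron toward p
  end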
See Also

learnlv1, adaptwb, trainwb, adapt, train