learnlv1

LVQ1 weight learning function

Syntax
[dW,LS] = learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnlv1(code)
Description

learnlv1 is the LVQ1 weight learning function.
learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - S x R weight matrix (or S x 1 bias vector).
P - R x Q input vectors (or ones(1,Q)).
Z - S x Q weighted input vectors.
N - S x Q net input vectors.
A - S x Q output vectors.
T - S x Q layer target vectors.
E - S x Q layer error vectors.
gW - S x R weight gradient with respect to performance.
gA - S x Q output gradient with respect to performance.
D - S x S neuron distances.
LP - Learning parameters (see LP.lr below).
LS - Learning state, initially should be = [].
and returns,
dW - S x R weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to learnlv1's learning parameter, shown here with its default value.
LP.lr - 0.01 - Learning rate.
learnlv1(code) returns useful information for each code string:
'pnames' - Names of learning parameters.
'pdefaults' - Default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
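For example, the following calls (an illustrative check, not part of the original reference text) query the parameter names and defaults:
names = learnlv1('pnames')            % expected to name the single parameter, lr
defaults = learnlv1('pdefaults')      % expected to return the default, lr = 0.01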
Examples

Here we define a random input P, output A, weight matrix W, and output gradient gA for a layer with a 2-element input and 3 neurons.
We also define the learning rate LR.
p = rand(2,1);
w = rand(3,2);
a = compet(negdist(w,p));
gA = [-1;1;1];
lp.lr = 0.5;

Since learnlv1 only needs these values to calculate a weight change (see algorithm below), we will use them to do so.

dW = learnlv1(w,p,[],[],a,[],[],[],gA,[],lp,[])

Network Use

You can create a standard network that uses learnlv1 with newlvq (a minimal sketch appears after the steps below). To prepare the weights of layer i of a custom network to learn with learnlv1:
1. Set net.trainFcn to 'trainwb1'. (net.trainParam will automatically become trainwb1's default parameters.)
2. Set net.adaptFcn to 'adaptwb'. (net.adaptParam will automatically become adaptwb's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnlv1'. Set each net.layerWeights{i,j}.learnFcn to 'learnlv1'. (Each weight learning parameter property will automatically be set to learnlv1's default parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
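A minimal sketch of the standard route through newlvq, assuming made-up two-class data; the data, class percentages, and epoch setting here are illustrative only:
p2 = rand(2,10);                      % hypothetical: ten random 2-element inputs
tc = [1 1 1 2 2 2 1 2 1 2];           % hypothetical class indices (five per class)
t = ind2vec(tc);                      % convert class indices to target vectors
net = newlvq(minmax(p2),4,[.5 .5],0.01,'learnlv1');
net.trainParam.epochs = 50;           % assumed training parameter; set as desired
net = train(net,p2,t);                % train the network (adapt could be used instead)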
Algorithm

learnlv1 calculates the weight change dW for a given neuron from the neuron's input P, output A, output gradient gA, and learning rate LR, according to the LVQ1 rule, given i, the index of the neuron whose output a(i) is 1:
dw(i,:) = +lr*(p-w(i,:)) if gA(i) =  0
        = -lr*(p-w(i,:)) if gA(i) = -1
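As a quick check (an illustrative sketch, not from the reference page), the rule can be applied by hand to the variables defined in the example above; dw_manual should match row i of the dW returned by learnlv1:
i = find(a == 1);                     % index of the winning neuron, a(i) = 1
if gA(i) == -1                        % misclassified: move weight row away from p
    dw_manual = -lp.lr*(p' - w(i,:));
else                                  % otherwise (gA(i) = 0 in the rule): move toward p
    dw_manual = +lp.lr*(p' - w(i,:));
end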
See Also

learnlv2, adaptwb, trainwb, adapt, train