Neural Network Toolbox

newlvq
Create a learning vector quantization network
net = newlvq(PR,S1,PC,LR,LF)
LVQ networks are used to solve classification problems.
net = newlvq(PR,S1,PC,LR,LF) takes these inputs,
PR - R x 2 matrix of min and max values for R input elements.
S1 - Number of hidden neurons.
PC - S2 element vector of typical class percentages.
LR - Learning rate, default = 0.01.
LF - Learning function, learnlv1 or learnlv2; default = 'learnlv2'.
and returns a new LVQ network.
newlvq creates a two-layer network. The first layer uses the compet transfer function, calculates weighted inputs with negdist, and net input with netsum. The second layer has purelin neurons, calculates weighted input with dotprod, and net input with netsum. Neither layer has biases.
First layer weights are initialized with midpoint. The second layer weights are set so that each output neuron i has unit weights coming to it from PC(i) percent of the hidden neurons.
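As an illustrative sketch of the two-layer architecture described above (not the toolbox code; the function name `lvq_forward` and the weight values are hypothetical, written in Python/NumPy rather than MATLAB):

```python
import numpy as np

def lvq_forward(x, W1, W2):
    """Sketch of the forward pass of an LVQ network like newlvq's.

    W1: (S1, R) first-layer prototype weights.
    W2: (S2, S1) second-layer class-assignment weights (0/1 entries).
    Layer 1: negative Euclidean distance (negdist-style net input)
    followed by a competitive transfer (compet-style one-hot winner).
    Layer 2: linear (purelin-style) dot product mapping the winning
    hidden neuron to its assigned class.
    """
    n1 = -np.linalg.norm(W1 - x, axis=1)   # negdist-style net input
    a1 = np.zeros_like(n1)
    a1[np.argmax(n1)] = 1.0                # compet: winner takes all
    a2 = W2 @ a1                           # linear second layer
    return a1, a2

# Hypothetical weights: 4 hidden neurons, first two assigned to class 1,
# last two to class 2 (unit weights, as the description above states).
W1 = np.array([[0., 0.], [2., 2.], [5., 5.], [7., 7.]])
W2 = np.array([[1., 1., 0., 0.],
               [0., 0., 1., 1.]])
a1, a2 = lvq_forward(np.array([6., 6.]), W1, W2)  # nearest prototype wins
```

Here a2 is the one-hot class vector produced by the second layer for the winning prototype.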
Adaptation and training are done with adaptwb and trainwb1, which both update the first-layer weights with the specified learning function.
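The standard LVQ1 rule (the algorithm behind learnlv1) can be sketched as follows. This is a minimal Python/NumPy illustration, not the toolbox implementation; the function name `learnlv1_step` and the sample values are hypothetical:

```python
import numpy as np

def learnlv1_step(W1, classes, x, target_class, lr=0.01):
    """One LVQ1-style update of the first-layer prototype weights.

    classes[i] is the class the (fixed) second layer assigns to hidden
    neuron i. The winning prototype moves toward x when its class
    matches the target, and away from x otherwise; all other
    prototypes are left unchanged.
    """
    i = int(np.argmin(np.linalg.norm(W1 - x, axis=1)))  # winner
    sign = 1.0 if classes[i] == target_class else -1.0
    W1 = W1.copy()
    W1[i] += sign * lr * (x - W1[i])
    return W1

# Hypothetical example: two prototypes, one per class.
W1 = np.array([[0., 0.], [4., 4.]])
classes = [1, 2]
x = np.array([1., 1.])
W_correct = learnlv1_step(W1, classes, x, target_class=1)  # moves toward x
W_wrong = learnlv1_step(W1, classes, x, target_class=2)    # moves away
```

learnlv2 refines this rule (it adjusts both the winner and the runner-up under a window condition), but the move-toward/move-away idea is the same.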
The input vectors P and target classes Tc below define a classification problem to be solved by an LVQ network.
P = [-3 -2 -2 0 0 0 0 +2 +2 +3; ...
      0 +1 -1 +2 +1 -1 -2 +1 -1 0];
Tc = [1 1 1 2 2 2 2 1 1 1];
The target classes Tc are converted to target vectors T. Then an LVQ network is created (with input ranges obtained from P, 4 hidden neurons, and class percentages of 0.6 and 0.4) and is trained.
T = ind2vec(Tc);
net = newlvq(minmax(P),4,[.6 .4]);
net = train(net,P,T);
The resulting network can be tested.
Y = sim(net,P)
Yc = vec2ind(Y)
sim, init, adapt, train, adaptwb, trainwb1, learnlv1, learnlv2