Neural Network Toolbox

newsom

Create a self-organizing map.

Syntax
net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
Description

Self-organizing maps are used to classify input vectors according to how they are grouped in the input space. They differ from competitive layers in that neighboring neurons in a self-organizing map learn to recognize neighboring sections of the input space.
net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) takes,
PR - R x 2 matrix of min and max values for R input elements.
Di - Size of ith layer dimension, default = [5 8].
TFCN - Topology function, default ='hextop'.
DFCN - Distance function, default ='linkdist'.
OLR - Ordering phase learning rate, default = 0.9.
OSTEPS - Ordering phase steps, default = 1000.
TLR - Tuning phase learning rate, default = 0.02.
TND - Tuning phase neighborhood distance, default = 1.
The topology function TFCN can be hextop, gridtop, or randtop. The distance function DFCN can be linkdist, dist, or mandist.
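For instance, a map with a rectangular grid topology and Manhattan (city block) neuron distances could be created as follows. This is a sketch; the input ranges and layer dimensions here are arbitrary illustrative values, not defaults.

```matlab
% SOM over a 2-element input space with elements ranging over [0 1],
% a 4x4 grid of neurons, rectangular topology, Manhattan distance.
net = newsom([0 1; 0 1],[4 4],'gridtop','mandist');
```

Arguments after the ones supplied keep their defaults, so the learning rates and phase lengths above remain OLR = 0.9, OSTEPS = 1000, TLR = 0.02, and TND = 1.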
Self-organizing maps consist of a single layer with the negdist weight function, the netsum net input function, and the compet transfer function.
The layer has a weight from the input, but no bias. The weight is initialized with midpoint.
Adaptation and training are done with adaptwb and trainwb1, which both update the weights with learnsom.
Examples

The input vectors defined below are distributed over a two-dimensional input space varying over [0 2] and [0 1]. This data will be used to train a SOM with dimensions [3 5].
P = [rand(1,400)*2; rand(1,400)];
net = newsom([0 2; 0 1],[3 5]);
plotsom(net.layers{1}.positions)
Here the SOM is trained, and the input vectors are plotted together with the map that the SOM's weights have formed.
net = train(net,P);
plot(P(1,:),P(2,:),'.g','markersize',20)
hold on
plotsom(net.iw{1,1},net.layers{1}.distances)
hold off
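Once trained, the map can classify input vectors with sim. A sketch, assuming the trained net and inputs P from the example above:

```matlab
% Simulate the trained SOM; each column of a is a competitive
% output vector with a single 1 at the winning neuron.
a = sim(net,P);
% Convert each output vector to the index of the winning neuron.
c = vec2ind(a);
```

Inputs that are close together in the input space will tend to map to the same or neighboring neuron indices.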
See Also

sim, init, adapt, train, adaptwb, trainwb1