Neural Network Toolbox

learnsom
Self-organizing map weight learning function
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom(code)
learnsom is the self-organizing map weight learning function.

learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W  - S x R weight matrix (or S x 1 bias vector).
P  - R x Q input vectors (or ones(1,Q)).
Z  - S x Q weighted input vectors.
N  - S x Q net input vectors.
A  - S x Q output vectors.
T  - S x Q layer target vectors.
E  - S x Q layer error vectors.
gW - S x R weight gradient with respect to performance.
gA - S x Q output gradient with respect to performance.
D  - S x S neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - S x R weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to learnsom's learning parameters, shown here with their default values.

LP.order_lr    0.9  - Ordering phase learning rate.
LP.order_steps 1000 - Ordering phase steps.
LP.tune_lr     0.02 - Tuning phase learning rate.
LP.tune_nd     1    - Tuning phase neighborhood distance.
learnsom(code) returns useful information for each code string:

'pnames'    - Names of learning parameters.
'pdefaults' - Default learning parameters.
'needg'     - Returns 1 if this function uses gW or gA.
Here we define a random input P, output A, and weight matrix W for a layer with a 2-element input and 6 neurons. We also calculate the positions and distances for the neurons, which are arranged in a 2x3 hexagonal pattern. Then we define the four learning parameters.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;

Since
learnsom
only needs these values to calculate a weight change (see algorithm below), we will use them to do so.
ls = [];
[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)

You can create a standard network that uses
learnsom
with newsom
.
To prepare the weights and the layer's topology of a custom network so that it can learn with learnsom:

1. Set net.trainFcn to 'trainwb1'. (net.trainParam will automatically become trainwb1's default parameters.)
2. Set net.adaptFcn to 'adaptwb'. (net.adaptParam will automatically become adaptwb's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnsom'. Set each net.layerWeights{i,j}.learnFcn to 'learnsom'. Set net.biases{i}.learnFcn to 'learnsom'. (Each weight learning parameter property will automatically be set to learnsom's default parameters.)

To train the network (or enable it to adapt):

1. Set the net.trainParam (net.adaptParam) properties to desired values.
2. Call train (adapt).
learnsom calculates the weight change dW for a given neuron from the neuron's input P, activation A2, and learning rate LR:

dw = lr*a2*(p'-w)

where the activation A2 is found from the layer output A, the neuron distances D, and the current neighborhood size ND:
a2(i,q) = 1,   if a(i,q) = 1
        = 0.5, if a(j,q) = 1 and D(i,j) <= nd
        = 0,   otherwise

The learning rate LR and neighborhood size ND are altered through two phases: an ordering phase and a tuning phase.
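As an illustrative sketch only (a NumPy translation, not part of the toolbox), the A2 activation and the weight change for a single input column can be computed like this. The helper name `som_weight_change` is hypothetical:

```python
import numpy as np

def som_weight_change(w, p, a, d, lr, nd):
    """Hypothetical sketch of learnsom's update for one input column.

    w  : (S, R) weight matrix
    p  : (R,)   input vector
    a  : (S,)   layer output (1 for the winning neuron, 0 elsewhere)
    d  : (S, S) neuron-to-neuron distances
    lr : learning rate
    nd : current neighborhood size
    """
    winner = np.argmax(a)                      # index i where a(i) = 1
    a2 = np.where(d[winner] <= nd, 0.5, 0.0)   # neighbors within nd get 0.5
    a2[winner] = 1.0                           # the winning neuron gets 1
    # dw = lr * a2 * (p' - w), applied row by row
    return lr * a2[:, None] * (p[None, :] - w)
```

Each neuron's weight vector moves toward the input in proportion to its A2 value, so the winner moves fastest and its neighbors follow at half the rate.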
The ordering phase lasts as many steps as LP.order_steps
. During this phase LR
is adjusted from LP.order_lr
down to LP.tune_lr
, and ND
is adjusted from the maximum neuron distance down to 1. It is during this phase that neuron weights are expected to order themselves in the input space consistent with the associated neuron positions.
During the tuning phase LR
decreases slowly from LP.tune_lr
and ND
is always set to LP.tune_nd
. During this phase the weights are expected to spread out relatively evenly over the input space while retaining their topological order found during the ordering phase.
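The two phases amount to a schedule for LR and ND. The sketch below is one plausible schedule consistent with the description above (linear decay during ordering, slow 1/step decay during tuning); the toolbox's exact decay formulas may differ, and the helper name is hypothetical:

```python
def som_schedule(step, max_dist, order_lr=0.9, order_steps=1000,
                 tune_lr=0.02, tune_nd=1.0):
    """One plausible lr/nd schedule for the two learning phases.

    Ordering phase (step <= order_steps): lr falls linearly from
    order_lr to tune_lr, and nd falls from max_dist down to 1.
    Tuning phase: lr decays slowly below tune_lr, nd stays at tune_nd.
    NOTE: illustrative only; the toolbox's actual formulas may differ.
    """
    if step <= order_steps:
        frac = 1.0 - step / order_steps        # 1 at start, 0 at end
        lr = tune_lr + (order_lr - tune_lr) * frac
        nd = 1.0 + (max_dist - 1.0) * frac
    else:
        lr = tune_lr * order_steps / step      # slow 1/step decay
        nd = tune_nd
    return lr, nd
```

Defaults here mirror the LP values listed earlier (order_lr 0.9, order_steps 1000, tune_lr 0.02, tune_nd 1).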
See Also: adaptwb, trainwb, adapt, train