learnpn
Normalized perceptron weight/bias learning function
Syntax

[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnpn(code)
Description

learnpn is a weight/bias learning function. It can result in faster learning than learnp when input vectors have widely varying magnitudes.
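To see why, here is a rough sketch (the input values are illustrative assumptions) of the normalization learnpn applies, using the pn expression from the Algorithm section below. Elements of widely varying magnitude are scaled into a bounded range before the weight change is computed:

% Illustrative sketch: normalize an input vector with elements of
% widely varying magnitudes, as learnpn does (see Algorithm below).
p  = [0.01; 100];              % raw input vector (assumed values)
pn = p / sqrt(1 + sum(p.^2));  % normalized input used by learnpn
disp(pn)                       % every element of pn is now bounded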
learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W  - S x R weight matrix (or S x 1 bias vector).
P  - R x Q input vectors (or ones(1,Q)).
Z  - S x Q weighted input vectors.
N  - S x Q net input vectors.
A  - S x Q output vectors.
T  - S x Q layer target vectors.
E  - S x Q layer error vectors.
gW - S x R weight gradient with respect to performance.
gA - S x Q output gradient with respect to performance.
D  - S x S neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - S x R weight (or bias) change matrix.
LS - New learning state.
learnpn(code) returns useful information for each code string:
'pnames' - Names of learning parameters.
'pdefaults' - Default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
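For instance, these calls query the codes above (since learnpn has no learning parameters, the first two are expected to return empty values):

learnpn('pnames')     % names of learning parameters (none)
learnpn('pdefaults')  % default learning parameters ([])
learnpn('needg')      % 1 if learnpn uses gW or gA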
Examples

Here we define a random input P and error E for a layer with a two-element input and three neurons.

p = rand(2,1);
e = rand(3,1);

Since learnpn only needs these values to calculate a weight change (see algorithm below), we will use them to do so.

dW = learnpn([],p,[],[],[],[],e,[],[],[],[],[])

Network Use

You can create a standard network that uses learnpn with newp.
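For example, a minimal sketch (the input ranges and single-neuron size are assumptions) that creates a perceptron whose weights and bias learn with learnpn:

% Sketch: a perceptron with a 2-element input, each element in [-2,2],
% one neuron, hardlim transfer, and learnpn as its learning function.
net = newp([-2 2; -2 2], 1, 'hardlim', 'learnpn');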
To prepare the weights and the bias of layer i of a custom network to learn with learnpn:
1. Set net.trainFcn to 'trainwb'. (net.trainParam will automatically become trainwb's default parameters.)
2. Set net.adaptFcn to 'adaptwb'. (net.adaptParam will automatically become adaptwb's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnpn'. Set each net.layerWeights{i,j}.learnFcn to 'learnpn'. Set net.biases{i}.learnFcn to 'learnpn'. (Each weight and bias learning parameter property will automatically become the empty matrix, since learnpn has no learning parameters.)
4. Set net.trainParam (net.adaptParam) properties to desired values.
5. Call train (adapt). A configuration sketch follows below.
See newp for adaption and training examples.
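As a hedged sketch of these steps, assuming a single-layer perceptron created with newp (layer index 1, the input ranges, the epoch count, and the training data are all illustrative assumptions; a single-layer perceptron has no layer weights, so that step is omitted):

% Sketch: prepare layer 1 of a perceptron to learn with learnpn.
net = newp([-2 2; -2 2], 1);                % assumed starting network
net.trainFcn = 'trainwb';                   % step 1
net.adaptFcn = 'adaptwb';                   % step 2
net.inputWeights{1,1}.learnFcn = 'learnpn'; % step 3 (no layer weights here)
net.biases{1}.learnFcn = 'learnpn';
net.trainParam.epochs = 20;                 % step 4 (assumed value)
P = [0 0 1 1; 0 1 0 1];                     % step 5: illustrative data
T = [0 1 1 1];
net = train(net, P, T);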
Algorithm

learnpn calculates the weight change dW for a given neuron from the neuron's input P and error E according to the normalized perceptron learning rule:
pn = p / sqrt(1 + p(1)^2 + p(2)^2 + ... + p(R)^2)

dw = 0,    if e = 0
   = pn',  if e = 1
   = -pn', if e = -1

The expression for dW can be summarized as:

dw = e*pn'

Limitations

Perceptrons do have one real limitation. The set of input vectors must be linearly separable if a solution is to be found. That is, if the input vectors with targets of 1 cannot be separated by a line or hyperplane from the input vectors associated with values of 0, the perceptron will never be able to classify them correctly.
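A minimal sketch of this rule for one input vector (the values of p and e are illustrative; for a layer of S neurons and R inputs, e is S x 1 and the outer product e*pn' yields the S x R change matrix dW):

% Sketch of the normalized perceptron rule.
p  = [0.5; 40];                % R x 1 input vector (assumed values)
e  = [1; 0; -1];               % S x 1 error vector, entries in {-1,0,1}
pn = p / sqrt(1 + sum(p.^2));  % normalize the input
dW = e * pn';                  % S x R weight change, dw = e*pn'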
See Also

learnp, newp, adaptwb, trainwb, adapt, train