learngd
Gradient descent weight/bias learning function
Syntax

[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learngd(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngd(code)
Description

learngd is the gradient descent weight/bias learning function.
learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

W  - S x R weight matrix (or S x 1 bias vector).
P  - R x Q input vectors (or ones(1,Q)).
Z  - S x Q weighted input vectors.
N  - S x Q net input vectors.
A  - S x Q output vectors.
T  - S x Q layer target vectors.
E  - S x Q layer error vectors.
gW - S x R weight gradient with respect to performance.
gA - S x Q output gradient with respect to performance.
D  - S x S neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].

and returns,

dW - S x R weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to learngd's learning parameter, shown here with its default value.
LP.lr - 0.01 - Learning rate.
learngd(code) returns useful information for each code string (a short usage sketch follows this list):
'pnames' - Names of learning parameters.
'pdefaults' - Default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
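For instance, a minimal sketch of querying learngd this way; the comments describe what each call is documented to return:

learngd('pnames')     % names of learngd's learning parameters
learngd('pdefaults')  % default learning parameters (lr = 0.01)
learngd('needg')      % 1, since learngd uses the gradient gW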
Examples

Here we define a random gradient gW for a weight going to a layer with 3 neurons, from an input with 2 elements. We also define a learning rate of 0.5.

gW = rand(3,2);
lp.lr = 0.5;

Since learngd only needs these values to calculate a weight change (see Algorithm below), we will use them to do so.

dW = learngd([],[],[],[],[],[],[],gW,[],[],lp,[])
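The bias form in the syntax above works the same way; a minimal sketch, assuming a bias for a layer of 3 neurons, so the gradient is a 3 x 1 column:

gW = rand(3,1);
lp.lr = 0.5;
db = learngd([],[],[],[],[],[],[],gW,[],[],lp,[])   % db equals lp.lr*gW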
Network Use

You can create a standard network that uses learngd with newff, newcf, or newelm. To prepare the weights and the bias of layer i of a custom network to adapt with learngd (a code sketch of these steps follows this section):
1. Set net.adaptFcn to 'adaptwb'. net.adaptParam will automatically become trainwb's default parameters.
2. Set each net.inputWeights{i,j}.learnFcn to 'learngd'. Set each net.layerWeights{i,j}.learnFcn to 'learngd'. Set net.biases{i}.learnFcn to 'learngd'. Each weight and bias learning parameter property will automatically be set to learngd's default parameters.

To allow the network to adapt:

1. Set net.adaptParam properties to desired values.
2. Call adapt with the network.
See newff or newcf for examples.
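A minimal sketch of those steps; the two-layer architecture, data, and learning rate below are illustrative assumptions, not part of the toolbox documentation:

P = [0 1 2 3; 1 2 1 0];                       % assumed 2-element inputs, 4 samples
T = [0 1 1 0];                                % assumed 1-element targets
net = newff(minmax(P),[3 1],{'tansig','purelin'});
net.adaptFcn = 'adaptwb';                     % step 1: adaptwb drives per-weight learning
net.inputWeights{1,1}.learnFcn = 'learngd';   % step 2: input weights ...
net.layerWeights{2,1}.learnFcn = 'learngd';   % ... layer weights ...
net.biases{1}.learnFcn = 'learngd';           % ... and biases all use learngd
net.biases{2}.learnFcn = 'learngd';
net.biases{1}.learnParam.lr = 0.5;            % raise one learning rate from its 0.01 default
[net,Y,E] = adapt(net,P,T);                   % adapt the network to the data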
Algorithm

learngd calculates the weight change dW for a given neuron from the neuron's input P and error E, and the weight (or bias) learning rate LR, according to gradient descent:

dw = lr*gW
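In other words, the weight change is simply the performance gradient scaled by the learning rate. A quick check, reusing gW and lp from the example above:

dW = learngd([],[],[],[],[],[],[],gW,[],[],lp,[]);
isequal(dW, lp.lr*gW)                         % returns 1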
See Also

learngdm, newff, newcf, adaptwb, trainwb, adapt, train