Neural Network Toolbox

trainwb

By-weight-and-bias network training function

Syntax

[net,tr] = trainwb(net,Pd,Tl,Ai,Q,TS,VV)

info = trainwb(code)

Description

trainwb is a network training function that updates each weight and bias value according to its learning function.

trainwb(net,Pd,Tl,Ai,Q,TS,VV) takes these inputs,

  net - Neural network.
  Pd  - Delayed input vectors.
  Tl  - Layer target vectors.
  Ai  - Initial input delay conditions.
  Q   - Batch size.
  TS  - Time steps.
  VV  - Either an empty matrix [] or a structure of validation vectors.

and returns,

  net - Trained network.
  tr  - Training record of various values over each epoch (such as the epoch
        number and the training and validation performance).

Training occurs according to trainwb's training parameters, shown here with their default values:

  net.trainParam.epochs    100  Maximum number of epochs to train
  net.trainParam.goal        0  Performance goal
  net.trainParam.max_fail    5  Maximum validation failures
  net.trainParam.show       25  Epochs between displays (NaN for no displays)
  net.trainParam.time      inf  Maximum time to train in seconds

Dimensions for these variables are:

  Pd - Nl x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix.
  Tl - Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix or [].
  Ai - Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.

where

  Ni  = net.numInputs
  Nl  = net.numLayers
  LD  = net.numLayerDelays
  Si  = net.layers{i}.size
  Vi  = net.targets{i}.size
  Dij = net.inputs{j}.size * length(net.inputWeights{i,j}.delays)

If VV is not [], it must be a structure of validation vectors,

  VV.PD - Validation delayed inputs.
  VV.Tl - Validation layer targets.
  VV.Ai - Validation initial input conditions.
  VV.Q  - Validation batch size.
  VV.TS - Validation time steps.

which is used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row.
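
Users do not normally build this structure by hand; it is assembled by train from validation data supplied in the call to train. As a rough sketch (assuming the toolbox's train(net,P,T,Pi,Ai,VV) calling convention; the linear-fit data and parameter values are illustrative):

   % Sketch only: early stopping via validation data passed to train,
   % which builds the validation structure handed on to trainwb.
   net = newlin([-1 1],1);                 % linear layer: 1-element input, 1 neuron
   net.trainFcn = 'trainwb';
   P = -1:0.25:1;    T = 2*P + 0.5;        % training data
   VV.P = -1:0.1:1;  VV.T = 2*VV.P + 0.5;  % validation data
   net.trainParam.max_fail = 5;            % allowed validation failures
   [net,tr] = train(net,P,T,[],[],VV);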

trainwb(code) returns useful information for each code string:

  'pnames'    - Names of training parameters.
  'pdefaults' - Default training parameters.
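
For example (a sketch; the exact display format depends on the toolbox version):

   % Sketch: query trainwb for information about itself
   trainwb('pnames')       % names of the training parameters
   trainwb('pdefaults')    % structure of default training parameter values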

Network Use

You can create a standard network that uses trainwb with newp or newlin.
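
For instance, a perceptron created with newp can be trained this way (a minimal sketch; the logical AND data and parameter values are illustrative):

   % Sketch: standard perceptron trained with trainwb on logical AND
   net = newp([0 1; 0 1],1);    % 2-element inputs in [0,1], one neuron
   net.trainFcn = 'trainwb';    % use the by-weight-and-bias training function
   P = [0 0 1 1; 0 1 0 1];      % input vectors (one per column)
   T = [0 0 0 1];               % targets
   net.trainParam.epochs = 20;
   net = train(net,P,T);
   Y = sim(net,P)               % check the trained network's outputs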

To prepare a custom network to be trained with trainwb:

  1. Set net.trainFcn to 'trainwb'. (This will set net.trainParam to trainwb's default parameters.)
  2. Set each net.inputWeights{i,j}.learnFcn to a learning function. Set each net.layerWeights{i,j}.learnFcn to a learning function. Set each net.biases{i}.learnFcn to a learning function. (Weight and bias learning parameters will automatically be set to default values for the given learning function.)
To train the network:

  1. Set net.trainParam properties to desired values.
  2. Set weight and bias learning parameters to desired values.
  3. Call train (see the sketch below).
See newp and newlin for training examples.
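
Taken together, the preparation and training steps above might look like the following sketch for a simple linear network (the learning function learnwh and all parameter values here are illustrative choices, not requirements):

   % Sketch of the steps above applied to a simple linear network
   net = newlin([0 1],1);                      % 1-element input, 1 neuron
   net.trainFcn = 'trainwb';                   % preparation step 1
   net.inputWeights{1,1}.learnFcn = 'learnwh'; % preparation step 2: weight learning function
   net.biases{1}.learnFcn = 'learnwh';         %                     bias learning function
   net.trainParam.epochs = 200;                % training step 1: training parameters
   net.trainParam.goal   = 0.01;
   net.inputWeights{1,1}.learnParam.lr = 0.1;  % training step 2: learning parameters
   net.biases{1}.learnParam.lr = 0.1;
   P = [0 .2 .4 .6 .8 1];  T = 3*P - 1;        % simple linear fit problem
   net = train(net,P,T);                       % training step 3: call train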

Algorithm

Each weight and bias updates according to its learning function after each epoch (one pass through the entire set of input vectors).

Training stops when any of these conditions occur:

  1. The maximum number of epochs (repetitions) is reached.
  2. Performance has been minimized to the goal.
  3. The maximum amount of time has been exceeded.
  4. Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
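
The per-epoch update and the stopping checks can be illustrated outside the toolbox with a plain-MATLAB sketch that uses the perceptron rule as the weight and bias learning rule (an illustration of the idea, not the shipped implementation):

   % Illustration only: batch weight/bias updates once per epoch with
   % goal and maximum-epoch stopping checks.
   P = [0 0 1 1; 0 1 0 1];            % inputs, one column per vector
   T = [0 0 0 1];                     % targets (logical AND)
   W = zeros(1,2);  b = 0;            % weight matrix and bias
   maxEpochs = 100;  goal = 0;
   for epoch = 1:maxEpochs
       A = double((W*P + b) >= 0);    % outputs for the whole input set
       E = T - A;                     % errors
       if mean(abs(E)) <= goal        % stop: performance goal reached
           break
       end
       W = W + E*P';                  % perceptron rule, applied after the epoch
       b = b + sum(E);
   end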

See Also

newp, newlin, train


