Neural Network Toolbox

adapt
Allow a neural network to adapt
Syntax

[net,Y,E,Pf,Af] = adapt(net,P,T,Pi,Ai)

To Get Help

Type help network/adapt

Description

[net,Y,E,Pf,Af] = adapt(net,P,T,Pi,Ai) takes,
net - Network.
P  - Network inputs.
T  - Network targets, default = zeros.
Pi - Initial input delay conditions, default = zeros.
Ai - Initial layer delay conditions, default = zeros.
and returns the following after applying the adapt function net.adaptFcn with the adaption parameters net.adaptParam:
net - Updated network.
Y  - Network outputs.
E  - Network errors.
Pf - Final input delay conditions.
Af - Final layer delay conditions.
Note that T is optional and only needs to be used for networks that require targets. Pi and Ai are also optional and only need to be used for networks that have input or layer delays.
adapt's signal arguments can have two formats: cell array or matrix.

The cell array format is easiest to describe. It is most convenient for networks with multiple inputs and outputs, and allows sequences of inputs to be presented:
P  - Ni x TS cell array, each element P{i,ts} is an Ri x Q matrix.
T  - Nt x TS cell array, each element T{i,ts} is a Vi x Q matrix.
Pi - Ni x ID cell array, each element Pi{i,k} is an Ri x Q matrix.
Ai - Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.
Y  - No x TS cell array, each element Y{i,ts} is a Ui x Q matrix.
Pf - Ni x ID cell array, each element Pf{i,k} is an Ri x Q matrix.
Af - Nl x LD cell array, each element Af{i,k} is an Si x Q matrix.
where
Ni = net.numInputs
Nl = net.numLayers
Nt = net.numTargets
No = net.numOutputs
ID = net.numInputDelays
LD = net.numLayerDelays
TS = Number of time steps
Q  = Batch size
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Ui = net.outputs{i}.size
Vi = net.targets{i}.size
Pi, Pf, Ai, and Af are ordered from oldest delay condition to most recent:
Pi{i,k} = input i at time ts = k-ID.
Pf{i,k} = input i at time ts = TS+k-ID.
Ai{i,k} = layer output i at time ts = k-LD.
Af{i,k} = layer output i at time ts = TS+k-LD.
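As a rough illustration of this time ordering (a plain-Python sketch, not Toolbox code, with invented values), the snippet below lines up Pi, P, and Pf on one timeline for a single input with ID = 2 input delays. The final conditions Pf are simply the ID most recent inputs seen:

```python
# Single input, ID = 2 input delay steps (values invented for illustration).
ID = 2
Pi = [0, 0]              # Pi[k] = input at time ts = (k+1) - ID, i.e. ts = -1, 0
P  = [-1, 0, 1, 0, 1]    # P[ts-1] = input at time ts = 1 .. TS
TS = len(P)

timeline = Pi + P        # inputs at times ts = 1-ID .. TS, oldest first
Pf = timeline[-ID:]      # Pf[k] = input at time ts = TS + (k+1) - ID

print(Pf)                # the ID most recent inputs: [0, 1]
```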
The matrix format can be used if only one time step is to be simulated (TS = 1). It is convenient for networks with only one input and output, but can be used with networks that have more.
Each matrix argument is found by storing the elements of the corresponding cell array argument in a single matrix:
P  - (sum of Ri) x Q matrix.
T  - (sum of Vi) x Q matrix.
Pi - (sum of Ri) x (ID*Q) matrix.
Ai - (sum of Si) x (LD*Q) matrix.
Y  - (sum of Ui) x Q matrix.
Pf - (sum of Ri) x (ID*Q) matrix.
Af - (sum of Si) x (LD*Q) matrix.
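To illustrate the packing (a plain-Python sketch, not Toolbox code; the numbers are invented), here is how an initial-condition cell array with two inputs (R1 = 2, R2 = 1 rows), batch size Q = 2, and ID = 2 delay steps flattens into a (sum of Ri) x (ID*Q) matrix: inputs are stacked vertically and delay steps are laid side by side.

```python
Q, ID = 2, 2
# Cell array format: Pi_cell[i][k] is an Ri x Q matrix (input i, delay step k).
Pi_cell = [
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]]],   # input 1: two 2 x 2 matrices
    [[[9, 10]],        [[11, 12]]],         # input 2: two 1 x 2 matrices
]

# Matrix format: (sum of Ri) x (ID*Q).
Pi_mat = []
for i in range(len(Pi_cell)):                # each input i, stacked vertically
    for r in range(len(Pi_cell[i][0])):      # each row of input i
        row = []
        for k in range(ID):                  # delay steps left to right
            row += Pi_cell[i][k][r]
        Pi_mat.append(row)

print(Pi_mat)   # 3 rows (2 + 1) of 4 columns (2 * 2)
```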
Examples

Here two sequences of 12 steps (where T1 is known to depend on P1) are used to define the operation of a filter.

p1 = {-1  0 1 0 1 1 -1  0 -1 1 0 1};
t1 = {-1 -1 1 1 1 2  0 -1 -1 0 1 1};

Here newlin is used to create a layer with an input range of [-1 1], one neuron, input delays of 0 and 1, and a learning rate of 0.5. The linear layer is then simulated.
net = newlin([-1 1],1,[0 1],0.5);

Here the network adapts for one pass through the sequence. The network's mean squared error is displayed. (Since this is the first call of adapt, the default Pi is used.)
[net,y,e,pf] = adapt(net,p1,t1);
mse(e)

Note the errors are quite large. Here the network adapts to another 12 time steps (using the previous Pf as the new initial delay conditions).

p2 = {1 -1 -1 1 1 -1 0 0 0 1 -1 -1};
t2 = {2 0 -2 0 2 0 -1 0 0 1 0 -1};
[net,y,e,pf] = adapt(net,p2,t2,pf);
mse(e)

Here the network adapts for 100 passes through the entire sequence.
p3 = [p1 p2];
t3 = [t1 t2];
net.adaptParam.passes = 100;
[net,y,e] = adapt(net,p3,t3);
mse(e)

The error after 100 passes through the sequence is very small. The network has adapted to the relationship between the input and target signals.
Algorithm

adapt calls the function indicated by net.adaptFcn, using the adaption parameter values indicated by net.adaptParam.
Given an input sequence with TS steps, the network is updated as follows. Each step in the sequence of inputs is presented to the network one at a time. The network's weight and bias values are updated after each step, before the next step in the sequence is presented. Thus the network is updated TS times.
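The per-step loop just described can be sketched in plain Python (a conceptual illustration of the Widrow-Hoff (LMS) rule that the linear layer in the example uses, not the Toolbox implementation): taps at delays 0 and 1, learning rate 0.5, weights and bias updated after every step.

```python
# Plain-Python sketch of per-step Widrow-Hoff adaptation (not Toolbox code).
lr = 0.5
w = [0.0, 0.0, 0.0]   # [weight for p(ts), weight for p(ts-1), bias]

p1 = [-1, 0, 1, 0, 1, 1, -1, 0, -1, 1, 0, 1]
t1 = [-1, -1, 1, 1, 1, 2, 0, -1, -1, 0, 1, 1]

def one_pass(p, t, w):
    """Present the sequence one step at a time, updating w after each step."""
    errors, delayed = [], 0.0            # zero initial delay condition (default Pi)
    for ts in range(len(p)):
        x = [p[ts], delayed, 1.0]        # current input, delayed input, bias input
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = t[ts] - y                    # error for this step
        for j in range(len(w)):          # update before the next step is presented
            w[j] += lr * e * x[j]
        delayed = p[ts]                  # p(ts) becomes the delayed input next step
        errors.append(e)
    return errors

for _ in range(100):                     # 100 passes, as in the example above
    errors = one_pass(p1, t1, w)

mse = sum(e * e for e in errors) / len(errors)
print(mse < 1e-6)                        # True: the error becomes very small
```

Because t1 here equals p1(ts) + p1(ts-1), the weights settle near [1, 1] with a near-zero bias, mirroring the filter the example adapts to.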
See Also

sim, init, train