newrb

Syntax
net = newrb(P,T,goal,spread)
Description

Radial basis networks can be used to approximate functions. newrb adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.
newrb(P,T,goal,spread) takes two to four arguments,

    P      - R x Q matrix of Q input vectors.
    T      - S x Q matrix of Q target class vectors.
    goal   - Mean squared error goal, default = 0.0.
    spread - Spread of radial basis functions, default = 1.0.

and returns a new radial basis network.
The larger the spread is, the smoother the function approximation will be. Too large a spread means a lot of neurons will be required to fit a fast-changing function. Too small a spread means many neurons will be required to fit a smooth function, and the network may not generalize well. Call newrb with different spreads to find the best value for a given problem.
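As a quick illustration (a sketch, not from the original documentation; the inputs and targets are borrowed from the example below), one might train with several candidate spreads and check how each network behaves between the training points:

    P = [1 2 3]; T = [2.0 4.1 5.9];
    for spread = [0.5 1.0 2.0]
        net = newrb(P,T,0.0,spread);
        Y = sim(net,1:0.25:3)       % response between training points
    end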
Examples

Here we design a radial basis network given inputs P and targets T.

    P = [1 2 3];
    T = [2.0 4.1 5.9];
    net = newrb(P,T);

Here the network is simulated for a new input.

    P = 1.5;
    Y = sim(net,P)
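To see how well the network generalizes, one could evaluate it over a dense input range and compare against the training targets (this plotting step is an assumption, not part of the original example):

    P2 = 0:0.1:4;
    Y2 = sim(net,P2);
    plot(P2,Y2,'-',[1 2 3],[2.0 4.1 5.9],'o')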
Algorithm

newrb creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net input with netsum. Both layers have biases.
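These layer functions can be checked on the network object itself. The following sketch assumes the standard network-object fields (transferFcn, netInputFcn, weightFcn); the comments show the values one would expect from the description above:

    net = newrb([1 2 3],[2.0 4.1 5.9]);
    net.layers{1}.transferFcn        % 'radbas'
    net.inputWeights{1,1}.weightFcn  % 'dist'
    net.layers{1}.netInputFcn        % 'netprod'
    net.layers{2}.transferFcn        % 'purelin'
    net.layerWeights{2,1}.weightFcn  % 'dotprod'
    net.layers{2}.netInputFcn        % 'netsum'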
Initially the radbas layer has no neurons. The following steps are repeated until the network's mean squared error falls below goal.

    1. The network is simulated.
    2. The input vector with the greatest error is found.
    3. A radbas neuron is added with weights equal to that vector.
    4. The purelin layer weights are redesigned to minimize error.
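The loop below is a minimal plain-MATLAB sketch of these steps, not newrb's actual implementation: it omits the output bias, and the constant 0.8326/spread is the standard radbas scaling that makes a neuron's response fall to 0.5 at a distance of spread from its center.

    P = [1 2 3]; T = [2.0 4.1 5.9];           % training data
    goal = 1e-6; spread = 1.0;
    b = 0.8326/spread;                        % radbas bias
    Q = size(P,2);
    centers = zeros(0,size(P,1));             % one row per hidden neuron
    E = T; mse = mean(mean(E.^2));
    while mse > goal & size(centers,1) < Q
        [m,i] = max(sum(E.^2,1));             % input vector with greatest error
        centers = [centers; P(:,i)'];         % add a radbas neuron centered there
        N = size(centers,1);
        D = zeros(N,Q);                       % distances from each center to each input
        for k = 1:N
            D(k,:) = sqrt(sum((P - centers(k,:)'*ones(1,Q)).^2,1));
        end
        A = exp(-(b*D).^2);                   % radbas layer output
        W = T/A;                              % least-squares purelin weights
        E = T - W*A;                          % updated error
        mse = mean(mean(E.^2));
    end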
See Also

sim, newrbe, newgrnn, newpnn