newpnn
Design a probabilistic neural network
Syntax

net = newpnn(P,T,spread)
Description

Probabilistic neural networks are a kind of radial basis network suitable for classification problems.
net = newpnn(P,T,spread) takes two or three arguments,

P      - R x Q matrix of Q input vectors.
T      - S x Q matrix of Q target class vectors.
spread - Spread of radial basis functions, default = 0.1.

and returns a new probabilistic neural network.
If spread is near zero, the network will act as a nearest neighbor classifier. As spread becomes larger, the designed network will take into account several nearby design vectors.
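For illustration, here is a minimal sketch (not part of the original page; the spread values 0.1 and 3 and the test point 2.4 are arbitrary choices) contrasting a small and a large spread on the same design data.

% With a small spread only the closest design vector matters; with a large
% spread several nearby design vectors contribute to the decision.
P = [1 2 3 4 5 6 7]; Tc = [1 2 3 2 2 3 1];
T = ind2vec(Tc);
netNarrow = newpnn(P,T,0.1);   % behaves like a nearest neighbor classifier
netWide = newpnn(P,T,3);       % pools evidence from several nearby inputs
vec2ind(sim(netNarrow,2.4))    % class of the nearest design input
vec2ind(sim(netWide,2.4))      % class favored by several nearby design inputs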
Examples

Here a classification problem is defined with a set of inputs P and class indices Tc.

P = [1 2 3 4 5 6 7];
Tc = [1 2 3 2 2 3 1];

Here the class indices are converted to target vectors, and a PNN is designed and tested.

T = ind2vec(Tc)
net = newpnn(P,T);
Y = sim(net,P)
Yc = vec2ind(Y)
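As a follow-up sketch (the test inputs below are illustrative and not part of the original example), the designed network can also classify inputs that were not among the design vectors.

P2 = [1.4 4.3 6.9];   % inputs not present in the design set
Y2 = sim(net,P2);
Yc2 = vec2ind(Y2)     % with the default spread, the nearest design vector dominates each decision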
Algorithm

newpnn creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has compet neurons, and calculates its weighted input with dotprod and its net input with netsum. Only the first layer has biases.
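This computation can be reproduced by hand with the same toolbox functions. The sketch below is illustrative only: it reuses the net designed in the example above, uses element-wise multiplication in place of netprod and a plain matrix product in place of dotprod and netsum, and the input value 4.3 is an arbitrary choice.

W1 = net.IW{1,1};                % first-layer weights
b1 = net.b{1};                   % first-layer biases
W2 = net.LW{2,1};                % second-layer weights
p = 4.3;                         % a single input column
a1 = radbas(dist(W1,p) .* b1);   % layer 1: distances, scaled by the biases, through radbas
a2 = compet(W2*a1)               % layer 2: dot product, then the competitive transfer function
isequal(a2, sim(net,p))          % should match the network's own output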
newpnn sets the first-layer weights to P', and the first-layer biases are all set to 0.8326/spread, resulting in radial basis functions that cross 0.5 at weighted inputs of +/- spread. The second-layer weights W2 are set to T.
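These settings can be checked directly on the network designed in the example above. The snippet below is a hedged illustration, not text from the original page; it relies on the standard network object fields net.IW, net.b, and net.LW.

isequal(net.IW{1,1}, P')   % first-layer weights are the transposed input vectors
net.b{1}                   % every bias equals 0.8326/0.1 = 8.326 with the default spread
isequal(full(net.LW{2,1}), full(T))   % second-layer weights are the target vectors
radbas(0.8326)             % approximately 0.5, so the basis functions cross 0.5 at +/- spread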
See Also

sim, ind2vec, vec2ind, newrb, newrbe, newgrnn
References

Wasserman, P. D., Advanced Methods in Neural Computing, New York: Van Nostrand Reinhold, pp. 35-55, 1993.