newp

Create a perceptron.
net = newp(PR,S,TF,LF)
Perceptrons are used to solve simple (i.e. linearly separable) classification problems.
net = newp(PR,S,TF,LF) takes these inputs,
PR - R x 2 matrix of min and max values for R input elements.
S  - Number of neurons.
TF - Transfer function, default = 'hardlim'.
LF - Learning function, default = 'learnp'.
and returns a new perceptron. The transfer function TF can be hardlim or hardlims. The learning function LF can be learnp or learnpn.
Call newp without input arguments to define the network's attributes in a dialog window.
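For example, a call that specifies all four arguments explicitly might look like the following sketch (the particular argument values here are illustrative, not part of the original example):
% Illustrative sketch: a 2-input, 1-neuron perceptron using the
% symmetric hard-limit transfer function and the normalized learning rule.
net = newp([0 1; -2 2],1,'hardlims','learnpn');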
Perceptrons consist of a single layer with the dotprod weight function, the netsum net input function, and the specified transfer function.
The layer has a weight from the input and a bias.
Weights and biases are initialized with initzero.
Adaptation and training are done with adaptwb and trainwb, which both update weight and bias values with the specified learning function. Performance is measured with mae.
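These choices are recorded as properties of the returned network object. Assuming the standard network property names of this toolbox, a quick way to confirm them is a sketch like this:
net = newp([0 1; -2 2],1);
net.inputWeights{1,1}.weightFcn   % 'dotprod'
net.layers{1}.netInputFcn         % 'netsum'
net.layers{1}.transferFcn         % 'hardlim'
net.biases{1}.initFcn             % 'initzero'
net.adaptFcn                      % 'adaptwb'
net.trainFcn                      % 'trainwb'
net.performFcn                    % 'mae'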
This code creates a perceptron layer with one 2-element input (with ranges [0 1] and [-2 2]) and one neuron. (Supplying only two arguments to newp results in the defaults hardlim and learnp being used for the transfer and learning functions.)
net = newp([0 1; -2 2],1);
Here we simulate the network's response to a sequence of inputs P1.
P1 = {[0; 0] [0; 1] [1; 0] [1; 1]};
Y = sim(net,P1)
Here we define a sequence of targets T1 (together P1 and T1 define the operation of an AND gate), and then let the network adapt for 10 passes through the sequence. We then simulate the updated network.
T1 = {0 0 0 1};
net.adaptParam.passes = 10;
net = adapt(net,P1,T1);
Y = sim(net,P1)
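To check that the adapted network has in fact learned the AND mapping, its outputs can be compared with the targets using mae. This is a minimal sketch, assuming the sequences are cell arrays of scalars as above:
E = [T1{:}] - [Y{:}];   % concatenate the cell sequences and take the errors
perf = mae(E)           % reaches 0 once the AND problem is learned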
Now we define a new problem, an OR gate, with batch inputs P2 and targets T2.
P2 = [0 0 1 1; 0 1 0 1];
T2 = [0 1 1 1];
Here we initialize the perceptron (resetting its weight and bias values with init), simulate its output, train for a maximum of 20 epochs, and then simulate it again.
net = init(net);
Y = sim(net,P2)
net.trainParam.epochs = 20;
net = train(net,P2,T2);
Y = sim(net,P2)
Perceptrons can classify linearly separable classes in a finite number of training steps. If input vectors have a large variance in their lengths, learnpn can be faster than learnp.
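As an illustrative sketch (not part of the original example), the OR problem above could be set up with the normalized rule simply by passing learnpn when the network is created:
% Sketch: the same OR gate, trained with the normalized perceptron rule.
net = newp([0 1; 0 1],1,'hardlim','learnpn');
net.trainParam.epochs = 20;
net = train(net,P2,T2);
Y = sim(net,P2)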
See also: sim, init, adapt, train, hardlim, hardlims, learnp, learnpn