hardlim
Hard limit transfer function

A = hardlim(N)
info = hardlim(code)
The hard limit transfer function forces a neuron to output a 1 if its net input reaches a threshold; otherwise it outputs 0. This allows a neuron to make a decision or classification: it can say yes or no. This kind of neuron is often trained with the perceptron learning rule.
hardlim is a transfer function. Transfer functions calculate a layer's output from its net input.
hardlim(N) takes one input,
N - S x Q matrix of net input (column) vectors,
and returns 1 where N is 0 or positive, and 0 elsewhere.
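For example, applied to a small matrix of assumed net input values (a minimal sketch, not from the original page), hardlim returns 1 wherever an entry is 0 or positive:
N = [ 0.5 -0.2  0.0;
     -1.0  3.0 -4.0 ];
A = hardlim(N)
% A =
%      1     0     1
%      0     1     0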
hardlim(code) returns useful information for each code string.
Here is the code to create a plot of the hardlim transfer function.
n = -5:0.1:5;
a = hardlim(n);
plot(n,a)
You can create a standard network that uses hardlim by calling newp.
To change a network so that a layer uses hardlim, set net.layers{i}.transferFcn to 'hardlim'.
In either case, call sim to simulate the network with hardlim.
See newp for simulation examples.
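For instance, both approaches might look like the following sketch (the input range, layer index, and input vector are assumed for illustration):
net = newp([-2 2; -2 2],1);             % perceptron network; its layer uses hardlim
net.layers{1}.transferFcn = 'hardlim';  % explicitly select hardlim for layer 1
p = [1; -1];                            % assumed input vector
a = sim(net,p)                          % simulate the network; output is 0 or 1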
hardlim(n) = 1, if n >= 0; 0 otherwise.
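As a sketch, this rule is equivalent to an elementwise comparison against zero in MATLAB (sample values assumed; not the toolbox implementation):
n = [-3 -0.5 0 0.5 3];   % assumed sample net inputs
a = double(n >= 0)       % gives 0 0 1 1 1, the same as hardlim(n)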
sim, hardlims