McCulloch-Pitts Neuron Model

The McCulloch-Pitts neuron, proposed by Warren McCulloch and Walter Pitts in 1943, was the earliest neural network model. It is usually called the M-P neuron. M-P neurons are connected by directed weighted paths.

The activation function of an M-P neuron is binary, that is, at any time step the neuron either fires or does not fire. The weights associated with the communication links may be excitatory ($w$, with $w\gt0$) or inhibitory ($-p$, with $p\gt0$). All excitatory connections entering a particular neuron carry the same weight. The threshold plays a major role in the M-P neuron.

There is a fixed threshold for each neuron, and if the net input to the neuron reaches the threshold, the neuron fires. Also, any nonzero inhibitory input prevents the neuron from firing. M-P neurons are most widely used for realizing logic functions.

$\Rightarrow$ A connection to an M-P neuron is excitatory with weight $w$ ($w\gt0$) or inhibitory with weight $-p$ ($p\gt0$).

Inputs $x_1$ to $x_n$ possess excitatory weighted connections and inputs $x_{n+1}$ to $x_{n+m}$ possess inhibitory weighted connections, so the net input is $y_{in} = w\sum_{i=1}^{n}x_i - p\sum_{j=n+1}^{n+m}x_j$.

Since the firing of the neuron is based upon the threshold, the activation function here is defined as,

$f(y_{in})=\begin{cases} 1, & \text{if } y_{in}\ge\theta \\ 0, & \text{if } y_{in}\lt\theta \end{cases}$
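
For concreteness, the firing rule can be sketched in a few lines of Python. This is only an illustrative sketch: the function name `mp_neuron` and its argument names are assumptions, and the "any nonzero inhibitory input blocks firing" rule follows the description above.

```python
def mp_neuron(excitatory_inputs, inhibitory_inputs, w, theta):
    """Return 1 (fire) if the weighted excitatory sum reaches the
    threshold theta and no inhibitory input is active, else 0."""
    # Absolute inhibition: any nonzero inhibitory input prevents firing.
    if any(x != 0 for x in inhibitory_inputs):
        return 0
    # Net input from the excitatory connections (all share the same weight w).
    y_in = sum(w * x for x in excitatory_inputs)
    # Binary step activation with threshold theta.
    return 1 if y_in >= theta else 0
```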

$\Rightarrow$ The M-P neuron has no particular training algorithm.

$\Rightarrow$ An analysis has to be performed to determine the values of the weights and the threshold.

$\Rightarrow$ Here, the weights of the neuron are set along with the threshold to make the neuron perform a particular logic function.

$\Rightarrow$ The M-P neuron can be used as a building block from which we can model any function or phenomenon that can be represented as a logic function, as sketched below.
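
As an illustration of that last point, the sketch below reuses the hypothetical `mp_neuron` function from above to realize AND, OR, and AND-NOT by hand-picking the weight and threshold. The particular values ($w=1$ throughout, $\theta=2$ for AND, $\theta=1$ for OR and AND-NOT) are standard textbook choices, assumed here rather than taken from the original answer.

```python
# Logic gates built from the mp_neuron sketch above (binary inputs assumed).

def AND(x1, x2):
    # Fires only when both excitatory inputs are 1 (1 + 1 >= 2).
    return mp_neuron([x1, x2], [], w=1, theta=2)

def OR(x1, x2):
    # Fires when at least one excitatory input is 1 (sum >= 1).
    return mp_neuron([x1, x2], [], w=1, theta=1)

def AND_NOT(x1, x2):
    # x1 excitatory, x2 inhibitory: fires only for x1 = 1 and x2 = 0.
    return mp_neuron([x1], [x2], w=1, theta=1)

# Print the truth tables to verify the chosen weights and thresholds.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), AND_NOT(a, b))
```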
