Hebbian Learning Rule:
- It is an unsupervised learning rule.
- It works with both binary and continuous activation functions.
- It is a single-layer, single-neuron learning rule.
In Hebbian learning, the weight change is calculated as:
$\Delta w = c \cdot O_i \cdot X_j$
where $c$ is the learning constant, $O_i$ is the output of neuron $i$, and $X_j$ is the $j$-th input.
- The initial weight vector is usually taken as 0 (the example below starts from a given nonzero weight).
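The update rule above can be sketched as a small helper function; this is an illustrative sketch (the function name `hebbian_step` and the use of NumPy are assumptions, not from the original):

```python
import numpy as np

def hebbian_step(w, x, c=1.0):
    """One Hebbian weight update with a bipolar binary (sign) activation.

    w : current weight vector
    x : input vector
    c : learning constant
    """
    net = float(np.dot(w, x))       # net input: w . x
    o = 1.0 if net >= 0 else -1.0   # bipolar binary activation O = sgn(net)
    return w + c * o * x            # delta_w = c * O * x
```

With a continuous activation, only the line computing `o` would change (e.g. to `np.tanh(net)`).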
Example of Hebbian Learning Rule:
Determine the weights after one iteration of Hebbian learning for a single-neuron network, starting with initial weight $w = \begin{bmatrix} 1 & -1 \end{bmatrix}^t$ and inputs
$X_1 = \begin{bmatrix} 1 & -2 \end{bmatrix}^t\\ X_2 = \begin{bmatrix} 2 & 3 \end{bmatrix}^t\\ X_3 = \begin{bmatrix} 1 & -1 \end{bmatrix}^t$
Use the bipolar binary activation function, i.e. $O = \operatorname{sgn}(net)$, and learning constant $c = 1$.
Solution:
Step 1:
$net_1 = w_1 \cdot X_1 = \begin{bmatrix} 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} = 3\\ O_1 = \operatorname{sgn}(3) = 1\\ \Delta w^1 = c \cdot O_1 \cdot X_1 = 1 \cdot 1 \cdot \begin{bmatrix} 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}\\ w_2 = w_1 + \Delta w^1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \begin{bmatrix} 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 2 \\ -3 \end{bmatrix}$
Step 2:
$net_2 = w_2 \cdot X_2 = \begin{bmatrix} 2 & -3 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \end{bmatrix} = -5\\ O_2 = \operatorname{sgn}(-5) = -1\\ \Delta w^2 = c \cdot O_2 \cdot X_2 = 1 \cdot (-1) \cdot \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} -2 \\ -3 \end{bmatrix}\\ w_3 = w_2 + \Delta w^2 = \begin{bmatrix} 2 \\ -3 \end{bmatrix} + \begin{bmatrix} -2 \\ -3 \end{bmatrix} = \begin{bmatrix} 0 \\ -6 \end{bmatrix}$
Step 3:
$net_3 = w_3 \cdot X_3 = \begin{bmatrix} 0 & -6 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \end{bmatrix} = 6\\ O_3 = \operatorname{sgn}(6) = 1\\ \Delta w^3 = c \cdot O_3 \cdot X_3 = 1 \cdot 1 \cdot \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\\ w_4 = w_3 + \Delta w^3 = \begin{bmatrix} 0 \\ -6 \end{bmatrix} + \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ -7 \end{bmatrix}$
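The three steps above can be checked with a short script; this is a minimal sketch using NumPy (an assumption, not from the original), with the learning constant $c = 1$ as in the worked solution:

```python
import numpy as np

# Reproduce the worked example: three Hebbian updates with c = 1.
w = np.array([1.0, -1.0])            # initial weight w1
inputs = [np.array([1.0, -2.0]),     # X1
          np.array([2.0, 3.0]),      # X2
          np.array([1.0, -1.0])]     # X3

for x in inputs:
    net = float(np.dot(w, x))        # net input: w . x
    o = 1.0 if net >= 0 else -1.0    # bipolar binary activation O = sgn(net)
    w = w + 1.0 * o * x              # delta_w = c * O * x, with c = 1

print(w)  # [ 1. -7.]  -- matches w_4 in the solution
```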