Explain perceptron convergence theorem

Answer:

Perceptron Convergence Theorem:

In the classification of linearly separable patterns belonging to two classes, the training task for the classifier is to find a weight vector w such that:

\(w^Tx>0 \quad \text{for each } x\in X_1\)

\(w^Tx<0 \quad \text{for each } x\in X_2\)
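The separability condition above can be checked directly. The following is a minimal sketch; the weight vector and the two sample patterns are hypothetical values chosen for illustration, and each pattern is augmented with a leading +1 for the bias input:

```python
import numpy as np

# A hypothetical solution weight vector (first component plays the role of a bias).
w = np.array([-0.5, 1.0, 1.0])

# Augmented patterns: the leading +1 is the bias input.
X1 = [np.array([1.0, 1.0, 1.0])]   # pattern from class X1
X2 = [np.array([1.0, 0.0, 0.0])]   # pattern from class X2

# w solves the training task if w.T x > 0 on X1 and w.T x < 0 on X2.
separates = all(w @ x > 0 for x in X1) and all(w @ x < 0 for x in X2)
print(separates)  # -> True
```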

Completion of training with the fixed-correction training rule, for any initial weight vector and any correction increment constant, leads to the following weights:

\(w^* = w^{k_0} = w^{k_0+1} = w^{k_0+2} = \dots\)

with \(w^*\) a solution vector for the inequalities above.

The integer \(k_0\) is the training step number starting at which no more misclassifications occur, and thus no further weight adjustments take place (for \(k \ge k_0\)).

This theorem is called the "Perceptron Convergence Theorem".

The Perceptron Convergence Theorem states that a classifier for two linearly separable classes of patterns can always be trained in a finite number of training steps.

In summary, the training of a single discrete perceptron two-class classifier requires a change of weights if and only if a misclassification occurs.

If the reason for misclassification is \(w^Tx<0\), then all weights are increased in proportion to \(x_i\); if \(w^Tx>0\), then all weights are decreased in proportion to \(x_i\).
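This correction rule can be sketched as a single update step. The function name, the class labels, and the sample pattern below are illustrative assumptions, not from the text:

```python
import numpy as np

def correct(w, x, target_class, eta=1.0):
    """One fixed-increment correction step (hypothetical helper, for illustration).

    target_class 1 means w.T x should be > 0; class 2 means it should be < 0.
    Weights change only on a misclassification, in proportion to x.
    """
    if target_class == 1 and w @ x <= 0:
        return w + eta * x   # w.T x too small: increase all weights in proportion to x
    if target_class == 2 and w @ x >= 0:
        return w - eta * x   # w.T x too large: decrease all weights in proportion to x
    return w                 # correctly classified: no change

w = np.zeros(3)
x = np.array([1.0, 2.0, 1.0])          # augmented pattern from class 1
w = correct(w, x, target_class=1)      # w.T x = 0, a misclassification
print(w)                               # -> [1. 2. 1.]
```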

Summary of the Perceptron Convergence Algorithm:

Variables and Parameters:

- \(x(n)\) = (m+1)-by-1 input vector \(= [+1, x_1(n), x_2(n), \dots, x_m(n)]^T\)
- \(w(n)\) = (m+1)-by-1 weight vector \(= [b(n), w_1(n), w_2(n), \dots, w_m(n)]^T\)
- \(b(n)\) = bias
- \(y(n)\) = actual response
- \(d(n)\) = desired response
- \(\eta\) = learning-rate parameter, a positive constant less than unity

1. Initialization: Set w(0) = 0, then perform the following computations for time steps n = 1, 2, ...

2. Activation: At time step n, activate the perceptron by applying input vector x(n) and desired response d(n).

3. Computation of actual response: Compute the actual response of the perceptron:

\(y(n) = \text{sgn}[w^T(n)x(n)]\)

4. Adaptation of weight vector: Update the weight vector of the perceptron:

\(w(n+1) = w(n) + \eta[d(n) - y(n)]x(n)\)

5. Continuation: Increment time step n by 1 and go back to step 2.
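The five steps above can be sketched as a short training loop. This is a minimal illustration, assuming bipolar targets d(n) ∈ {−1, +1} and the convention sgn(0) = +1; the logical-AND dataset at the end is a made-up linearly separable example:

```python
import numpy as np

def train_perceptron(X, d, eta=0.5, max_epochs=100):
    """Perceptron convergence algorithm as summarized above.

    X: patterns as rows, WITHOUT the +1 bias input (it is prepended here);
    d: desired responses in {-1, +1}; eta: learning rate in (0, 1).
    """
    X = np.hstack([np.ones((len(X), 1)), X])   # prepend +1 so that b(n) = w[0]
    w = np.zeros(X.shape[1])                   # step 1: w(0) = 0
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, d):            # step 2: apply x(n) and d(n)
            y = 1.0 if w @ x >= 0 else -1.0    # step 3: y(n) = sgn[w.T(n) x(n)]
            w = w + eta * (target - y) * x     # step 4: adapt the weight vector
            errors += target != y
        if errors == 0:                        # no misclassification in a full pass:
            return w                           # converged (step k0 reached)
    return w

# Linearly separable example: logical AND with bipolar targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([-1.0, -1.0, -1.0, 1.0])
w = train_perceptron(X, d)
converged = all((1.0 if w @ np.r_[1.0, x] >= 0 else -1.0) == t for x, t in zip(X, d))
print(converged)  # -> True
```

Note that step 4 changes the weights only on a misclassification, since d(n) − y(n) = 0 whenever the actual response matches the desired one.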
