Learning factors

Learning factors:

The factors that improve the convergence of the error back-propagation training algorithm (EBPTA) are called learning factors.

The factors are as follows:

  1. Initial weights
  2. Steepness of activation function
  3. Learning constant
  4. Momentum
  5. Network architecture
  6. Necessary number of hidden neurons


1. Initial weights:

  • The weights of the network to be trained are typically initialized at small random values.
  • The initialization strongly affects the ultimate solution: if all weights started out equal and the solution required unequal weights, the network might never train properly. A minimal sketch appears below.
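A minimal initialization sketch in NumPy (the layer sizes, the seed, and the ±0.5 range are illustrative assumptions, not values prescribed by EBPTA):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical sizes: 3 hidden neurons fed by 4 inputs.
# Small random values keep each neuron's net input in the steep
# region of the sigmoid, so derivatives (and hence error signals)
# do not vanish at the start of training.
W = rng.uniform(-0.5, 0.5, size=(3, 4))
print(W.shape)  # (3, 4)
```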


2. Steepness of activation function

  • The neuron’s continuous activation function is characterized by its steepness factor λ.
  • The derivative of the activation function serves as a multiplying factor in building the components of the error signal vectors; both are sketched below.
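A sketch of the unipolar sigmoid often used in EBPTA derivations, with the steepness factor written as `lam` (the function and parameter names are mine):

```python
import numpy as np

def f(net, lam=1.0):
    # Unipolar continuous activation; lam is the steepness factor.
    return 1.0 / (1.0 + np.exp(-lam * net))

def f_prime(net, lam=1.0):
    # Derivative lam * f * (1 - f): this is the multiplying factor
    # in the error-signal components. It peaks at net = 0, and a
    # larger lam produces proportionally larger weight corrections.
    y = f(net, lam)
    return lam * y * (1.0 - y)
```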


3. Learning constant:

  • The effectiveness and convergence of the error back-propagation learning algorithm depend significantly on the value of the learning constant η: a small value gives slow but stable learning, while a large value speeds learning but can overshoot the minimum (see the sketch below).
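A minimal sketch of the plain gradient-descent step, with the learning constant written as `eta` (the 0.1 default is an illustrative assumption):

```python
def step(W, grad_E, eta=0.1):
    # One weight update: eta scales the correction applied against
    # the error gradient. A small eta converges slowly but stably;
    # a large eta learns faster but may oscillate or diverge.
    return W - eta * grad_E
```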


4. Momentum:

  • The purpose of the momentum method is to accelerate the convergence of the error back-propagation learning algorithm.
  • The method involves supplementing the current weight adjustment with a fraction of the most recent weight adjustment, as sketched below.
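A sketch of the same step with a momentum term `alpha` added (both constants are illustrative assumptions):

```python
def step_with_momentum(W, grad_E, prev_dW, eta=0.1, alpha=0.9):
    # Current adjustment = gradient step plus a fraction alpha of
    # the most recent adjustment, which smooths and accelerates
    # descent along consistent gradient directions.
    dW = -eta * grad_E + alpha * prev_dW
    return W + dW, dW  # return dW so the caller can pass it back in
```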


5. Network architecture:

  • One of the most important attributes of a layered neural network design is the choice of its architecture, i.e., the number of layers and the number of neurons in each layer.
  • The number of input nodes is simply determined by the dimension, or size, of the input vector to be classified. The input vector size usually corresponds to the total number of distinct features of the input patterns, as in the sizing sketch below.
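An illustrative sizing sketch (the pattern and class counts below are assumptions, not taken from the text):

```python
n_inputs = 64    # e.g. an 8x8 binary pattern -> 64 distinct features
n_outputs = 10   # typically one output neuron per class
n_hidden = 12    # the designer's choice -- see point 6 below
layer_sizes = (n_inputs, n_hidden, n_outputs)
```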


6. Necessary number of hidden neurons:

  • The problem of choosing the size of the hidden layer is under intensive study, with no conclusive answers available.
  • One formula can, however, be used to estimate how many hidden-layer neurons J are needed to achieve classification into M classes in x-dimensional pattern space; a commonly cited version is sketched below.
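The text does not reproduce the formula, so the sketch below assumes the commonly cited partitioning bound (given, for example, in Zurada's treatment of EBPTA): J hidden neurons act as J hyperplanes that divide x-dimensional pattern space into at most M = C(J,0) + C(J,1) + … + C(J,x) separable regions, so J is taken as the smallest value for which this sum reaches M.

```python
from math import comb

def max_regions(J, x):
    # Maximum number of regions into which J hyperplanes (hidden
    # neurons) can partition x-dimensional space. comb(J, k) is 0
    # for k > J, so the sum also handles the J <= x case correctly.
    return sum(comb(J, k) for k in range(x + 1))

def hidden_neurons_needed(M, x):
    # Smallest J whose region count reaches M classes.
    J = 1
    while max_regions(J, x) < M:
        J += 1
    return J

print(hidden_neurons_needed(M=8, x=2))  # -> 4
```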