
Improving the Character Recognition Efficiency of Feed-Forward BP Neural Network




Authors

Amit Choudhary
Department of Computer Science, Maharaja Surajmal Institute, New Delhi, India
Rahul Rishi
Department of Computer Science and Engineering, TITS, Bhiwani, Haryana, India

Abstract


This work focuses on improving the character recognition capability of a feed-forward back-propagation neural network by using one, two, and three hidden layers together with a modified additional momentum term. A set of 182 English letters was collected for this work, and the equivalent binary matrix form of these characters was applied to the network as training patterns. While the network was being trained, the connection weights were modified at each epoch of learning. For each training sample, the error surface was searched for a minimum by gradient descent. The experiment started with one hidden layer, and the number of hidden layers was then increased up to three; the accuracy of the network increased, with a lower mean square error, but at the cost of longer training time. The recognition accuracy improved further when the modified additional momentum term was used.
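
The abstract describes the classical back-propagation weight update extended with a momentum term, Δw(t) = −η ∂E/∂w + α Δw(t−1), where η is the learning rate and α the momentum coefficient. The following is a minimal sketch of such a training loop in Python/NumPy, not the authors' implementation: the layer sizes, learning rate, momentum coefficient, and the random binary stand-in patterns are all illustrative assumptions, and the classical momentum form stands in for the paper's modified additional momentum term, which the abstract does not specify.

import numpy as np

# Minimal sketch (not the authors' code): a one-hidden-layer MLP trained by
# back-propagation with a classical momentum term,
#   dW(t) = -eta * grad + alpha * dW(t-1).
# Layer widths, eta, and alpha are illustrative assumptions.

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 35, 20, 26   # e.g. a 7x5 binary character grid -> 26 letters (assumed)
eta, alpha = 0.5, 0.9             # learning rate and momentum coefficient (assumed)

# Stand-in data: 182 random binary "character" patterns and one-hot letter labels,
# mirroring the paper's 182-letter training set in shape only.
X = rng.integers(0, 2, size=(182, n_in)).astype(float)
T = np.eye(n_out)[rng.integers(0, n_out, size=182)]

W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)   # previous weight changes
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Back-propagate the mean-squared error E = 0.5 * mean ||Y - T||^2.
    dY = (Y - T) * Y * (1 - Y) / len(X)
    dH = (dY @ W2.T) * H * (1 - H)

    # Momentum-modified weight updates: new step = -eta * gradient + alpha * previous step.
    vW2 = -eta * (H.T @ dY) + alpha * vW2; W2 += vW2
    vb2 = -eta * dY.sum(axis=0) + alpha * vb2; b2 += vb2
    vW1 = -eta * (X.T @ dH) + alpha * vW1; W1 += vW1
    vb1 = -eta * dH.sum(axis=0) + alpha * vb1; b1 += vb1

    if epoch % 200 == 0:
        print(f"epoch {epoch:4d}  MSE {np.mean((Y - T) ** 2):.4f}")

Extending the sketch to the two- and three-hidden-layer configurations studied in the paper amounts to repeating the hidden-layer forward and backward steps once or twice more, at a corresponding cost in training time per epoch.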

Keywords


Character Recognition, MLP, Hidden Layers, Back-Propagation, Momentum Term.