
Stochastic Search Algorithms for Identification, Optimization, and Training of Artificial Neural Networks



Authors

Kostantin P. Nikolic
21000 Novi Sad, Serbia

Abstract


This paper presents stochastic search algorithms (SSA) suitable for the effective identification, optimization, and training of artificial neural networks (ANN). The author introduces a modified nonlinear stochastic direct search algorithm (MN-SDS), whose main objective is to improve the convergence of the original nonlinear stochastic search (N-SDS) method defined by Professor Rastrigin. Given the vast range of possible algorithms and procedures, the so-called stochastic direct search (SDS) method was adopted (in the literature it is also known as stochastic local search, SLS). The convergence of MN-SDS is considerably better than that of N-SDS; indeed, it outperforms a range of gradient-based optimization procedures. SDS, that is, SLS, has not been applied widely enough to the identification, optimization, and training of ANN. Its efficiency in some cases of purely nonlinear systems makes it suitable for the optimization and training of ANN. The presented examples illustrate, in part, the operation and efficiency of SDS, that is, MN-SDS. The backpropagation of error (BPE) method was used for comparison.
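To make the algorithm class concrete, the following is a minimal sketch of plain stochastic direct search (stochastic local search) used to train a tiny feedforward network on the XOR problem. This is an illustration of the generic SDS idea only, not the author's MN-SDS; the network size (2-4-1), step size, iteration count, and acceptance rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set: a small, purely nonlinear problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(w, X):
    # Unpack a flat parameter vector into a 2-4-1 network
    # (tanh hidden layer, sigmoid output).
    W1 = w[:8].reshape(2, 4)
    b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1)
    b2 = w[16:17]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    # Mean-squared error over the training set.
    return float(np.mean((forward(w, X) - y) ** 2))

def sds_train(n_params=17, step=0.5, iters=20000):
    # Stochastic direct search: take a random trial step from the
    # current point and accept it only if the loss improves.
    w = rng.normal(0.0, 1.0, n_params)
    best = loss(w)
    for _ in range(iters):
        trial = w + step * rng.normal(0.0, 1.0, n_params)
        trial_loss = loss(trial)
        if trial_loss < best:
            w, best = trial, trial_loss
    return w, best

w, final_loss = sds_train()
print(final_loss)  # mean-squared error after the search
```

No gradients are computed anywhere: the only information used is the loss value at randomly perturbed points, which is what makes such methods applicable when the network or objective is not differentiable. MN-SDS, as described in the paper, additionally modifies the search to improve convergence over this plain scheme.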