
Training and optimizing distributed neural networks using a genetic algorithm

Posted on: 2011-09-15
Degree: Ph.D
Type: Dissertation
University: Nova Southeastern University
Candidate: McMurtrey, Shannon D
GTID: 1448390002951390
Subject: Computer Science
Abstract/Summary:
Parallelizing neural networks is an active area of research. Current approaches center on parallelizing the widely used back-propagation (BP) algorithm, which carries substantial communication overhead and is therefore a poor fit for parallelization. A training algorithm that does not depend on computing derivatives and propagating errors backward lends itself far better to a parallel implementation.

One well-known training algorithm for neural networks explicitly incorporates network structure into the objective function to be minimized, which yields simpler neural networks. Prior work implemented this using a modified genetic algorithm in a serial fashion that does not scale, limiting its usefulness.

This dissertation develops a parallel version of that algorithm. The performance of the proposed algorithm is compared against the existing serial algorithm on a variety of synthetic and real-world problems. Computational experiments with benchmark datasets indicate that the parallel algorithm proposed in this research outperforms the serial version from prior research, finding better minima in the same amount of time while also identifying simpler architectures.
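To make the approach concrete, the following is a minimal sketch of the general technique the abstract describes, not the dissertation's implementation: a genetic algorithm trains a small feed-forward network without derivatives, the fitness adds a structural penalty so simpler networks score better, and the independent fitness evaluations are farmed out in parallel. The XOR task, tanh network shape, penalty weight, pruning threshold, and population settings below are all illustrative assumptions.

```python
# Sketch: derivative-free GA training with a complexity-penalized fitness,
# with per-candidate fitness evaluated in parallel. All hyperparameters
# here are assumed for illustration, not taken from the dissertation.
import numpy as np
from multiprocessing import Pool

RNG = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 2, 4, 1           # assumed toy architecture
GENOME = N_IN * N_HID + N_HID * N_OUT  # one gene per connection weight

# Toy dataset: XOR, a standard benchmark for small networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(genome, x):
    # Decode the flat genome into two weight matrices and run the network.
    w1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(x @ w1) @ w2)

def fitness(genome):
    # Error term plus a structural penalty: near-zero weights count as
    # pruned connections, so sparser (simpler) networks score better.
    # The 0.05 threshold and 0.01 weight are assumed stand-ins for the
    # structure term in the dissertation's objective.
    err = np.mean((forward(genome, X) - Y) ** 2)
    complexity = np.count_nonzero(np.abs(genome) > 0.05) / GENOME
    return err + 0.01 * complexity

def evolve(generations=200, pop_size=50, workers=4):
    pop = RNG.normal(0, 1, size=(pop_size, GENOME))
    with Pool(workers) as pool:
        for _ in range(generations):
            # Fitness evaluations are independent, so they parallelize
            # cleanly -- the property the abstract contrasts with BP,
            # which must communicate gradients every step.
            scores = np.array(pool.map(fitness, pop))
            elite = pop[np.argsort(scores)[: pop_size // 5]]  # keep top 20%
            # Refill the population with mutated copies of the elites.
            kids = elite[RNG.integers(len(elite), size=pop_size - len(elite))]
            kids = kids + RNG.normal(0, 0.1, size=kids.shape)
            pop = np.vstack([elite, kids])
    return pop[np.argsort([fitness(g) for g in pop])[0]]

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```

Because each candidate's fitness is a self-contained evaluation, the only synchronization point per generation is collecting scores, which is what makes this family of algorithms more amenable to parallel implementation than gradient-based training.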
Keywords/Search Tags: Neural networks, Genetic algorithm, Parallelization