
Improving the Performance of Neural Networks through Parallel Processing in the Cell Broadband Engine

Posted on: 2011-12-20
Degree: M.A.Sc.
Type: Thesis
University: Carleton University (Canada)
Candidate: Boiko, Yuri
GTID: 2448390002954766
Subject: Engineering
Abstract/Summary:
This thesis explores parallelization approaches for improving the performance of artificial neural networks (ANNs). Its main goal is to define routes for the parallel computation of this problem on the multi-core Cell Broadband Engine. In particular, a new design for parallel tracing of the gradient descent algorithm demonstrated the feasibility of efficiently finding viable solutions for the approximation of 2D non-linear functions and the prediction of 1D time series by neural networks.

One objective was to identify the parameters of the gradient descent algorithm that can be used to parallelize the 2D function approximation and 1D time series prediction tasks, in terms of the speed and accuracy of the delivered solutions, and to obtain fast convergence to optimal solutions. Specifically, for 2D function approximation, entrapment in local minima was addressed by tracing the converging trajectories in parallel while verifying the optimality of the solutions.

For 1D time series prediction, the original task involves multiple-input-multiple-output, multi-dimensional neural networks and is therefore challenging for the gradient descent algorithm, posing problems of speed and convergence. In this case, the goal was to verify the efficiency of splitting the multiple-step forecasting task into several sub-tasks with various forecasting horizons, in order to achieve fast and accurate forecasting solutions. The sub-tasks with various forecasting horizons extracted from the complex task each require only a simpler, multiple-input-single-output type of neural network. The objective was to demonstrate the improved efficiency of such an approach in a parallel computing environment, reaching fast and accurate solutions with the gradient descent algorithm.
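The idea of parallel tracing of gradient-descent trajectories to escape local minima can be illustrated with a minimal sketch. This is not the thesis's actual Cell/SPE code: the loss function is a toy non-convex surface, and a thread pool stands in for the Cell's SPE cores; all names here are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def loss(w):
    # Toy non-convex loss with several local minima, standing in for a
    # neural network's training error surface (hypothetical example).
    return np.sin(3.0 * w) + 0.1 * w ** 2

def grad(w, eps=1e-6):
    # Central-difference numerical gradient of the loss.
    return (loss(w + eps) - loss(w - eps)) / (2.0 * eps)

def descend(w0, lr=0.05, steps=500):
    # Trace one gradient-descent trajectory from the starting point w0.
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w, loss(w)

def parallel_trace(starts):
    # Run one trajectory per starting point concurrently and keep the best
    # converged solution; on the Cell, each trajectory would be dispatched
    # to its own SPE core rather than a Python thread.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(descend, starts))
    return min(results, key=lambda r: r[1])

starts = np.linspace(-4.0, 4.0, 8)
w_best, loss_best = parallel_trace(starts)
print(w_best, loss_best)
```

Because the trajectories are independent, at least one tends to land in the global basin even when most starting points are trapped in shallower local minima, and comparing the converged losses verifies which solution is best.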
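The splitting of the multiple-step forecasting task into per-horizon sub-tasks can be sketched as follows. As a simplifying assumption, linear least-squares models stand in for the multiple-input-single-output neural networks, and the series, lag count, and horizons are all illustrative choices, not values from the thesis.

```python
import numpy as np

# Hypothetical 1D time series: a noisy sine wave.
rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

LAGS, HORIZONS = 10, 3

def make_dataset(series, lags, horizon):
    # Each sub-task gets its own (X, y): inputs are the last `lags`
    # values, the target is the value `horizon` steps ahead.
    X, y = [], []
    for i in range(lags, series.size - horizon):
        X.append(series[i - lags:i])
        y.append(series[i + horizon - 1])
    return np.asarray(X), np.asarray(y)

# One simple single-output model per forecasting horizon (here linear
# least squares, standing in for a multiple-input-single-output network).
models = []
for h in range(1, HORIZONS + 1):
    X, y = make_dataset(series, LAGS, h)
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    models.append(w)

# Forecast the next HORIZONS values from the last observed window.
window = np.r_[series[-LAGS:], 1.0]
forecast = np.array([window @ w for w in models])
print(forecast)
```

Since the per-horizon sub-tasks share no state, each model can be trained on a separate core in parallel, which is the source of the speed-up the approach aims for.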
Keywords/Search Tags: Parallel, Gradient descent algorithm, Neural networks, Solutions