
Research On GPU Based Spiking Neural Network Learning

Posted on: 2016-08-24    Degree: Master    Type: Thesis
Country: China    Candidate: L J Li    Full Text: PDF
GTID: 2348330479454699    Subject: Computer technology
Abstract/Summary:
A spiking neural network (SNN), which represents and processes information as spike trains with precise firing times, is a powerful and effective tool for complex spatiotemporal information processing. In supervised SNN learning, an important branch of SNN research, researchers have proposed a variety of algorithms that achieve very good learning performance. However, learning speed remains a major obstacle to deploying SNNs widely in real-world applications, especially those with very high dimensionality and extremely large-scale data.

Exploiting the inherent parallelism of SNNs, studying parallelization methods for their training, and accelerating the training of large-scale SNNs are the main objectives of this project. SNN simulation can be accelerated on supercomputers, computer clusters, Graphics Processing Units (GPUs), or dedicated hardware architectures such as FPGAs. Among these, the GPU is a standard component of an ordinary computer system, offering a high performance-price ratio, low power consumption, high parallelism, and powerful computing capability. Therefore, this project uses GPUs to speed up the training of large-scale SNNs.

On the Compute Unified Device Architecture (CUDA), an SNN parallel computation platform is built from the Central Processing Unit (CPU) and GPUs of a personal computer (PC). On this platform, the SNN is mapped onto an array of GPU threads using neuronal-synaptic parallelism (NS-parallel), and several kernels carry out the following functions: computing neuron input currents, finding firing neurons, propagating spikes to target neurons, and adjusting synaptic weights. The Multi-ReSuMe algorithm, which is suited to multilayer feedforward network structures, is used to design and implement the GPU-based supervised SNN learning. GPU memory consumption is reduced through a sparse representation of SNN parameters and an improved spike-event representation.
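The per-timestep kernel sequence described above can be sketched as follows. This is an illustrative approximation only, not the thesis's actual CUDA implementation: the function names, the leaky-integrate-and-fire dynamics, and the instantaneous ReSuMe-style error term are all assumptions, written as vectorized NumPy operations standing in for the GPU kernels.

```python
import numpy as np

def snn_step(v, weights, spiked_prev, v_thresh=1.0, decay=0.9):
    """One simulation timestep, mirroring the kernels named in the text:
    compute input currents, find firing neurons, and (implicitly, via the
    matrix product) propagate spikes to target neurons.
    weights[i, j] is the synapse from neuron i to neuron j."""
    # Kernel 1: each neuron's input current is the weighted sum of
    # incoming spikes from the previous step.
    current = weights.T @ spiked_prev
    # Leaky integration of the membrane potential (assumed neuron model).
    v = decay * v + current
    # Kernel 2: find neurons whose potential crosses the threshold.
    spiked = v >= v_thresh
    # Reset fired neurons before the next step.
    v = np.where(spiked, 0.0, v)
    return v, spiked.astype(float)

def resume_update(weights, pre_spikes, post_spikes, target_spikes, lr=0.05):
    """ReSuMe-flavored weight adjustment (Kernel 4): potentiate synapses
    whose presynaptic neuron was active when a *desired* output spike
    occurred, and depress those active at an *extra* actual spike.
    This crude instantaneous form ignores the learning windows used by
    the real algorithm."""
    err = target_spikes - post_spikes  # +1: missing spike, -1: extra spike
    return weights + lr * np.outer(pre_spikes, err)
```

On a GPU, each of these dense array operations would be a kernel launch over one thread per neuron or per synapse; the NumPy form only shows the data flow between the kernels.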
To improve GPU utilization, a local code buffer is adopted to minimize the adverse impact of GPU thread divergence. SNNs of various scales are simulated in both CPU and GPU modes, and the running times and speedup ratios of the two modes are compared. To build and train the SNNs in both modes, two data sets of different scales (Iris and Duke breast-cancer) are chosen as training data. The experimental results show that the GPU-based SNN performs well on large-scale network training, with training speed greatly improved over the sequential algorithms.
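The speedup ratio compared in the experiments is simply the sequential (CPU-mode) runtime divided by the parallel (GPU-mode) runtime; a small helper (hypothetical name, not from the thesis) makes the arithmetic explicit:

```python
def speedup(t_cpu_seconds, t_gpu_seconds):
    """Speedup ratio of GPU mode over CPU mode: t_cpu / t_gpu.
    Values greater than 1 mean the GPU run was faster."""
    if t_gpu_seconds <= 0:
        raise ValueError("GPU time must be positive")
    return t_cpu_seconds / t_gpu_seconds
```

For example, a training run taking 120 s sequentially and 10 s on the GPU has a speedup of 12x.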
Keywords/Search Tags:Spiking neural network, parallelism, Graphics Processing Units, Supervised learning, ReSuMe algorithm