
Research On Parallel Sparse Deep Belief Networks

Posted on: 2019-02-06
Degree: Master
Type: Thesis
Country: China
Candidate: X Y Bai
Full Text: PDF
GTID: 2428330572458918
Subject: Circuits and Systems
Abstract/Summary:
In recent years, artificial intelligence and deep learning have made great progress, and deep neural networks have been successfully applied in fields such as pattern recognition and computer vision. The rapid improvement in the learning ability of deep learning algorithms has benefited from layer-wise training. The Deep Belief Network (DBN), which is composed of stacked Restricted Boltzmann Machines (RBMs), is considered one of the most effective deep learning models. However, without any constraints the restricted Boltzmann machine may produce redundant features, whereas learning abstract features that give the model better generalization performance is essential for artificial intelligence. This thesis optimizes the training algorithm of the deep belief network: it proposes a multiobjective optimization model based on the classical restricted Boltzmann machine training algorithm, employs an evolutionary algorithm to learn sparse features and reduce feature redundancy, and then presents a parallel implementation of the sparse DBN on the GPU to reduce training time. The main contributions are as follows:

(1) To address the over-fitting problem, this thesis proposes a sparse restricted Boltzmann machine training algorithm based on multiobjective optimization that learns more hierarchical representations and sparse features. An evolutionary algorithm optimizes the distortion (reconstruction) error and the sparsity of the hidden units simultaneously, which avoids the user-defined constant that would otherwise trade off the regularization term against the reconstruction error. To increase population diversity and convergence ability, a quantum mechanism is added to the algorithm. In the experiments, the learned sparse features are applied to image recognition problems, and the sparse deep belief network achieves better performance. Results on the MNIST and CIFAR-10 datasets show that the approach learns useful sparse features without a user-defined constant and outperforms other feature learning models.

(2) To address the high complexity of training the deep belief network model, a GPU-based parallel sparse deep belief network is proposed. The strong floating-point throughput of the GPU greatly shortens training time while the network still learns sparse features that yield better classification results. Experimental results show that the parallel algorithm achieves significant speedups over a previous CPU implementation.

(3) Finally, the parallel sparse deep belief network is applied to the practical problem of facial expression recognition. Facial expression reflects inner emotion, and expression recognition is an important step in human-machine interaction. The key to facial expression recognition is extracting high-level abstract features from the face image; hand-crafted feature extraction is uncertain and easily loses information. To solve this problem, the strong non-linear representation ability of the sparse restricted Boltzmann machine is used to learn sparse features directly from the original samples. Experiments on the JAFFE and CK+ datasets show improved recognition performance on facial expression problems.
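To make the bi-objective formulation in contribution (1) concrete, the sketch below evaluates candidate RBM weight sets on two objectives at once, reconstruction error and mean hidden activation (a sparsity proxy), and keeps the Pareto-non-dominated candidates. This is only a minimal illustration of the idea, not the thesis's actual algorithm: the population setup, the `rbm_objectives` and `dominates` helpers, and all sizes are hypothetical, and the quantum evolutionary operators are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_objectives(W, b_h, b_v, V):
    """Score one candidate RBM on the two competing objectives.

    Objective 1: mean squared reconstruction error of the visible data.
    Objective 2: mean hidden activation (lower = sparser hidden code).
    """
    H = sigmoid(V @ W + b_h)          # hidden-unit probabilities
    V_rec = sigmoid(H @ W.T + b_v)    # one-step reconstruction
    return np.mean((V - V_rec) ** 2), np.mean(H)

def dominates(f, g):
    """Pareto dominance: f is no worse in both objectives, strictly better in one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

# Toy binary data: 32 visible vectors of length 16; RBMs with 8 hidden units.
V = (rng.random((32, 16)) > 0.5).astype(float)
population = [(0.1 * rng.standard_normal((16, 8)), np.zeros(8), np.zeros(16))
              for _ in range(10)]
scores = [rbm_objectives(W, bh, bv, V) for W, bh, bv in population]

# The current Pareto front: candidates not dominated by any other candidate.
front = [s for s in scores
         if not any(dominates(t, s) for t in scores if t is not s)]
```

Because both objectives are optimized jointly, no scalar weight is needed to trade sparsity against reconstruction error, which is the point the abstract makes about avoiding a user-defined constant.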
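Contribution (2) rests on the fact that contrastive-divergence training of an RBM is dominated by dense matrix products over a mini-batch, exactly the workload GPU BLAS kernels accelerate. The NumPy sketch below shows a batched CD-1 update; on a GPU the same array expressions would run through a CUDA-backed library, but this is a hedged illustration with made-up sizes, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_h, b_v, V0, lr=0.05):
    """One CD-1 step over a whole mini-batch.

    Every phase is a dense matrix product over the batch, so the
    whole update parallelizes naturally across GPU cores.
    """
    H0 = sigmoid(V0 @ W + b_h)                        # positive phase
    H0_sample = (rng.random(H0.shape) < H0).astype(float)
    V1 = sigmoid(H0_sample @ W.T + b_v)               # reconstruction
    H1 = sigmoid(V1 @ W + b_h)                        # negative phase
    n = V0.shape[0]
    W += lr * (V0.T @ H0 - V1.T @ H1) / n             # weight gradient estimate
    b_h += lr * np.mean(H0 - H1, axis=0)
    b_v += lr * np.mean(V0 - V1, axis=0)
    return W, b_h, b_v

# Toy run: 64 binary samples, 20 visible units, 10 hidden units.
V = (rng.random((64, 20)) > 0.5).astype(float)
W = 0.01 * rng.standard_normal((20, 10))
b_h, b_v = np.zeros(10), np.zeros(20)

def recon_error(W, b_h, b_v, V):
    H = sigmoid(V @ W + b_h)
    return np.mean((V - sigmoid(H @ W.T + b_v)) ** 2)

err_before = recon_error(W, b_h, b_v, V)
for _ in range(50):
    W, b_h, b_v = cd1_update(W, b_h, b_v, V)
err_after = recon_error(W, b_h, b_v, V)
```

Because the batch dimension appears in every product, larger mini-batches map onto wider GPU kernels; this is the source of the speedups over the CPU implementation reported in the abstract.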
Keywords/Search Tags:deep belief network, quantum multiobjective optimization, GPU, sparse restricted Boltzmann machine, expression recognition