
Research On Deep Unsupervised Learning Algorithm

Posted on: 2016-09-30    Degree: Master    Type: Thesis
Country: China    Candidate: Y P Yue    Full Text: PDF
GTID: 2208330467999677    Subject: Applied Mathematics
Abstract/Summary:
Deep learning, a recent development in artificial intelligence, has been widely applied in computer vision, speech recognition, machine translation, semantic mining and other fields. In 2014, MIT Technology Review even ranked deep learning first among the ten breakthrough technologies of 2013. Although deep learning has produced a series of important research achievements, it still faces challenges in theoretical analysis, the representation of high-dimensional feature spaces, and unsupervised learning. Starting from the deep learning approach put forward by Hinton et al. in the journal Science in 2006, this dissertation establishes mathematical optimization models of deep supervised learning and deep unsupervised learning, presents a deep network parameter learning method based on greedy layer-wise unsupervised auto-encoders, derives the mathematical formulation of gradient instability in deep supervised learning, and combines these results with big data processing technology. This work may provide theoretical and methodological guidance for thinking about and solving problems in deep learning. The main work and results are as follows:

(1) Motivated by the observation, drawn from the human visual information processing system, that deep networks have more expressive power than shallow networks, the dissertation first analyzes the motivation for building deep networks and introduces the architecture of deep networks.

(2) Starting from parameter learning in traditional shallow networks, the dissertation establishes the mathematical optimization model of deep supervised learning with gradient descent and the error back-propagation algorithm, and verifies the deep supervised model on the MNIST handwritten digit database in numerical experiment I. The results show that purely supervised training is not suitable for learning the parameters of deep networks.

(3) Based on an analysis of why supervised learning fails for deep network parameter learning, the idea of greedy layer-wise unsupervised learning is introduced to overcome gradient instability, the high cost of acquiring supervised training data, and the sensitivity of gradient descent to initial values, and a deep network parameter learning method based on greedy layer-wise unsupervised auto-encoding is proposed (sketched below).

(4) The same MNIST handwritten digit database is used in numerical experiment II to validate the parameter learning method proposed in (3). Comparison with the results of numerical experiment I indicates that deep network parameter learning based on greedy layer-wise unsupervised encoding effectively improves classification accuracy on the test set.

(5) Inspired by findings in biology on how infants come to know the world, namely that large amounts of data improve the effect of a learning algorithm, deep unsupervised learning on large-scale data is studied in the "complex model and big data" setting. Mini-batch gradient descent and incremental gradient descent are used to optimize the algorithm, and PyCUDA parallel computing steps for gradient descent as well as a Map-Reduce decomposition of gradient descent are proposed for data parallelism (sketched below).
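Items (3) and (4) describe deep network parameter learning based on greedy layer-wise unsupervised auto-encoders. The following is a minimal NumPy sketch of that idea, not the thesis's actual implementation: the layer sizes, learning rate, sigmoid activations, tied weights and squared-error reconstruction loss are illustrative assumptions.

# Minimal sketch of greedy layer-wise unsupervised pre-training with
# auto-encoders (items (3)-(4)). Hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=20, batch=64):
    """Train one auto-encoder layer on X; return (W, b) of its encoder."""
    n_visible = X.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))   # encoder weights (tied with decoder)
    b = np.zeros(n_hidden)                           # encoder bias
    c = np.zeros(n_visible)                          # decoder bias
    for _ in range(epochs):
        for i in range(0, len(X), batch):            # mini-batch loop
            x = X[i:i + batch]
            h = sigmoid(x @ W + b)                   # encode
            x_hat = sigmoid(h @ W.T + c)             # decode with tied weights
            err = x_hat - x                          # reconstruction error
            # Back-propagate the squared reconstruction error through both halves.
            d_dec = err * x_hat * (1 - x_hat)
            d_enc = (d_dec @ W) * h * (1 - h)
            grad_W = x.T @ d_enc + d_dec.T @ h       # tied-weight gradient
            W -= lr * grad_W / len(x)
            b -= lr * d_enc.mean(axis=0)
            c -= lr * d_dec.mean(axis=0)
    return W, b

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pre-training: each layer encodes the previous layer's output."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)                       # feed forward to the next layer
    return params                                    # initial weights for supervised fine-tuning

# Example: pre-train a 784-256-64 encoder stack on random stand-in "MNIST" data.
X_demo = rng.random((512, 784))
weights = pretrain_stack(X_demo, [256, 64])

The resulting weights would serve only as an initialization; the thesis's subsequent supervised fine-tuning and the MNIST experiments themselves are not reproduced here.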
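Item (5) mentions mini-batch gradient descent and a Map-Reduce decomposition of gradient descent for data parallelism. The sketch below illustrates the data-parallel decomposition on a toy least-squares model; the shard layout, model and learning rate are assumptions made for this example, and the thesis's PyCUDA implementation is not reproduced.

# Minimal sketch of a Map-Reduce style decomposition of gradient descent (item (5)).
import numpy as np

def shard_gradient(w, X_shard, y_shard):
    """Map step: each worker computes the unscaled gradient sum on its own data shard."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual

def mapreduce_gd_step(w, shards, lr=0.01):
    """Driver: collect per-shard gradients (map), sum them (reduce), take one descent step."""
    n_total = sum(len(y) for _, y in shards)
    grad = sum(shard_gradient(w, X, y) for X, y in shards) / n_total
    return w - lr * grad

# Example: split a toy regression problem over four "workers".
rng = np.random.default_rng(1)
X, w_true = rng.random((400, 5)), np.arange(5.0)
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]
w = np.zeros(5)
for _ in range(200):
    w = mapreduce_gd_step(w, shards, lr=0.5)   # w approaches w_true

Because the overall gradient is a sum of per-example terms, the per-shard sums can be computed independently and combined, which is the property the data-parallel and Map-Reduce formulations in item (5) rely on.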
Keywords/Search Tags: Machine Learning, Deep Learning, Deep Unsupervised Learning, Parallel Computing, Learning of Large Scale Data