
Incremental Recognition System Based On Ganglia Differentiation

Posted on: 2018-11-28
Degree: Master
Type: Thesis
Country: China
Candidate: J J Hu
Full Text: PDF
GTID: 2348330512498297
Subject: Signal and Information Processing
Abstract/Summary:
Thanks to the burst of DNN (Deep Neural Network) methods and the arrival of the big-data era, machine learning has revolutionized computer vision, bioinformatics, and natural language processing over the past decade, yet the structure of a traditional DNN remains unchanged throughout training. This solidification of the network structure limits the capacity of the DNN system and cannot meet the practical requirement of learning and representing online samples that are produced continuously. An existing DNN must be trained on the entire collected dataset in advance, meaning that once training is over, the network no longer absorbs any further information. When new samples arrive, all parameters must be changed and the model retrained, which not only destroys existing knowledge but also incurs huge memory and time consumption. In this paper, an incremental recognition method based on ganglia differentiation is proposed; it can accommodate online data and can be used in a process of incremental growth. Combining a traditional DNN with a clustering algorithm, the method adjusts the network structure adaptively while maintaining strong learning ability, and learns new knowledge from new samples incrementally. Cluster analysis techniques and incremental neural networks are studied and discussed in detail. The main contributions are as follows:

1. A neural network model with structural plasticity is proposed. The model is composed of multi-layer RBMs (Restricted Boltzmann Machines) and a ganglia differentiation layer, yielding a branched structure (see the RBM sketch below). Raw data containing a large amount of redundant information is compressed and encoded into three-dimensional feature maps in the lower hidden layers, and the destination of each map is determined in the ganglia differentiation layer, which sits between the lower and higher hidden layers and extracts the characteristics of the feature maps. Through activation and differentiation in the ganglia layer, ganglia are added or removed automatically to decide the fate of different samples. A ganglion here refers to a cluster of neural network nodes characterizing samples with similar distributions. When a ganglion is activated by an input sample, the sample is added to the independent sample set corresponding to that ganglion in the higher hidden layers. Different samples activate different ganglia, so the sample sets of the higher hidden layers are updated adaptively. The higher hidden layers then learn higher-level features, and the independent sample sets form characteristic memories. The number of feature sets in the higher hidden layers is variable and always equals the number of ganglia.

2. A clustering structure based on deep learning is proposed. Applying incremental learning algorithms to deep neural networks is a new research direction in deep learning. However, the uncertainty and instability of incremental learning algorithms often seriously degrade classification accuracy and produce counterproductive results. Comparatively little work applies incremental algorithms to unsupervised tasks on large-scale images, and one important reason is that the redundant information in raw pixels is difficult to cluster automatically. In this paper, the compacted feature maps, rather than raw pixel-level images, are fed to the ganglia layer, which ensures the incremental character of the new network (see the CFSFDP sketch below).
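The abstract names multi-layer RBMs as the lower encoding layers but does not give code. The following is a minimal sketch of one such layer, assuming a Bernoulli RBM trained with one-step contrastive divergence (CD-1); the class name, layer sizes, and learning rate are illustrative choices, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence
    (CD-1); a sketch of the lower encoding layers, not the thesis code."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def encode(self, v):
        # Hidden activation probabilities serve as the compact feature map.
        return self._sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        # Positive phase, one sampling step, and reconstruction.
        h0 = self.encode(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self._sigmoid(h_sample @ self.W.T + self.b_v)
        h1 = self.encode(v1)
        # Parameter update from the CD-1 gradient estimate.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Stacking: train one RBM, encode, then feed the codes to the next RBM.
rbm = RBM(n_visible=784, n_hidden=128)
batch = rng.random((32, 784))          # stand-in for flattened images
for _ in range(10):
    rbm.cd1_step(batch)
features = rbm.encode(batch)           # input to the ganglia layer
```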
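The keyword list names CFSFDP (clustering by fast search and find of density peaks; Rodriguez and Laio, 2014), presumably the algorithm behind ganglion formation. Below is a minimal NumPy sketch of CFSFDP over flattened feature vectors, assuming a Gaussian density kernel; the cutoff distance `dc` and the fixed cluster count are simplifying assumptions, since the original method selects centers from a decision graph.

```python
import numpy as np

def cfsfdp(features, dc, n_clusters):
    """Minimal CFSFDP: cluster centers are points with both high local
    density rho and large distance delta to any denser point; all other
    points inherit the label of their nearest denser neighbor."""
    n = len(features)
    # Pairwise Euclidean distances between flattened feature maps.
    dist = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    # Local density via a Gaussian kernel with cutoff distance dc
    # (subtract 1 to remove each point's self-contribution).
    rho = np.exp(-(dist / dc) ** 2).sum(axis=1) - 1.0
    # delta: distance to the nearest point of strictly higher density.
    order = np.argsort(-rho)               # indices by decreasing density
    delta = np.full(n, dist.max())
    nearest_denser = np.full(n, -1)
    for i, p in enumerate(order[1:], start=1):
        denser = order[:i]
        j = denser[np.argmin(dist[p, denser])]
        delta[p] = dist[p, j]
        nearest_denser[p] = j
    # Centers ("ganglia" in the thesis's terms): largest gamma = rho * delta.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    # Sweep in density order so every point's denser neighbor is labeled.
    for p in order:
        if labels[p] == -1:
            labels[p] = labels[nearest_denser[p]]
    return labels, centers

# Toy usage: two Gaussian blobs standing in for encoded feature maps.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.3, (20, 8)), rng.normal(3, 0.3, (20, 8))])
labels, centers = cfsfdp(feats, dc=1.0, n_clusters=2)
```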
3. A distributed training strategy based on local features is proposed. The purpose of the ganglia differentiation layer is to discover the inherent regularities of the feature maps, distinguish them, and train them separately. In general this is a nearly impossible task, considering that even very deep DNN-based classification methods cannot achieve it at present, let alone when the input to the ganglia differentiation layer has passed through only a shallow encoding. Thus different samples of the same category may activate different ganglia and enter different branches for higher-level feature extraction. We therefore propose a distributed training strategy based on local features: samples of the same category are distributed across different branches to train the corresponding higher-level local features, so that all parts of each sample may be reflected in different branches at test time. Consider the worst case, in which the branched structure cannot distinguish inputs at all; inputs are then fed randomly into the higher-level network, which is equivalent to the input pattern of a traditional neural network. This is an extreme case, and as long as the ganglia layer has some ability to distinguish samples, the learning ability of the model is no worse than that of a traditional neural network.

4. A memory protection method for suppressing noise samples is proposed. A traditional end-to-end deep neural network updates all of its parameters for every new input sample; if the new sample is noisy, the network fits the noise indiscriminately and the existing network is destroyed. In the incremental deep neural network proposed here, a noisy sample is marked as noise in the ganglia differentiation layer and neither enters any cluster nor is used to train higher-level features. Even if a noise sample is not successfully marked, only the local parameters of the corresponding ganglion need to be updated for that sample (see the routing sketch below). The effect of noise is thereby minimized and the robustness of the system is enhanced.
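Contributions 3 and 4 both hinge on routing each encoded sample to a single ganglion and rejecting outliers before they reach any branch. Below is a minimal sketch of such a gate, assuming nearest-center routing with a hypothetical distance threshold `accept_radius`; the thesis does not specify its actual rejection rule.

```python
import numpy as np

def route_to_ganglion(feature, center_features, accept_radius):
    """Route one encoded sample to the nearest ganglion (branch) center;
    return None (noise) if no center lies within accept_radius, so the
    sample never updates any branch and existing memory is protected.
    accept_radius is a hypothetical knob, not specified in the thesis."""
    d = np.linalg.norm(center_features - feature, axis=1)
    k = int(np.argmin(d))
    return None if d[k] > accept_radius else k

# Toy demonstration: two ganglion centers in a 3-d feature space.
centers = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
clean = np.array([0.2, -0.1, 0.1])     # near center 0 -> routed to branch 0
noisy = np.array([20.0, -9.0, 3.0])    # far from both -> rejected as noise

for x in (clean, noisy):
    branch = route_to_ganglion(x, centers, accept_radius=2.0)
    if branch is None:
        print("marked as noise; no parameters updated")
    else:
        print(f"activates ganglion {branch}; update only that branch's weights")
```

Because only the activated branch is trained, a noisy sample that slips past the gate perturbs at most one ganglion's local parameters, which is the memory-protection property described above.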
Keywords/Search Tags: incremental learning, object classification, deep neural networks, computer vision, CFSFDP