
Research On The Neural Network With Sparse RAM And Its Application To Face Recognition

Posted on: 2003-09-10    Degree: Doctor    Type: Dissertation
Country: China    Candidate: H J Peng    Full Text: PDF
GTID: 1118360092475974    Subject: Computer application technology
Abstract/Summary:
Memory-based neural networks, such as the N-tuple neural network (NTNN) and the sparse distributed memory model (SDM), have attracted great attention by virtue of their simple architectures, which allow easy hardware implementation, and their lookup-table algorithms, which allow fast operation. Successful applications in many areas have made them the basis of commercial products. However, the same "weightless" or "RAMnet" architecture yields poor nonlinear mapping ability, and few theoretical analyses of their behavior are available, which limits further applications. This dissertation therefore modifies their structures and algorithms to improve their performance. Extensive analyses of learning ability, comparisons with related models, and face recognition experiments confirm that the generalized models are feasible and effective. They exhibit many desirable properties the original models lack, and their scope of application is expanded from binary pattern recognition (non-binary inputs previously had to be converted to binary strings) to function approximation and gray-level face recognition, all because the novel models handle real-valued vector inputs directly. The main contributions are as follows:

1. We generalize a class of regression NTNNs with RAM and present a novel adaptive pattern recognition system, an N-tuple neural network model with sparse RAM, that can be applied to pattern recognition as well as function approximation. The additional adjustable parameters make the new model flexible, reduce its memory requirement, and curb the NTNN's tendency to saturate. It is, to some extent, a general model in which both the NTNN and the SDM can be regarded as special cases. Experiments demonstrate its ability to approximate functions.

2. An approximate SDM is presented by modifying the original SDM's structure and algorithm while retaining the original's characteristic sparse distribution.
It surpasses the original SDM in scope, since the original applies only to associative memory. Theoretical analysis of the novel SDM shows that its learning ability matches that of CMAC even though the two quantization schemes differ; moreover, it exhibits no block effect and requires no hashing, unlike CMAC. Theoretical analysis and examples show the improved model to be effective and reasonable, with better function approximation performance than CMAC.

3. The single-layer lookup perceptron (SLLUP), like the classical N-tuple classifier, is widely used because of its simple architecture, fast operation, and easy hardware realization. At the same time, this RAM-based architecture requires input samples to be converted to binary vectors, which limits its application to high-dimensional samples. We therefore present an approximate N-tuple model based on sparse RAM that integrates the SLLUP with sparse distributed memory; both the SLLUP and the approximate SDM become special cases of it. This is more than a simple architectural generalization: when the sparse address code is itself a real-valued vector, the novel model can process high-dimensional samples directly, i.e., N-tuple sampling can operate on the input samples themselves, which the SLLUP cannot do. Function approximation experiments demonstrate that, with appropriately selected parameters, the new model outperforms both CMAC and the approximate SDM.

4. Networks such as CMAC, the approximate SDM, the SLLUP, and the approximate N-tuple network based on sparse RAM form a family we title the general memory neural network (GMNN). A GMNN consists of three stages: input-space quantization, a memory address generator, and an output combined from memory lookup operations.
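The three GMNN stages can be sketched in a few lines. This is a minimal illustrative sketch, not the dissertation's design: the uniform-random cell addresses, the nearest-neighbor address decoding, the cell counts, and the learning rate are all assumptions chosen for the toy one-dimensional case.

```python
import numpy as np

class GMNN:
    """Sketch of a general memory neural network: locate the input,
    generate a sparse set of memory addresses, and combine the
    addressed cells into the output by lookup operations."""

    def __init__(self, n_cells=1024, n_active=16, lo=0.0, hi=2 * np.pi, seed=0):
        rng = np.random.default_rng(seed)
        # Hard locations: each memory cell owns a real-valued address
        # (an assumed stand-in for the quantization stage).
        self.centers = rng.uniform(lo, hi, size=n_cells)
        self.memory = np.zeros(n_cells)
        self.n_active = n_active

    def addresses(self, x):
        # Sparse address decoding: activate the n_active cells whose
        # addresses lie closest to the real-valued input.
        return np.argsort(np.abs(self.centers - x))[: self.n_active]

    def predict(self, x):
        # Output = sum of the selected cells (the lookup-table step).
        return self.memory[self.addresses(x)].sum()

    def train(self, x, y, lr=0.2):
        # LMS / delta-rule correction spread over the active cells.
        idx = self.addresses(x)
        err = y - self.memory[idx].sum()
        self.memory[idx] += lr * err / len(idx)

# Toy usage: approximate y = sin(x) on [0, 2*pi].
net = GMNN()
xs = np.linspace(0, 2 * np.pi, 200)
for _ in range(50):
    for x in xs:
        net.train(x, np.sin(x))
```

Because nearby inputs activate overlapping address sets, corrections generalize locally, which is the mechanism the four models above share.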
The essence of its operation, analogous to the kernel method, is that the address selection implements a nonlinear map into a higher-dimensional space, so that better classification or regression performance can be obtained.
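This kernel-like address map can be illustrated with the classical N-tuple sampling step. The sketch below is illustrative only; the pattern size, tuple count, and tuple length are arbitrary assumed values.

```python
import numpy as np

def ntuple_addresses(pattern, positions):
    """Classical N-tuple sampling: each tuple reads a few bits at fixed
    positions of a binary pattern and interprets them as one RAM address.
    The pattern -> addresses map is the nonlinear projection into a
    higher-dimensional (one-hot per RAM) space described above."""
    bits = pattern[positions]                      # (n_tuples, n) sampled bits
    weights = 1 << np.arange(bits.shape[1])[::-1]  # binary place values
    return bits @ weights                          # one address per tuple

rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=64)                   # a 64-bit input pattern
positions = rng.choice(64, size=(8, 4), replace=False)  # 8 tuples of 4 bits
addrs = ntuple_addresses(pattern, positions)
# Recognition then sums the cells addressed in each class's RAMs, which is
# exactly the fast lookup-table operation the abstract describes.
```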
Keywords/Search Tags: N-tuple network, sparse distributed memory, single-layer lookup perceptrons, CMAC, learning convergence, face recognition