In today's rapidly developing information age, information has become increasingly vital to everyone in society, so correctly verifying a person's identity and protecting that person's information is a crucial problem. Recently developed biometric identification systems appear poised to replace traditional means of identification such as keys, passwords, and ID cards, owing to their security, convenience, and fast response. However, most existing biometric identification systems use only a single feature to perform identification; these are known as unimodal biometric systems. Every such system has its own limitations, such as feature-extraction errors, pattern mismatch, and sensor noise, which restrict the practical applications of the technology. To overcome these limitations, a system based on neural networks and data fusion, which combines more than one biometric at the same time into an integrated identity decision and is known as a multimodal identification system, should be adopted. Multimodal recognition is therefore acknowledged as a mainstream research direction for the next generation of biometric personal recognition.

This thesis starts with two unimodal biometric systems, based on face images and speech signals respectively. Building on prior research, a detailed introduction to these two technologies is presented, and several improvements to them are proposed. Based on this work, features are extracted from both face images and speech signals, and an identity recognition system is developed based on perceptron neural networks and a BP (back-propagation) network. In simulation experiments with real data, the fusion system achieves better recognition results than single-feature identity recognition.
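The fusion idea described above can be illustrated with a minimal, self-contained sketch: two modalities are represented as feature vectors, fused by concatenation (feature-level fusion), and classified by a small BP network trained with gradient descent. All data here is synthetic, and the dimensions, person counts, and hyperparameters are illustrative assumptions, not the thesis's actual features or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(n_per_person, dim, n_people, spread):
    """Synthetic per-person feature clusters standing in for a biometric modality."""
    centers = rng.normal(size=(n_people, dim))
    X = np.vstack([c + spread * rng.normal(size=(n_per_person, dim))
                   for c in centers])
    y = np.repeat(np.arange(n_people), n_per_person)
    return X, y

# Two hypothetical modalities ("face" and "speech") with shared identity labels.
n_people, n_per_person = 3, 30
face, y = make_features(n_per_person, 4, n_people, 1.0)
speech, _ = make_features(n_per_person, 4, n_people, 1.0)

def train_bp(X, y, hidden=8, epochs=800, lr=0.3):
    """One-hidden-layer network trained by back-propagation (full-batch GD)."""
    n, d = X.shape
    k = int(y.max()) + 1
    Y = np.eye(k)[y]                          # one-hot targets
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, k))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                   # hidden activations
        Z = H @ W2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)     # softmax outputs
        dZ = (P - Y) / n                      # cross-entropy gradient at output
        dW2 = H.T @ dZ
        dH = dZ @ W2.T * (1 - H**2)           # backpropagate through tanh
        dW1 = X.T @ dH
        W1 -= lr * dW1
        W2 -= lr * dW2
    return W1, W2

def accuracy(X, y, W1, W2):
    scores = np.tanh(X @ W1) @ W2
    return float((scores.argmax(axis=1) == y).mean())

# Feature-level fusion: concatenate the two modalities' feature vectors.
fused = np.hstack([face, speech])
W1, W2 = train_bp(fused, y)
fused_acc = accuracy(fused, y, W1, W2)
```

This is only a sketch of the technique, not the thesis's system: real face and speech features would come from an image and signal processing front end, and evaluation would use held-out test data rather than training accuracy.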