
Cross View Gait Recognition Based On Dynamic Feature

Posted on: 2020-06-19
Degree: Master
Type: Thesis
Country: China
Candidate: W Z An
Full Text: PDF
GTID: 2428330590978665
Subject: Computer technology
Abstract/Summary:
The gait feature is a kind of biometric, with the advantages of long-distance acquisition, non-contact capture, and being hard to camouflage. Along with the steady progress of gait recognition, two main categories of methods have emerged. Appearance-based methods are the most popular in the gait recognition task, and some of them have reached relatively high recognition rates under cross-view variation, cross-clothing variation and so on. Nevertheless, human silhouettes are sensitive to view angles, clothing variations and carrying conditions. The second category is the model-based methods, which model the human body structure and the local movement patterns of different body parts. They are generally robust to cross-view and shape changes. However, most model-based methods need to manually label the data or use specific devices to obtain human joint information, and localizing human body joints accurately is itself a challenging task.

To address these two challenges, we propose a novel gait recognition method, PoseGait, which is robust to clothing and carrying conditions. The 3D pose has a unique advantage under view and other variations, since only body joints are involved and the joints lie in 3D space. Handcrafted temporal-spatial features, derived from the 3D pose using human prior knowledge, are also designed to improve the recognition rate. The method is evaluated on two large datasets, CASIA B and CASIA E. The experimental results show that the proposed method achieves state-of-the-art performance and is robust to view and clothing variations even though only a concise CNN model is used. It is also convincing that model-based gait recognition will improve rapidly as human body modeling improves in the future.

To extract invariant gait features, we propose a method called GaitGANv2, which is based on the generative adversarial network (GAN). In the proposed method, a GAN model is taken as a regressor to generate a canonical side view of a walking gait in normal clothing without a carried bag. A unique advantage of this approach is that, unlike other methods, GaitGANv2 does not need to determine the view angle before generating invariant gait images. Indeed, only one model is needed to account for all possible sources of variation, such as carried accessories and varying view angles. The most important computational challenge, however, is how to retain useful identity information when generating the invariant gait images. GaitGANv2 improves over GaitGANv1 by adopting a multi-loss strategy to optimize the network, increasing the inter-class distance while reducing the intra-class distance. Experimental results show that GaitGANv2 achieves state-of-the-art performance.

To further improve gait recognition under occlusion variation, and to extract dynamic temporal features more effectively, this thesis proposes to use the correlation between frames to improve the gait recognition algorithm. Unlike the temporal feature extraction approaches currently adopted by other methods, the correlation between the frames of the input sequence is used as the temporal feature, and redundant inter-frame correlation can also be reduced. To verify the validity of the method, we also test on the CASIA B gait dataset and the OU-ISIR dataset. Comparison with the experimental results of other gait methods shows that this method effectively improves gait recognition performance.
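The handcrafted temporal-spatial features from 3D pose can be illustrated with a minimal sketch. The thesis does not specify the exact feature set, so the helper names and the particular features below (joint angles within a frame, per-joint displacements between frames) are illustrative assumptions:

```python
import math

def joint_angle(a, b, c):
    """Static posture feature: angle (radians) at joint b formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.acos(dot / (n1 * n2))

def joint_motion(joints_t, joints_t1):
    """Dynamic feature: per-joint displacement vectors between consecutive frames."""
    return [[b[i] - a[i] for i in range(3)]
            for a, b in zip(joints_t, joints_t1)]

# Example: a right angle at the knee (hip above, ankle forward).
hip, knee, ankle = (0, 1, 0), (0, 0, 0), (1, 0, 0)
print(joint_angle(hip, knee, ankle))  # pi/2 ≈ 1.5708
```

Concatenating such per-frame angles and inter-frame displacements over a sequence yields a temporal-spatial descriptor that depends only on joint positions, which is why it is comparatively insensitive to clothing and carried objects.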
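The multi-loss idea in GaitGANv2, increasing inter-class distance while reducing intra-class distance, can be sketched as a triplet-style term. This is a simplified, hypothetical formulation for illustration only; the actual network combines such identity-preserving terms with its adversarial losses:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identity_preserving_loss(anchor, positive, negative, margin=1.0):
    """Pull same-subject embeddings together (intra-class distance) and push
    different-subject embeddings at least `margin` apart (inter-class distance)."""
    intra = euclidean(anchor, positive)   # same subject, different condition
    inter = euclidean(anchor, negative)   # different subject
    return intra + max(0.0, margin - inter)

# Zero loss when same-subject pairs coincide and the impostor is far away.
print(identity_preserving_loss([0.0, 0.0], [0.0, 0.0], [5.0, 0.0]))  # 0.0
```

Minimizing such a term during generation is one way to keep identity information in the synthesized canonical-view images rather than producing an "average" walker.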
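The inter-frame correlation used as a temporal feature can be sketched as follows. The thesis does not state which correlation measure is used, so Pearson correlation over flattened per-frame feature vectors is an assumption here:

```python
import math

def pearson(x, y):
    """Pearson correlation between two flattened frame feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def inter_frame_correlation(sequence):
    """Correlation of each consecutive frame pair, used as a temporal feature.

    `sequence` is a list of per-frame feature vectors; the output has one
    correlation value per adjacent frame pair.
    """
    return [pearson(sequence[t], sequence[t + 1])
            for t in range(len(sequence) - 1)]
```

A usage example: for a sequence whose second frame repeats the first and whose third reverses it, `inter_frame_correlation([[0, 1, 2], [0, 1, 2], [2, 1, 0]])` gives one value near 1.0 (identical motion) followed by one near -1.0, so the descriptor captures how the silhouette changes over time rather than what it looks like in any single frame.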
Keywords/Search Tags: Biometric Feature, Gait Recognition, Temporal Feature