
Research on Human Action Recognition Based on Space-Time Interest Points and the Bag-of-Words Model

Posted on: 2018-04-19    Degree: Master    Type: Thesis
Country: China    Candidate: F Wu    Full Text: PDF
GTID: 2348330518993017    Subject: Control Science and Engineering
Abstract/Summary:
Human action recognition has recently been applied in fields such as human-computer interaction, intelligent surveillance, and virtual reality. Because it is insensitive to background noise, computationally inexpensive, and highly robust, the bag-of-words model based on space-time interest points has been widely researched. The visual dictionary is essential for action recognition with the bag-of-words model, but traditional information gain, when used to select features for the visual dictionary, reduces recognition accuracy because it ignores the influence of term frequency. In addition, the original video frames contain many redundant human-action features. Selecting key frames from the initial video, frames that retain the key features of the action while discarding redundant information, is therefore significant for action recognition.

We extracted space-time interest points with the 3D Harris detector, described the interest points with HOG3D and HOF feature descriptors, and reduced the descriptor dimensionality with PCA. We improved information gain by introducing term frequency, which traditional information gain does not consider, and used the improved measure to construct the visual dictionary. We also proposed a key-frame selection algorithm based on discrete particle swarm optimization, in which the cosine of the angle between frame feature vectors serves as the evaluation criterion for key-frame selection.

We verified the proposed method in Matlab 2014b on two human action recognition datasets. The results show that the visual dictionary built with improved information gain selects the most discriminative visual words and improves recognition accuracy, and that the proposed key-frame selection algorithm reduces the number of video frames while maintaining recognition accuracy. With our method, the recognition accuracy is 89.1% on the KTH action dataset and 98.89% on the Weizmann action dataset.
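To illustrate the bag-of-words representation the abstract builds on, here is a minimal Python sketch: descriptors (in practice the PCA-reduced HOG3D/HOF vectors) are clustered into visual words, and each video becomes a normalized histogram of word occurrences. The thesis does not state its clustering method; k-means is assumed here as the conventional choice, and `kmeans`, `bow_histogram`, and the toy 2-D descriptors are illustrative names and data.

```python
import random

def kmeans(descriptors, k, iters=20, seed=0):
    """Tiny k-means: cluster centres act as the visual words of the dictionary."""
    rng = random.Random(seed)
    centres = rng.sample(descriptors, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for d in descriptors:
            # assign each descriptor to its nearest centre (squared Euclidean)
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(d, centres[i])))
            buckets[j].append(d)
        for i, b in enumerate(buckets):
            if b:  # recompute centre as the mean of its assigned descriptors
                centres[i] = [sum(col) / len(b) for col in zip(*b)]
    return centres

def bow_histogram(descriptors, centres):
    """Quantize each interest-point descriptor to its nearest visual word and
    count occurrences; the normalized histogram represents one video."""
    hist = [0] * len(centres)
    for d in descriptors:
        j = min(range(len(centres)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(d, centres[i])))
        hist[j] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# Toy example: four 2-D "descriptors" forming two clusters, dictionary size 2.
descs = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]]
dictionary = kmeans(descs, 2)
hist = bow_histogram(descs, dictionary)
```

In the real pipeline each descriptor would be the PCA-reduced feature of one space-time interest point, and one histogram is computed per video clip.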
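The abstract's improved information gain can be sketched as follows: classic information gain scores a visual word only by presence or absence across videos, and the sketch below weights that score by the word's normalized term frequency, so frequent discriminative words rank higher. The exact weighting used in the thesis is not given, so this multiplicative form, the function names, and the toy counts are assumptions.

```python
import math

def information_gain(doc_has_word, labels):
    """Classic presence/absence information gain of one visual word.
    doc_has_word: list of bools per video; labels: class label per video."""
    n = len(labels)
    classes = set(labels)
    def entropy(subset):
        if not subset:
            return 0.0
        h = 0.0
        for c in classes:
            p = sum(1 for y in subset if y == c) / len(subset)
            if p > 0:
                h -= p * math.log2(p)
        return h
    with_w = [y for has, y in zip(doc_has_word, labels) if has]
    without = [y for has, y in zip(doc_has_word, labels) if not has]
    return (entropy(labels)
            - len(with_w) / n * entropy(with_w)
            - len(without) / n * entropy(without))

def tf_weighted_ig(word_counts, labels):
    """Hypothetical term-frequency-weighted IG: scale the classic IG by the
    word's average normalized count over the videos that contain it."""
    present = [c for c in word_counts if c > 0]
    tf = (sum(present) / len(present)) / max(word_counts) if present else 0.0
    return tf * information_gain([c > 0 for c in word_counts], labels)

# Toy counts of two candidate words over four videos (classes A, A, B, B):
counts_per_word = {
    "w1": [5, 4, 0, 0],  # frequent in class A only -> discriminative
    "w2": [1, 0, 1, 0],  # rare and spread evenly across classes
}
labels = ["A", "A", "B", "B"]
scores = {w: tf_weighted_ig(c, labels) for w, c in counts_per_word.items()}
best = max(scores, key=scores.get)
```

Ranking all candidate words by this score and keeping the top-k yields the visual dictionary.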
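The key-frame selection step can be sketched as a binary particle swarm: each particle is a 0/1 mask over the video's frames, and, following the abstract's criterion, the fitness uses the cosine of the angle between frame feature vectors, here minimizing average pairwise cosine similarity so the selected frames are maximally diverse. The thesis's exact fitness, parameters, and repair rule are not given; everything below is an illustrative sketch under those assumptions.

```python
import math
import random

def cosine(u, v):
    """Cosine of the angle between two frame feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fitness(mask, frames):
    """Lower average pairwise cosine similarity among selected frames means a
    more diverse, less redundant key-frame set (we minimize this)."""
    sel = [f for m, f in zip(mask, frames) if m]
    if len(sel) < 2:
        return 1.0  # degenerate selections score worst
    sims = [cosine(sel[i], sel[j])
            for i in range(len(sel)) for j in range(i + 1, len(sel))]
    return sum(sims) / len(sims)

def dpso_key_frames(frames, k, particles=20, iters=50, seed=0):
    """Binary PSO: sigmoid of the velocity gives the probability that each
    frame bit is set; a repair step keeps exactly k frames selected."""
    rng = random.Random(seed)
    n = len(frames)
    def random_mask():
        idx = set(rng.sample(range(n), k))
        return [1 if i in idx else 0 for i in range(n)]
    pos = [random_mask() for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p, frames) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # assumed inertia and acceleration weights
    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            # repair: enforce exactly k selected frames
            on = [d for d in range(n) if pos[i][d]]
            if len(on) > k:
                for d in rng.sample(on, len(on) - k):
                    pos[i][d] = 0
            elif len(on) < k:
                off = [d for d in range(n) if not pos[i][d]]
                for d in rng.sample(off, k - len(on)):
                    pos[i][d] = 1
            f = fitness(pos[i], frames)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return [d for d in range(n) if gbest[d]]

# Toy frame features: two near-duplicate pairs; selecting k=2 should pick
# one frame from each pair.
frames = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]]
selected = dpso_key_frames(frames, k=2)
```

In practice each frame vector would be that frame's bag-of-words histogram or descriptor, and the selected indices are the key frames passed to recognition.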
Keywords/Search Tags: Action Recognition, Space-Time Interest Point, Bag-of-Words Model, Information Gain, Key Frames