
A Research Of Human Action Recognition Of Depth Maps Based On Space-Time Interest Points

Posted on: 2015-04-16    Degree: Master    Type: Thesis
Country: China    Candidate: Q Xiao    Full Text: PDF
GTID: 2298330422490109    Subject: Electronics and Communications Engineering
Abstract/Summary:
With the development of computer science, more and more intelligent devices are entering people's ordinary lives. The traditional interaction mode based on the keyboard, mouse, and other legacy devices can no longer meet the needs of human-computer interaction, which is expected to become more natural, convenient, and intelligent. Somatosensory interaction, as a very natural form of human-computer interaction, has broad prospects in somatosensory games, control of mobile terminals, and other aspects of computer control.

Action recognition is a significant research area in human-computer interaction (HCI); it recognizes human actions through the analysis of motion and posture and has been extensively used in augmented reality and somatosensory interaction. Conventional research on action recognition has mainly focused on extracting features from RGB cameras, which makes human detection and tracking difficult. Such approaches achieve relatively low recognition accuracy under complex backgrounds and varying illumination. We propose a novel approach to action recognition based on depth images, implemented by fusing temporal and spatial information as well as local and global features.

Action recognition based on space-time interest points can be divided into the following stages: image preprocessing, interest point detection, feature descriptor extraction, feature representation, and action recognition. According to the choice of feature representation, this thesis carries out in-depth studies of both the bag-of-words model and the bag-of-features model.

Based on the bag-of-words model, we first apply the HOG3D feature to depth maps. Because the vocabulary in the bag-of-words model is unordered, it loses the position information of the interest points, which makes it unable to describe the human action globally. We therefore propose a recognition framework that fuses local and global features, exploiting their complementarity to greatly improve the reliability of the recognition system.

Based on the bag-of-features model, we first normalize the space-time interest points according to the position of the human skeleton and build a high-dimensional index tree. We then search for neighbors using the normalized locations of the interest points and compare their local features. The proposed framework has the following advantages: recognition of multiple actions, fusion of multiple features, frame-by-frame recognition, incremental learning of new action samples, and use of the position information of space-time interest points to improve performance.

The proposed approach is tested on the publicly available MSRAction3D dataset, demonstrating its advantages and state-of-the-art performance. The results of this study can be applied to a variety of human-computer interaction scenarios.
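To make the bag-of-words branch described above concrete, the following is a minimal Python sketch (not the thesis code) of a pipeline that clusters HOG3D-like local descriptors into a visual vocabulary, builds per-video word histograms, and fuses them with a global feature before classification. The descriptor extraction step and the concatenation-based fusion are assumptions introduced for illustration only.

    # Sketch: bag-of-words on depth-map interest point descriptors, fused with a
    # global feature and classified with an SVM. Inputs are assumed to be
    # pre-extracted: one (n_points, dim) descriptor array and one global feature
    # vector per video. The fusion-by-concatenation scheme is an assumption.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_vocabulary(descriptor_arrays, n_words=200, seed=0):
        """Cluster all training descriptors into a visual vocabulary."""
        return KMeans(n_clusters=n_words, random_state=seed).fit(np.vstack(descriptor_arrays))

    def bow_histogram(descriptors, vocabulary):
        """Quantize one video's descriptors into a normalized word histogram."""
        words = vocabulary.predict(descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def fused_feature(descriptors, global_feature, vocabulary):
        """Concatenate the local BoW histogram with a global sequence descriptor."""
        return np.concatenate([bow_histogram(descriptors, vocabulary), global_feature])

    def train(train_descriptors, train_globals, train_labels, n_words=200):
        """Fit the vocabulary and an RBF-kernel SVM on the fused features."""
        vocab = build_vocabulary(train_descriptors, n_words)
        X = np.stack([fused_feature(d, g, vocab)
                      for d, g in zip(train_descriptors, train_globals)])
        return vocab, SVC(kernel="rbf").fit(X, train_labels)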
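The bag-of-features branch can likewise be sketched as skeleton-normalized interest point locations indexed in a tree and matched by nearest-neighbor voting. The hip-center normalization, the KD-tree choice, and the majority-vote rule below are illustrative assumptions; the thesis's exact tree structure and distance weighting are not specified in the abstract.

    # Sketch: normalize interest point coordinates by a skeleton reference joint,
    # index (location, descriptor) vectors in a KD-tree built from training data,
    # and label a test sequence by majority vote over nearest training neighbors.
    # Coordinates and descriptors are combined without rescaling, a simplification.
    import numpy as np
    from scipy.spatial import cKDTree

    def normalize_points(points_xyzt, hip_center):
        """Shift spatial coordinates so they are relative to the hip-center joint."""
        pts = points_xyzt.copy()
        pts[:, :3] -= hip_center          # spatial normalization; time axis unchanged
        return pts

    def build_index(norm_points, descriptors):
        """Index concatenated (normalized location, local descriptor) vectors."""
        return cKDTree(np.hstack([norm_points, descriptors]))

    def classify(tree, train_point_labels, query_points, query_descriptors, k=5):
        """Vote over the k nearest training interest points (labels as an int array)."""
        queries = np.hstack([query_points, query_descriptors])
        _, idx = tree.query(queries, k=k)
        votes = train_point_labels[idx].ravel()
        return np.bincount(votes).argmax()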
Keywords/Search Tags: action recognition, depth maps, fusion of multiple features, high-dimensional index tree