
Human Action Recognition Algorithm Based On "Multi-view"

Posted on: 2016-10-24
Degree: Master
Type: Thesis
Country: China
Candidate: J M Song
Full Text: PDF
GTID: 2308330461983628
Subject: Computer application technology
Abstract/Summary:
In computer vision, human action recognition is a challenging problem, and many researchers have put considerable effort into developing methods to tackle it. However, the majority of human action recognition methods are based on a single feature, a single view, or a single modality. These methods are vulnerable to changes in illumination, occlusion, shadows, and other environmental factors, which degrades recognition performance. In recent years, visual data acquisition has become increasingly abundant thanks to advances in visual surveillance and sensor technology. With the development of machine learning and hardware technology, research has increasingly focused on multi-view action recognition as a way to overcome the drawbacks of traditional algorithms.

To address these drawbacks, this dissertation proposes a novel multi-view human action recognition method. The main contributions are as follows:

(1) A novel algorithm based on multiple temporal-spatial features is defined to describe actions. Because it uses a smaller codebook, it exhibits better performance and guarantees real-time operation.

(2) To describe human actions in depth data, we propose a new method called dense spatio-temporal points, which combines optical flow and trajectory tracking to extract interest points from depth data and uses HOG/HOF descriptors for action description. Experimental results show that our method outperforms state-of-the-art methods.

(3) We produce a new dataset that includes both RGB and depth data, run various evaluations on this dataset and the IXMAS dataset, and discuss the influence of multi-view data on action recognition from different viewpoints.

Experiments on the KTH, YouTube Action, DHA, MSR Action3D, and UTKinect datasets show that our methods are more robust, discriminative, and stable.
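As an illustrative aside (not the thesis's actual implementation), the HOG-style descriptors named in contribution (2) boil down to orientation histograms of image gradients, weighted by gradient magnitude. A minimal single-cell sketch in NumPy, with a hypothetical function name and bin count, might look like:

```python
import numpy as np

def hog_like_descriptor(patch, n_bins=8):
    """Minimal HOG-style cell descriptor (illustrative sketch only):
    quantize gradient orientations of a 2-D patch into n_bins,
    weight each pixel's vote by its gradient magnitude, L2-normalize."""
    gy, gx = np.gradient(patch.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                             # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)        # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A full HOG/HOF pipeline would tile the spatio-temporal neighborhood of each interest point into many such cells (HOF replaces image gradients with optical-flow vectors) and concatenate the histograms, but the per-cell computation above is the core idea.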
Keywords/Search Tags:Human action recognition, Depth information, Kinect, RGBD, Feature fusion, Multi-temporal-space interest points