
Research On Key Technologies Of Video-Based Human Behavior Recognition

Posted on: 2015-06-14    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C L Yu    Full Text: PDF
GTID: 1108330422492419    Subject: Computer application technology
Abstract/Summary:
Video-based human behavior recognition is one of the most active research directions in video analysis and understanding, with broad application prospects in human-computer interaction, video surveillance, virtual reality, and motion analysis. Its main research content covers feature representation and extraction, feature fusion, and behavior classification, and the aim of this work is to use existing computer technology to give machines the ability to identify, analyze, and understand human behavior as humans do. Although video-based human behavior recognition has made considerable progress, open problems remain, such as how to obtain human behavior features efficiently and accurately and how to reduce the dimensionality of the behavior representation. To address these problems, and considering that combining multiple features can compensate for the limited descriptive accuracy and poor robustness of any single feature, this thesis explores methods that automatically fuse static and dynamic features extracted from video and exploit the contextual relevance of consecutive frames to improve recognition performance.

Taking human behavior as the object of study, the thesis focuses on feature representation, feature fusion, and behavior classification in video. In particular, it investigates a static and dynamic feature fusion algorithm over consecutive frames, a human behavior feature extraction algorithm based on grid quantizing, video-based depth feature extraction, and abnormal human behavior recognition.

First, the thesis proposes a depth feature extraction method for human behavior based on the human shape and motion vector. The method is simple to implement and does not require camera calibration or the computation of intrinsic and extrinsic parameters when the depth features are generated. Because of noise and the small number of human motion key points, some values are missing from the disparity map; to estimate them, a repair method based on the average value along the edge is proposed. Features extracted by this method are compared with the actual depth features of the DHA dataset, and the results show that the two achieve roughly the same recognition performance, so the method attains a good recognition rate even when no depth acquisition device is available.

Second, to address the limited discriminability of features across spatio-temporal spaces, the thesis proposes a static and dynamic feature fusion algorithm for human behavior over consecutive frames. Three types of features are first selected within a spatio-temporal feature extraction framework to describe human behavior: a static invariant moment descriptor of the action contour, which is invariant to scale, translation, and rotation; a shape feature representing global and local action characteristics; and an optical flow feature representing the dynamic changes of the action.
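A minimal sketch of how the static invariant moment descriptor and the dense optical flow feature described above might be computed is given below, using standard OpenCV routines; the thresholds, flow parameters, and function names are illustrative assumptions rather than the thesis's exact settings.

```python
# Hedged sketch: one conventional way to obtain the static (contour invariant
# moments) and dynamic (optical flow) features described above, using OpenCV.
import cv2
import numpy as np

def static_invariant_moments(silhouette):
    """Hu moments of a binary action silhouette (scale/translation/rotation invariant)."""
    m = cv2.moments(silhouette, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log-scale the moments so their magnitudes are comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def dynamic_optical_flow(prev_gray, curr_gray):
    """Dense optical flow between two consecutive grayscale frames (Farneback)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude, angle
```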
Then, since the current frame and its neighboring frames share a certain contextual relevance, a weighted averaging method is used to fuse the shape radial histogram and the optical flow radial histogram of the current frame with those of its neighboring frames. This not only enhances the descriptive power of the behavioral features in the space-time domain but also effectively reduces the adverse impact of distorted interest points on target recognition. To integrate action features across different space-time domains, reduce the redundant information between them, and lower the dimensionality of the feature space, the K-L transform is used to fuse the shape radial histogram and the optical flow radial histogram. The results show that the three feature types and the continuous-frame fusion method improve the discriminability of human behavior and increase the recognition rate at the same order of feature dimensionality.

Next, the thesis proposes a human behavior feature extraction algorithm based on grid quantizing. The method removes redundant information and effectively reduces the dimensionality of the feature space, and it enables rapid classification of human behaviors using key frames and the DTW method.

Finally, an improved abnormal human behavior recognition algorithm for complex environments is presented, based on encoded optical flow features and an MRF model. The optical flow angle feature is first encoded to obtain a bag of visual words describing human motion. Each frame of the video sequence is then divided into blocks that serve as nodes of the MRF model, and these blocks are further segmented into smaller sub-blocks to obtain the feature descriptor of each block. Finally, combining the spatio-temporal characteristics of the video, the energy function of the MRF model is evaluated to decide whether abnormal human behavior is present.
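As a hedged illustration of the optical flow angle encoding used in the abnormal behavior detection summarized above, the sketch below quantizes flow angles into a small vocabulary of motion "visual words" and builds a per-block histogram that could serve as an MRF node descriptor; the vocabulary size, block size, and function names are assumptions for illustration only.

```python
# Hedged sketch: encode optical flow angles as discrete motion "visual words"
# and summarize each block of the frame with a word histogram.
import numpy as np

def encode_flow_angles(angle, magnitude, n_words=8, min_magnitude=0.5):
    """Quantize flow angles (radians) into n_words discrete visual words;
    pixels with negligible motion are marked with -1 and ignored later."""
    words = np.floor(angle / (2 * np.pi) * n_words).astype(int) % n_words
    words[magnitude < min_magnitude] = -1
    return words

def block_histograms(words, block_size=16, n_words=8):
    """Per-block normalized histogram of visual words; each block is one node descriptor."""
    h, w = words.shape
    descriptors = {}
    for by in range(0, h - block_size + 1, block_size):
        for bx in range(0, w - block_size + 1, block_size):
            block = words[by:by + block_size, bx:bx + block_size].ravel()
            hist = np.bincount(block[block >= 0], minlength=n_words).astype(float)
            total = hist.sum()
            descriptors[(by, bx)] = hist / total if total > 0 else hist
    return descriptors
```

Similarly, the following sketch illustrates the two fusion ideas summarized earlier in the abstract: weighted averaging of a per-frame radial histogram over a window of consecutive frames, and a K-L transform (principal component) projection that fuses the shape and optical flow descriptors while reducing dimensionality. The window length, weights, and names are assumptions, not the thesis's exact formulation.

```python
# Hedged sketch: continuous-frame weighted averaging and K-L transform fusion.
import numpy as np

def weighted_average_over_frames(histograms, weights=None):
    """Fuse the current frame's histogram with its neighbors by weighted averaging.
    `histograms`: (n_frames, n_bins) array for the current frame and its context."""
    histograms = np.asarray(histograms, dtype=float)
    if weights is None:
        weights = np.ones(len(histograms))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return weights @ histograms

def kl_transform_fuse(shape_feats, flow_feats, n_components):
    """Concatenate the two descriptors over all samples, then project onto the
    leading eigenvectors of the covariance matrix (K-L transform / PCA)."""
    X = np.hstack([shape_feats, flow_feats])       # (n_samples, d1 + d2)
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return X @ eigvecs[:, order]                   # fused, lower-dimensional features
```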
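Both sketches are intended only to make the abstract's method descriptions concrete; the thesis's actual descriptors, vocabulary construction, and fusion weights may differ.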
Keywords/Search Tags: depth feature, feature extraction, feature fusion, grid quantizing, human behavior recognition