
Behavioral Image Region Based Action Recognition Research

Posted on: 2017-05-13  Degree: Master  Type: Thesis
Country: China  Candidate: J J Shen  Full Text: PDF
GTID: 2348330503482444  Subject: Electronic and communication engineering
Abstract/Summary:
Action recognition in still images has become an active research topic in computer vision and pattern recognition in recent years. Because some action categories can be depicted clearly in a single image, this work focuses on identifying behavior from individual images. We construct a new dataset containing seven common daily actions, and this thesis studies action recognition in still images based on behavior image regions.

First, this thesis studies an action recognition algorithm based on the deformable part model (DPM). Whereas traditional action recognition methods typically use the bounding box of the whole human body, we select only the region associated with the behavior, called the behavior image region. According to the behavior characteristics and the sample distribution, we set the number of model parts and the behavior viewpoint, and we discriminatively train the action models. By considering the recall and precision of each action model under different thresholds, we choose the best threshold for each model. Among the bounding boxes that a behavior model predicts on an image, we take the highest score as the final action representation.

Second, this thesis studies an action recognition algorithm based on weighted fusion with the behavior core region. We select only the essential part of the behavior, called the behavior image core region, and again use the deformable part model to train the corresponding action models. The image matching scores of the two kinds of action models are combined by weighted fusion as the final action representation, so that the models capture both local structural information and global information.

Finally, this thesis studies an action recognition algorithm based on a distributed representation of the behavior image region. We adopt a new notion of parts, called poselets, constructed to be tightly clustered both in the configuration space of keypoints and in the appearance space of image patches. We divide the human body into finer parts, use keypoint annotations to search for image patches with a similar configuration for each poselet, and train the action models with a linear SVM. Each poselet describes the behavior from a different angle, so the more poselets there are, the more comprehensive the description of the behavior. We sum the image matching scores of the poselets that meet the threshold condition and use this sum as the final action representation.
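The following is a minimal sketch of the three score-combination strategies summarized above: taking the highest DPM detection score as the action representation, weighted fusion of the behavior-image-region and behavior-core-region scores, and summing the poselet scores that pass a threshold. The function names, the fusion weight alpha, the thresholds, and all example score values are illustrative assumptions; the abstract does not specify the thesis's actual parameters or code.

```python
# Sketch of the score-combination steps described in the abstract.
# All names, weights, thresholds, and score values are hypothetical.

def best_region_score(detection_scores):
    """Behavior-image-region model: keep the highest-scoring DPM
    detection on the image as the action representation."""
    return max(detection_scores) if detection_scores else float("-inf")

def fused_core_region_score(region_score, core_score, alpha=0.5):
    """Weighted fusion of the behavior-image-region score (global
    information) and the behavior-core-region score (local structure).
    The weight alpha is a placeholder value."""
    return alpha * region_score + (1.0 - alpha) * core_score

def poselet_vote_score(poselet_scores, threshold=0.0):
    """Distributed poselet representation: sum the matching scores of
    the poselets that meet the threshold condition."""
    return sum(s for s in poselet_scores if s >= threshold)

if __name__ == "__main__":
    # Hypothetical DPM detection scores of one action model on a test image.
    dpm_scores = [-0.8, 0.3, 1.2, 0.7]
    region = best_region_score(dpm_scores)

    # Hypothetical behavior-core-region model score on the same image.
    fused = fused_core_region_score(region, core_score=0.9, alpha=0.6)

    # Hypothetical linear-SVM matching scores of individual poselets.
    voted = poselet_vote_score([0.4, -0.1, 0.8, 0.05], threshold=0.1)

    print(region, fused, voted)
```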
Keywords/Search Tags:action recognition, behavior image region, behavior image core region, deformable part model, poselet