
Group Detection And Contextual Feature Extraction Of Multiple Human Interaction Behavior

Posted on: 2017-06-19
Degree: Master
Type: Thesis
Country: China
Candidate: Q Chen
Full Text: PDF
GTID: 2348330488497033
Subject: Signal and Information Processing
Abstract/Summary:
Collective activity recognition is a complex and challenging topic that has gained increasing attention in the computer vision community. Most existing approaches treat collective activity recognition as a single activity performed by most of the people visible in a scene. In real-world scenarios, however, a scene may contain more than one group, with each group exhibiting a specific activity and serving as context for the others. Based on this observation, this thesis presents a novel and efficient framework for collective activity analysis: first, we detect all interacting groups in the scene; second, we propose a novel group-activity contextual descriptor; finally, a structural model jointly captures each group's activity and its activity relationships with neighboring groups.

In the group detection stage, the people in a scene are intuitively represented by an undirected graph whose vertices are people and whose edges are weighted by how strongly the two people interact. The degree of interaction is quantified using the Social Distance Model (SDM) for stationary people and the Social Force Model (SFM) for moving people. On this undirected weighted graph, we then propose a new method for discovering interacting groups, inspired by the Split-Merge algorithm from the field of image segmentation. Grouping the people in the scene isolates the groups engaged in the dominant activity, effectively eliminating dataset contamination.

Using the discovered interacting groups, we propose a novel approach that models both the intra-group and inter-group behavior interactions among groups in the scene. Because context information has been widely exploited for recognizing collective activities, we create a view-invariant contextual descriptor for each group.
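The group detection stage can be sketched as below. The weighting function and the grouping threshold here are illustrative stand-ins for the thesis's actual SDM/SFM formulations and Split-Merge procedure; a distance-decay term plays the role of the SDM for stationary people, a heading-alignment term plays the role of the SFM for moving people, and thresholded connected components stand in for the split-merge grouping:

```python
import numpy as np

def interaction_weight(p_i, p_j, v_i, v_j, sigma_d=2.0):
    """Toy pairwise interaction weight (illustrative, not the thesis's models).
    Stationary pairs: SDM-like weight that decays with distance.
    Moving pairs: SFM-like weight gated by how aligned their headings are."""
    d = np.linalg.norm(p_i - p_j)
    if np.linalg.norm(v_i) < 0.1 and np.linalg.norm(v_j) < 0.1:
        return np.exp(-d**2 / (2 * sigma_d**2))
    ui = v_i / (np.linalg.norm(v_i) + 1e-8)
    uj = v_j / (np.linalg.norm(v_j) + 1e-8)
    align = max(0.0, float(ui @ uj))  # 1 when moving in the same direction
    return align * np.exp(-d**2 / (2 * sigma_d**2))

def detect_groups(positions, velocities, threshold=0.3):
    """Connect people whose interaction weight exceeds a threshold and
    return the connected components as interacting groups (union-find)."""
    n = len(positions)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            w = interaction_weight(positions[i], positions[j],
                                   velocities[i], velocities[j])
            if w > threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

For example, four stationary people forming two spatially separated pairs would be split into two interacting groups, since the cross-pair weights fall below the threshold.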
In addition, by introducing the intra-group and inter-group context descriptors, we propose a unified structural model that jointly captures group motion information and the various context features. Finally, a greedy forward search is used to optimally label the activities in the test scene.

The proposed framework is evaluated on two public datasets, both for its ability to discover interacting groups and for group activity recognition. The results of both steps show that our method outperforms state-of-the-art methods in group discovery and achieves recognition rates comparable to state-of-the-art methods in group activity recognition.
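The greedy forward search over group-activity labels can be sketched as follows; the `score` function here is a hypothetical placeholder for the thesis's structural model, which would combine group motion and context features:

```python
from itertools import product

def greedy_forward_search(groups, labels, score):
    """Greedily build a labeling: at each step, commit the single
    (group, label) assignment that maximizes the joint score of the
    partial labeling, until every group is labeled."""
    assignment = {}
    unassigned = set(groups)
    while unassigned:
        best_pair, best_score = None, float("-inf")
        for g, lab in product(sorted(unassigned), labels):
            trial = dict(assignment)
            trial[g] = lab
            s = score(trial)
            if s > best_score:
                best_score, best_pair = s, (g, lab)
        g, lab = best_pair
        assignment[g] = lab
        unassigned.remove(g)
    return assignment

# Hypothetical per-group label preferences standing in for the model's score.
unary = {"g1": {"walk": 2.0, "talk": 0.0},
         "g2": {"walk": 0.0, "talk": 1.0}}

def toy_score(trial):
    return sum(unary[g][lab] for g, lab in trial.items())
```

With this toy score, the search first commits the highest-scoring assignment (`g1 -> walk`) and then labels the remaining group, yielding `{"g1": "walk", "g2": "talk"}`.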
Keywords/Search Tags: Collective activity, Structured model, Contextual feature, Group detection, Split-Merge, Social Distance Model, Social Force Model