
Motion Data Segmentation And Applications

Posted on: 2019-03-04  Degree: Doctor  Type: Dissertation
Country: China  Candidate: Q Yuan  Full Text: PDF
GTID: 1368330566487048  Subject: Computer Science and Technology
Abstract/Summary:
With the progress of sensor technology, many kinds of dynamic data have become much easier to acquire than before. At the same time, understanding and segmenting such data has become a hot topic in computer graphics and computer vision. In this thesis, we study motion segmentation methods for dynamic data and their applications. Specifically, three topics are investigated: motion object segmentation in images and video, near-rigid co-segmentation of articulated point cloud sequences, and a new stop motion production system built on top of motion segmentation in video and images.

We propose a local-to-global approach to co-segment point cloud sequences of articulated objects into near-rigid moving parts. Our method starts from a per-frame point clustering derived from a robust voting-based trajectory analysis. The local segments are then progressively propagated to neighboring frames with a cut propagation operation, and further merged across all frames using a novel space-time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence.

Building on fully convolutional network (FCN) based video motion segmentation methods, we introduce a hand detection and precise segmentation method for video, aimed at a specific application scenario. We first train an FCN on a large dataset of labeled hand images and use this network to produce a rough segmentation of the hands in an image. Second, taking the rough segmentation as input, we build a conditional random field (CRF) that encodes the similarity between adjacent pixels and use this information to obtain an accurate hand region in the image.

Based on this hand detection and segmentation method for videos and images, we introduce a new way to produce flying-style stop motion animation. First, we present a hand-swapping way of capturing the original motion video: the animator directly manipulates the moving objects by hand and swaps hands at key instants. Second, we split the captured video into a set of hand-swapping events and automatically select a pair of key frames such that they complement each other with the information occluded by the hands. Finally, we use an optical flow based propagation method to synthesize a key frame from each key frame pair.

To show the effectiveness of the proposed algorithms, we have performed a variety of experiments. For the co-segmentation work, our results show that this progressive propagation and merging, in both the space and time dimensions, makes our co-segmentation algorithm especially robust against the noise, occlusions, and pose/view variations that are usually associated with raw scan data. Experiments also demonstrate the ability of our FCN + CRF method to segment hands in varied situations, and we have built a large hand segmentation dataset with ground-truth labeling. For the new stop motion method, the experiments show that our system can produce high quality animations even for an amateur.
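The abstract does not spell out the details of the voting-based trajectory analysis used for the per-frame clustering. As an illustration only, the minimal sketch below groups point trajectories into near-rigid parts with a simpler pairwise-distance rigidity criterion (a stand-in, not the thesis's voting scheme); the names near_rigid_clusters, trajectories, and n_parts are hypothetical, and the code assumes NumPy and scikit-learn.

import numpy as np
from sklearn.cluster import SpectralClustering

def near_rigid_clusters(trajectories, n_parts, sigma=0.05):
    """Cluster point trajectories into near-rigid groups.

    trajectories: (F, N, 3) array -- N points tracked over F frames.
    Points on the same rigid part keep roughly constant pairwise
    distances over time, so the deviation of each pairwise distance
    serves as a dissimilarity, turned into a Gaussian affinity for
    spectral clustering.
    """
    F, N, _ = trajectories.shape
    # Pairwise point distances in every frame: shape (F, N, N).
    diffs = trajectories[:, :, None, :] - trajectories[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Rigidity violation: how much each pairwise distance varies over time.
    violation = dists.std(axis=0)
    affinity = np.exp(-(violation ** 2) / (2.0 * sigma ** 2))
    labels = SpectralClustering(
        n_clusters=n_parts, affinity="precomputed",
        assign_labels="discretize", random_state=0,
    ).fit_predict(affinity)
    return labels

In the thesis's pipeline such per-frame labels would only be a starting point; the cut propagation and space-time grouping steps that make the result consistent across frames are not reproduced here.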
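For the FCN + CRF hand segmentation stage, the following minimal sketch shows only the CRF refinement step. It assumes the coarse softmax output of a (hypothetical) hand FCN is already available, and uses the pydensecrf package as one possible dense-CRF implementation; the kernel parameters are illustrative and not the thesis's settings.

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_hand_mask(image, probs, n_iters=5):
    """Refine a coarse FCN hand probability map with a dense CRF.

    image: HxWx3 uint8 RGB frame.
    probs: 2xHxW float32 softmax output of a hand FCN
           (channel 0 = background, channel 1 = hand).
    Returns an HxW binary hand mask.
    """
    h, w = image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, 2)
    # Unary term from the coarse FCN prediction.
    d.setUnaryEnergy(unary_from_softmax(probs))
    # Smoothness kernel: nearby pixels prefer the same label.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: nearby pixels with similar colour prefer the same label.
    d.addPairwiseBilateral(sxy=60, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = np.asarray(d.inference(n_iters)).reshape(2, h, w)
    return q.argmax(axis=0).astype(np.uint8)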
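For the optical flow based key frame synthesis, the sketch below illustrates the general idea under simplifying assumptions: OpenCV's dense Farnebäck flow stands in for whatever flow estimator the thesis uses, and pixels occluded by the hand in one key frame are simply fetched from the warped complementary frame. The names synthesize_keyframe, frame_a, frame_b, and hand_mask_a are hypothetical, and frames are assumed to be BGR images as loaded by OpenCV.

import cv2
import numpy as np

def synthesize_keyframe(frame_a, frame_b, hand_mask_a):
    """Fill hand-occluded pixels of key frame A using key frame B.

    frame_a, frame_b: HxWx3 uint8 frames from the same hand-swapping event.
    hand_mask_a: HxW uint8 mask, 1 where the hand occludes frame A.
    """
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense flow from A to B tells us where each pixel of A maps in B.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 25, 5, 7, 1.5, 0)
    h, w = gray_a.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)
    # Warp B into A's coordinate frame.
    warped_b = cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)
    # Keep A where it is visible, pull from warped B under the hand.
    mask3 = np.repeat(hand_mask_a[..., None], 3, axis=2).astype(bool)
    return np.where(mask3, warped_b, frame_a)

Note that flow estimated inside the occluded region is unreliable, since frame A shows the hand there; a practical system would propagate from surrounding pixels or from temporally adjacent frames, which this sketch does not attempt.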
Keywords/Search Tags:Segmentation, Video, Point cloud sequence, Stop motion, Deep learning