
Visual Tracking Under Motion Blur

Posted on: 2016-07-17
Degree: Master
Type: Thesis
Country: China
Candidate: Y B Liu
Full Text: PDF
GTID: 2308330476454985
Subject: Computer Science and Technology
Abstract/Summary:
Visual tracking is a significant computer vision task with applications in many domains, such as human-computer interaction, intelligent transportation systems, visual surveillance, and medical diagnosis. A typical visual object tracking system consists of four modules: object initialization, appearance modeling, motion estimation, and object localization. Among these, appearance modeling is the most important. In this thesis, we first use the Fisher vector to model the appearance of the target and propose a Fisher-vector-based tracking method. Second, we introduce structural local descriptors to model the target appearance and present a boosting-based visual tracking method. Finally, we study visual tracking under motion blur and develop a tracking model for blurred targets.

We propose to use a probabilistic function to model the statistical distribution of sparse codes, leading to a more general feature pooling framework for visual tracking. Direct matching between two distributions usually incurs a high computational cost; instead, we introduce the Fisher vector to derive a more compact and discriminative representation of the target's sparse codes, which possesses the merits of both generative and discriminative models. As one instantiation of the proposed method, we encode target patches by local coordinate coding, use a Gaussian mixture model (GMM) to compute Fisher vectors, and finally train semi-supervised linear-kernel classifiers for visual tracking. Excellent results on a public tracking benchmark demonstrate the validity of the proposed feature pooling approach.

We develop a robust boosting-based visual tracking algorithm using structural local sparse descriptors. The local descriptors of the object region are obtained by pooling the sparse codes of selected local patches, which retains spatial information to some extent. In general, feature pooling discards the spatial information of local patches and degrades greatly when the target undergoes long-term partial occlusion; the proposed local descriptors, used to model the appearance, overcome these issues. An AdaBoost classifier is then trained on the local descriptors to discriminate the target from the background. In addition, the algorithm assigns each candidate a weight, computed from the structural reconstruction error, to adjust the classification result. Comparisons with state-of-the-art trackers on a comprehensive benchmark show the effectiveness of the proposed method.

We present a tracking model for motion blur, which jointly estimates the blur kernel and the sparse coefficient matrix to handle motion blur during tracking. The estimated blur kernel adaptively reflects the current blurred state of the target, and the sparse coefficient matrix is used to discard poor candidates. The structural reconstruction errors between candidates and the blurred templates are then used to construct the likelihood model. Excellent results on several challenging image sequences demonstrate that the proposed tracker performs favorably against several state-of-the-art methods under motion blur.
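To make the Fisher-vector pooling step concrete, the sketch below is a minimal illustration rather than the thesis implementation: it fits a diagonal-covariance GMM to local patch descriptors and computes the standard mean and variance Fisher-vector gradients with power and L2 normalization. The descriptor dimension, component count, and random data are assumptions for illustration, and scikit-learn stands in for whatever toolchain the thesis actually uses.

```python
# Minimal Fisher-vector pooling sketch (illustrative, not the thesis code).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(patch_descriptors, n_components=8, seed=0):
    """Fit a diagonal-covariance GMM to local patch descriptors (T x D)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    gmm.fit(patch_descriptors)
    return gmm

def fisher_vector(gmm, X):
    """Encode descriptors X (T x D) as a 2*K*D Fisher vector."""
    T, _ = X.shape
    K = gmm.n_components
    gamma = gmm.predict_proba(X)                  # soft assignments, T x K
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    sigma = np.sqrt(var)                          # per-component std, K x D

    fv = []
    for k in range(K):
        diff = (X - mu[k]) / sigma[k]             # whitened residuals, T x D
        g = gamma[:, k:k + 1]                     # posteriors for component k, T x 1
        # Gradients w.r.t. the mean and (diagonal) std of component k
        d_mu = (g * diff).sum(axis=0) / (T * np.sqrt(w[k]))
        d_sig = (g * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w[k]))
        fv.extend([d_mu, d_sig])
    fv = np.concatenate(fv)

    # Power and L2 normalization, as is standard for Fisher vectors
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_patches = rng.normal(size=(500, 16))    # stand-in local descriptors
    gmm = fit_gmm(train_patches)
    candidate = rng.normal(size=(30, 16))         # one candidate's patch descriptors
    print(fisher_vector(gmm, candidate).shape)    # (2 * K * D,) = (256,)
```

In the tracking setting, each candidate region would be encoded this way into a fixed-length vector and then scored by the linear classifier described above.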
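The boosting-based contribution can be sketched in the same hedged spirit: sparse-code selected local patches, pool the codes block-wise so a coarse spatial layout survives, train AdaBoost on the pooled descriptors, and scale each candidate's score by its reconstruction error. The dictionary size, pooling grid, and exp(-error) weighting below are illustrative assumptions, not the thesis's exact formulation.

```python
# Hedged sketch of boosting with structural local sparse descriptors.
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.ensemble import AdaBoostClassifier

def structural_descriptor(patches, dictionary, n_blocks=4):
    """Sparse-code patches (P x D) and max-pool the codes within spatial blocks."""
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="lasso_lars", transform_alpha=0.1)
    codes = coder.transform(patches)                    # P x n_atoms
    blocks = np.array_split(codes, n_blocks, axis=0)    # patches kept in raster order
    pooled = np.concatenate([b.max(axis=0) for b in blocks])
    recon_err = np.linalg.norm(patches - codes @ dictionary)
    return pooled, recon_err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 36))                       # stand-in patch dictionary
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    # Pooled descriptors for positive (target) and negative (background) samples
    X, y = [], []
    for label in (1, 0):
        for _ in range(20):
            desc, _ = structural_descriptor(rng.normal(size=(16, 36)), D)
            X.append(desc)
            y.append(label)
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Candidate score = classifier confidence adjusted by reconstruction error
    desc, err = structural_descriptor(rng.normal(size=(16, 36)), D)
    print(clf.decision_function([desc])[0] * np.exp(-0.01 * err))
```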
Keywords/Search Tags: visual object tracking, appearance modeling, feature pooling framework, Fisher vector, sparse descriptors, motion blur, blur kernel