
Research On Visual Object Tracking With Adaptive Multiple DeepBoost Models

Posted on: 2017-12-22    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J Wang    Full Text: PDF
GTID: 1318330482494232    Subject: Control Science and Engineering
Abstract/Summary:
As a fundamental part of computer vision, visual object tracking (VOT) is a popular research direction in both academia and industry. Although significant progress has been achieved in recent years, tracking failures caused by complex and severe object appearance variations remain unsolved. These appearance variations mainly arise from situations such as illumination changes, deformations, and occlusions. In addition, the lack of an ability to recover from model drift and tracking failure further degrades tracking performance. Furthermore, the quality of moving shadow detection in the object discovery stage affects the tracker's initialization accuracy in video surveillance. Together, these problems prevent a tracker from achieving long-term, accurate, persistent tracking. In this dissertation, we focus on alleviating these problems.

We propose a novel discriminative and adaptive tracking-by-detection method based on online DeepBoost learning (DeepBoost-Tracker, DBT) to improve the ability to handle severe object appearance variations. The proposed algorithm adopts a flexible, capacity-conscious object appearance model that combines the strengths of both local and global visual representations. We first propose a joint local-global visual representation: by applying a sparse random projection to the weak classifier set, the main local and global spatial structure information of the target is flexibly embedded in a candidate classifier set whose members come from multiple complexity families. In addition, to avoid over-fitting, our tracker adopts an effective online DeepBoost learning method (ODB). The key capacity-conscious property of ODB dynamically tunes the complexities of the selected classifiers according to the online training set, which helps avoid over-fitting and yields a more adaptive and robust tracker. The proposed DeepBoost-Tracker encodes object spatial structure well and handles object appearance variations effectively. Experimental results demonstrate that our tracker outperforms traditional boosting-style trackers and achieves very competitive performance in comparisons with other state-of-the-art trackers.

We propose an ambiguity-regularized multi-period tracking framework to enhance the recovery ability of boosting-style trackers after tracking failures while preserving their strong adaptivity. To store object appearance changes during tracking, we build a tracker set consisting of the currently learned tracker and previous trackers learned from multiple frame periods. We incorporate an ambiguity regularization term into the loss function to select the tracker with higher likelihood and lower ambiguity, where the average geometric classification margin on the tracker-labeled samples measures the ambiguity of each candidate tracker. In addition, the online DeepBoost algorithm is employed to strengthen the adaptivity of the base trackers.
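For the first contribution, the capacity-conscious selection at the heart of online DeepBoost can be pictured with the much-simplified sketch below: each boosting round picks, from weak classifiers drawn from several complexity families, the one whose weighted error plus a complexity penalty is smallest. The function names, the linear penalty beta * c, and the prediction interface are illustrative assumptions rather than the dissertation's exact formulation (the actual DeepBoost objective is margin-based and also updates sample weights).

```python
import numpy as np

def deepboost_round(candidates, complexities, X, y, sample_w, beta=0.1):
    """One boosting round: pick the weak classifier whose weighted error
    plus a complexity penalty is smallest (capacity-conscious selection)."""
    best, best_obj = None, np.inf
    for h, c in zip(candidates, complexities):
        preds = np.array([h(x) for x in X])               # +1 / -1 predictions
        err = np.sum(sample_w * (preds != y)) / sample_w.sum()
        obj = err + beta * c                               # penalize complex families
        if obj < best_obj:
            best, best_obj = (h, c), obj
    return best
```

In the tracking setting, the complexity term is what lets the appearance model stay simple while the online training set is small and grow in capacity only when enough appearance variation has been observed.

For the second contribution, a minimal sketch of the tracker-selection step is given next, assuming a hypothetical Tracker interface that exposes a per-candidate confidence score; the inverse-average-margin ambiguity proxy and the trade-off weight lam are assumptions for exposition, not the exact regularized loss.

```python
import numpy as np

def select_tracker(trackers, candidates, lam=0.5):
    """Pick the tracker maximizing likelihood minus an ambiguity penalty."""
    best_tracker, best_obj = None, -np.inf
    for t in trackers:                                    # current + multi-period trackers
        scores = np.array([t.score(x) for x in candidates])
        likelihood = scores.max()                         # confidence on its best candidate
        avg_margin = np.abs(scores).mean()                # average geometric margin proxy
        ambiguity = 1.0 / (avg_margin + 1e-6)             # small margin -> high ambiguity
        obj = likelihood - lam * ambiguity                # ambiguity-regularized objective
        if obj > best_obj:
            best_tracker, best_obj = t, obj
    return best_tracker
```

The intuition is that a drifted tracker scores many candidates near its decision boundary, so its average margin shrinks, its ambiguity penalty grows, and an earlier-period tracker can take over; this is what gives the framework its recovery ability.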
Experiments show that the proposed tracker successfully recovers from tracking failures and handles object appearance variations, and it obtains excellent results in both overall and attribute-based comparisons with state-of-the-art trackers on a popular public test dataset.

To improve the accuracy of object discovery and tracking initialization in video surveillance, we propose an adaptive and accurate moving cast shadow detection method that employs online sub-scene shadow modeling and object inner-edge analysis. To describe shadow appearance more accurately, the proposed method builds adaptive online shadow models for sub-scenes with different irradiance and reflectance conditions. The models are learned by fitting Gaussian functions to the most significant peaks of accumulated histograms, which are computed from the Hue, Saturation and Intensity (HSI) differences of the moving objects between the background and foreground images. Additionally, object inner-edge analysis is adopted to reject camouflages, i.e., foreground regions that are highly similar to shadows. Finally, the main shadow regions are expanded, based on local color constancy, to recover misclassified shadow pixels. The proposed algorithm adaptively handles shadow appearance changes and camouflages without prior information about illumination or scene. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods.
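To make the sub-scene shadow modeling step concrete, the sketch below accumulates per-channel histograms of background-minus-foreground HSI differences over foreground pixels, fits a Gaussian around the dominant peak of each histogram, and labels a pixel as shadow when all three channel differences fall within the learned model. The function names, the window width around the peak, and the k-sigma test are assumptions for illustration, not the method's actual implementation.

```python
import numpy as np

def accumulate_hsi_histogram(hist, bg_hsi, fg_hsi, fg_mask, bin_edges):
    """Accumulate per-channel histograms of background-minus-foreground
    HSI differences over the detected moving-object pixels."""
    diff = bg_hsi[fg_mask].astype(float) - fg_hsi[fg_mask].astype(float)
    for c in range(3):                                    # H, S, I channels
        counts, _ = np.histogram(diff[:, c], bins=bin_edges)
        hist[c] += counts
    return hist

def fit_peak_gaussian(counts, bin_centers, half_width=5):
    """Fit a Gaussian (mean, std) to the most significant histogram peak,
    using the bins around that peak as weights."""
    peak = int(np.argmax(counts))
    lo, hi = max(0, peak - half_width), min(len(counts), peak + half_width + 1)
    w = counts[lo:hi].astype(float) + 1e-6
    x = bin_centers[lo:hi]
    mu = np.average(x, weights=w)
    sigma = np.sqrt(np.average((x - mu) ** 2, weights=w))
    return mu, sigma

def is_shadow_pixel(hsi_diff, channel_models, k=2.5):
    """Label a pixel as cast shadow when every channel difference lies
    within k standard deviations of its sub-scene shadow model."""
    return all(abs(d - mu) <= k * sigma
               for d, (mu, sigma) in zip(hsi_diff, channel_models))
```

Because each sub-scene keeps its own per-channel (mean, std) model, regions with different irradiance and reflectance (for example, sunlit pavement versus a shaded lawn) can learn different shadow statistics online.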
Keywords/Search Tags: Visual Object Tracking, Object Appearance Model, Joint Local-Global Visual Representation, Online DeepBoost Algorithm, Tracker's Recovery Mechanism, Multi-Period Model, Ambiguity Regularization, Moving Cast Shadow Detection