
Multi-source Data Fusion For Target Tracking Method

Posted on: 2015-07-22    Degree: Master    Type: Thesis
Country: China    Candidate: L X Xiong    Full Text: PDF
GTID: 2298330452953388    Subject: Computer technology

Abstract/Summary:
Target tracking is an important topic in computer vision research and has very broad applications. Traditional visual tracking methods suffer from problems such as occlusion, illumination variation, and complex backgrounds. These problems can be relieved by introducing RGB-D sequences, in which video and depth information are acquired simultaneously by a Kinect camera; however, the coverage of an RGB-D sequence is limited, and its data accuracy drops significantly when the distance between the target and the camera is large. For these reasons, more information must be fused to improve tracking performance. Android smartphones have recently become increasingly popular, and their inertial sensors can position a target over a wide range, which compensates for the limited coverage of RGB-D sequences and helps to resolve the problems of visual tracking. However, properly fusing multi-source data to achieve ideal tracking results is a great challenge. In this thesis, we propose a target tracking method that integrates multi-source sensing information, including video, RGB-D sequences, and inertial sensor data, to achieve continuous and stable target tracking. The specific research work is as follows:

1. We propose a positioning method that combines the inertial sensor with an RGB-D sequence. First, the accelerometer and gyroscope of the inertial sensor are used for inertial positioning. Then, in a local area, RGB-D data acquired by the Kinect camera are used for local target positioning, combining texture and depth cues within a particle filter tracking framework. Finally, the local positions are used to eliminate the cumulative error of the inertial positioning via a thin-plate spline (TPS) deformation model, making the inertial positioning continuous and stable (illustrative sketches follow the abstract).

2. We propose a tracking method that integrates video data with the inertial sensor. First, using the sparse representation of the observation region, the video data are processed by the compressive tracking algorithm to obtain a visual tracking result. Then, through a coordinate projection transformation, the inertial positioning result is mapped onto the video image plane, where the visual tracking position and the mapped position are fused by the proposed similarity measure to obtain a continuous and stable tracking result with tracking-error detection and correction (see the fusion sketch below).

To verify the effectiveness of our method, we designed an inertial sensor data collection system and a multi-source synchronous data collection system to collect and process the data required by the experiments. Experiments on real scenes show that the proposed method outperforms tracking methods that use only single-sensor data, and that it behaves robustly under target occlusion, illumination change, and interference from similar textures or complex backgrounds.
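The following is a minimal sketch of the first and last steps of contribution 1: planar dead reckoning from the smartphone's accelerometer and gyroscope, and a TPS warp that pulls the drifting inertial track onto sparse, accurate RGB-D positions. The function names, the 2D simplification, and the use of SciPy's thin-plate RBF interpolator as the TPS deformation model are our assumptions for illustration, not the thesis's actual implementation.

    import numpy as np
    from scipy.interpolate import RBFInterpolator  # thin-plate spline via RBF

    def dead_reckon(acc_body, yaw_rate, dt, p0=(0.0, 0.0)):
        """Naive planar dead reckoning: integrate the gyroscope yaw rate to
        get heading, rotate body-frame acceleration into the world frame, and
        integrate twice. Drift accumulates quickly, which is exactly what the
        TPS correction below is meant to remove."""
        p = np.asarray(p0, dtype=float)
        v = np.zeros(2)
        heading = 0.0
        track = [p.copy()]
        for a, w in zip(acc_body, yaw_rate):
            heading += w * dt
            c, s = np.cos(heading), np.sin(heading)
            a_world = np.array([c * a[0] - s * a[1],   # body -> world rotation
                                s * a[0] + c * a[1]])
            v += a_world * dt
            p = p + v * dt
            track.append(p.copy())
        return np.array(track)

    def tps_correct(track, inertial_anchors, rgbd_anchors):
        """Fit a TPS warp mapping the inertial positions observed at the
        anchor instants onto the accurate local RGB-D positions, then apply
        the warp to the whole track to cancel the accumulated drift."""
        warp = RBFInterpolator(np.asarray(inertial_anchors),
                               np.asarray(rgbd_anchors),
                               kernel='thin_plate_spline')
        return warp(track)

The anchor pairs would come from the instants when the target is inside the Kinect's coverage, where the RGB-D position is trusted and the inertial position is known.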
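For the local RGB-D positioning step of contribution 1, here is a sketch of one cycle of a bootstrap particle filter whose likelihood combines a texture cue and a depth cue. The intensity-histogram texture feature, the random-walk motion model, and all parameter values are illustrative assumptions; the abstract only states that texture and depth are combined in a particle filter framework.

    import numpy as np

    def intensity_hist(patch, bins=16):
        """Simple texture cue: a normalized intensity histogram of the patch."""
        h, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
        return h

    def pf_step(particles, gray, depth, ref_hist, ref_depth,
                motion_std=5.0, win=15, lam=0.5):
        """One predict-weight-resample cycle of a bootstrap particle filter.
        Each particle is a candidate (x, y) target position in the image."""
        n = len(particles)
        # Predict: random-walk motion model.
        particles = particles + np.random.normal(0.0, motion_std, particles.shape)
        # Weight: texture likelihood x depth likelihood.
        weights = np.empty(n)
        for i, (x, y) in enumerate(particles.astype(int)):
            x = int(np.clip(x, win, gray.shape[1] - win - 1))
            y = int(np.clip(y, win, gray.shape[0] - win - 1))
            patch = gray[y - win:y + win, x - win:x + win]
            d_tex = np.linalg.norm(intensity_hist(patch) - ref_hist)
            d_dep = abs(float(depth[y, x]) - ref_depth)
            weights[i] = np.exp(-d_tex / lam) * np.exp(-d_dep / lam)
        weights /= weights.sum()
        estimate = (weights[:, None] * particles).sum(axis=0)  # weighted mean
        # Resample (multinomial) to avoid weight degeneracy.
        idx = np.random.choice(n, n, p=weights)
        return particles[idx], estimate

The estimated image position, together with the depth value at that pixel, yields the accurate local position used as an anchor for the TPS correction above.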
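Finally, a sketch of contribution 2's fusion step: the inertial 3D position is mapped onto the image plane with a pinhole projection and blended with the compressive-tracking result. The Gaussian similarity, the blending rule, and the failure threshold are plausible stand-ins, since the abstract does not spell out the thesis's actual similarity measure.

    import numpy as np

    def project(p_world, K, R, t):
        """Pinhole projection: map a 3D world point to pixel coordinates
        using intrinsics K and extrinsics (R, t)."""
        p_cam = R @ np.asarray(p_world, dtype=float) + t
        uvw = K @ p_cam
        return uvw[:2] / uvw[2]

    def fuse(visual_pos, inertial_world_pos, K, R, t, sigma=20.0, thresh=0.1):
        """Blend the compressive-tracking position with the projected
        inertial position; if the two disagree strongly, declare a visual
        tracking failure and fall back to the inertial estimate."""
        visual_pos = np.asarray(visual_pos, dtype=float)
        mapped = project(inertial_world_pos, K, R, t)
        dist2 = np.sum((visual_pos - mapped) ** 2)
        sim = np.exp(-dist2 / (2.0 * sigma ** 2))  # Gaussian similarity in pixels
        if sim < thresh:           # tracking-error detection and correction
            return mapped
        return sim * visual_pos + (1.0 - sim) * mapped

When the two cues agree (high similarity), the fused result stays close to the visual tracker; as they diverge, weight shifts toward the drift-corrected inertial position, which is what gives the method its robustness to occlusion and similar-texture interference.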
Keywords/Search Tags: Target tracking, inertial sensor positioning, RGB-D sequence positioning, multi-source data fusion