
Calibration And Data Fusion Between 3D Laser Scanner And Monocular Vision

Posted on: 2010-06-27
Degree: Master
Type: Thesis
Country: China
Candidate: D Chen
Full Text: PDF
GTID: 2178360302960803
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Good scene-understanding capability remains a great challenge for autonomous mobile robots working in complex environments. With the rapid development of robotics and related fields, autonomous mobile robots are gradually moving from indoor to outdoor settings and from structured to unstructured environments. This trend places higher demands on new solutions and technological breakthroughs. To achieve more intelligent behavior, information sensing, mining, and fusion are the key components that determine how effectively sensor information can be used. This dissertation studies the calibration and information fusion between a 3D laser scanner and monocular vision.

The dissertation first discusses the limitations of a single sensor, which provides only limited information to a mobile robot, and surveys advanced multi-sensor mobile robot systems built on cutting-edge, practical technologies, showing that information fusion between different sensors is an effective way to capture the valuable information needed.

A hierarchical calibration method is then adopted to calibrate the monocular camera online. Existing 3D-laser and monocular-vision calibration methods are analyzed, and on that basis a new automatic method is proposed to overcome their shortcomings, such as errors caused by manual point selection, restricted distance configurations, and noise sensitivity. During calibration, laser corner points are extracted autonomously through detection and correction stages; an iterative optimization method is then used to estimate the calibration parameters. A three-stage error analysis is applied to guarantee the correctness of the results.

Finally, 3D-laser and monocular-vision information are fused in three different ways in large-scale outdoor environments.
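The calibration described above ultimately yields the extrinsic parameters (rotation and translation) relating the laser frame to the camera frame, which, together with the camera intrinsics, let a laser point be projected into the image. The abstract gives no equations, so the sketch below assumes the standard pinhole model; all variable names (`R`, `t`, `K`) are illustrative, not the dissertation's notation.

```python
def project_point(p_laser, R, t, K):
    """Project a 3D point from the laser frame into pixel coordinates.

    p_laser: (x, y, z) in the laser frame
    R, t:    assumed extrinsics: 3x3 rotation (nested lists) and translation
    K:       camera intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    # Transform into the camera frame: p_cam = R * p_laser + t
    p_cam = [sum(R[i][j] * p_laser[j] for j in range(3)) + t[i]
             for i in range(3)]
    if p_cam[2] <= 0:  # point lies behind the image plane
        return None
    # Pinhole projection with the intrinsic matrix
    u = K[0][0] * p_cam[0] / p_cam[2] + K[0][2]
    v = K[1][1] * p_cam[1] / p_cam[2] + K[1][2]
    return u, v
```

An iterative calibration of the kind the abstract mentions would minimize the reprojection error between such projected laser corner points and their detected image positions.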
The fusion results are a colored laser point cloud map, a depth map constructed by projecting laser data onto the image plane, and an image local descriptor augmented with depth information. This work lays a solid foundation for future research on enhanced scene modeling, deeper information fusion, and virtual reality in complex outdoor scenes for mobile robots.
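Of the three fusion products, the depth map is the most mechanical to sketch: each laser point is projected onto the image plane and its camera-frame depth is written into the corresponding pixel. The dissertation does not describe its exact procedure, so the following is a minimal, assumed implementation (pinhole model, nearest-point-wins when several points hit one pixel, zero meaning "no measurement"):

```python
def build_depth_map(points, R, t, K, width, height):
    """Build a sparse depth map by projecting laser points onto the image.

    points:        iterable of (x, y, z) in the laser frame
    R, t, K:       assumed extrinsics and pinhole intrinsics
    width, height: image size in pixels
    Returns a height x width grid of depths; 0.0 marks pixels with no hit.
    """
    depth = [[0.0] * width for _ in range(height)]
    for p in points:
        # Laser frame -> camera frame
        p_cam = [sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3)]
        z = p_cam[2]
        if z <= 0:  # behind the camera
            continue
        # Pinhole projection, rounded to the nearest pixel
        u = int(round(K[0][0] * p_cam[0] / z + K[0][2]))
        v = int(round(K[1][1] * p_cam[1] / z + K[1][2]))
        if 0 <= u < width and 0 <= v < height:
            # Keep the nearest depth when several points share a pixel
            if depth[v][u] == 0.0 or z < depth[v][u]:
                depth[v][u] = z
    return depth
```

The colored point cloud is the inverse mapping: each laser point is projected the same way, and the RGB value of the pixel it lands on is attached to the point.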
Keywords/Search Tags: multi-sensor information fusion, outdoor scene modeling, 3D laser, monocular vision, automatic calibration