
Calibration Of Omnidirectional Vision Sensors

Posted on: 2014-08-16    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y Lin    Full Text: PDF
GTID: 1268330425481377    Subject: Communication and Information System
Abstract/Summary:
The omnidirectional vision sensors discussed in this dissertation include a passive vision sensor, the omnidirectional camera, and an active vision sensor, the omnidirectional lidar. Thanks to their large field of view, such sensors are widely used in environment perception for autonomous ground platforms, and owing to their special geometric characteristics, their calibration remains a fundamental problem in computer vision. This dissertation studies the calibration of omnidirectional cameras and lidars, focusing on three aspects: calibration of omnidirectional cameras, self-calibration of omnidirectional cameras, and extrinsic calibration of a lidar-camera system. To achieve precise omnidirectional camera calibration, we propose a robust calibration method based on the viewing sphere, which improves the accuracy of the results. We exploit compressive-sensing-based low-rank texture recovery to achieve self-calibration of omnidirectional cameras and obtain reliable results. Geometric constraints and motion estimation are adopted to solve the joint calibration of an omnidirectional lidar and a camera.

The main contributions are outlined as follows:

1. To provide accurate correspondences between image and spatial information, we propose an omnidirectional camera calibration method based on the viewing sphere (a sketch of the single-viewpoint sphere projection follows this abstract). The geometric properties of two mutually orthogonal sets of parallel lines on the viewing sphere yield a closed-form solution for estimating the intrinsic and extrinsic parameters. Benefiting from this relatively precise estimate of the intrinsic and extrinsic parameters, the method further reduces the uncertainty of the calibration results compared with most state-of-the-art methods.

2. We propose an omnidirectional camera self-calibration method based on compressive sensing, with which the sensor can be calibrated quickly in a simple scene. The method calibrates the camera by recovering the low-rank texture in the image, and only a single image is required (the underlying recovery objective is sketched below). Furthermore, we define a projection function for spherical, large-field-of-view, low-rank textures to match the imaging characteristics of omnidirectional cameras. Unlike most self-calibration methods, this method does not rely on low-level features such as edges and corners and is only weakly affected by external factors such as lighting and shadow, so more reliable results can be obtained.

3. We put forward two methods for the extrinsic calibration of a lidar-camera system based on natural scenes. Compared with a stereo camera system, an omnidirectional lidar-camera system offers low computational complexity, high accuracy, and low sensitivity to the environment when reconstructing 3D scenes. To fuse lidar and camera data effectively, the extrinsic parameters of the lidar-camera system must be calibrated. By defining a reference world coordinate frame from a trihedron in the scene, we use geometric constraints or matched features of the trihedron to estimate the relative motions between the lidar or camera frames and the world frame. Once these relative motions are known, the extrinsic parameters between the lidar and the camera are easy to compute (see the composition sketch below). The method is flexible and requires no specific calibration object; moreover, it does not depend heavily on the input information, and two frames of data suffice to obtain reliable results.
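For orientation, single-viewpoint omnidirectional cameras are commonly described by the unified sphere model: a 3D point is first projected onto the unit viewing sphere and then re-projected to the image plane. Below is a minimal Python sketch of this standard projection, assuming a generic mirror parameter xi and a 3x3 intrinsic matrix K; these names are illustrative and not the dissertation's notation.

    import numpy as np

    def sphere_project(X, xi, K):
        # Step 1: project the 3D point X (camera frame) onto the unit
        # viewing sphere centered at the single effective viewpoint.
        Xs = X / np.linalg.norm(X)
        # Step 2: re-project from a center shifted by xi along the
        # optical axis onto the normalized image plane.
        m = np.array([Xs[0] / (Xs[2] + xi), Xs[1] / (Xs[2] + xi), 1.0])
        # Step 3: apply the intrinsic matrix K to obtain pixel coordinates.
        u = K @ m
        return u[:2]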
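The self-calibration of contribution 2 rests on low-rank texture recovery. As a hedged sketch of the standard objective in this line of work (the TILT-style formulation; in the dissertation the planar deformation would be replaced by the spherical large-field-of-view projection function it defines), an observed image patch D, warped by a transform tau parameterized by the camera parameters, is decomposed into a low-rank texture A and a sparse error E:

    \min_{A,\,E,\,\tau} \; \|A\|_{*} + \lambda \|E\|_{1}
    \quad \text{subject to} \quad D \circ \tau = A + E

Here \|A\|_{*} is the nuclear norm, which promotes low rank, and \|E\|_{1} absorbs sparse corruptions; optimizing over tau is what recovers the camera parameters from a single image.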
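The last step of contribution 3 is a composition of rigid motions: once the motions from the common world frame (defined by the trihedron) to the camera and to the lidar are estimated, the lidar-to-camera extrinsics follow directly. A minimal sketch with 4x4 homogeneous matrices; the function and argument names are illustrative, not from the dissertation.

    import numpy as np

    def lidar_to_camera_extrinsics(T_world_to_cam, T_world_to_lidar):
        # A point in lidar coordinates is first mapped back to the
        # world frame (inverse of the world-to-lidar motion) and then
        # into the camera frame, so the extrinsic transform is the
        # composition of the two estimated motions:
        return T_world_to_cam @ np.linalg.inv(T_world_to_lidar)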
Keywords/Search Tags: single viewpoint, omnidirectional cameras, lidar, viewing sphere, low-rank texture, sensor calibration