Multi-sensor Calibration And Its Applications In Structural Road Sensing

Posted on: 2022-09-04 | Degree: Doctor | Type: Dissertation | Country: China | Candidate: Y N Su | GTID: 1522307061973979 | Subject: Computer Science and Technology

Abstract:

The intelligent vehicle system is one of the research focuses in artificial intelligence and pattern recognition. Intelligent vehicles perceive their surroundings through different types of sensor data so as to make accurate assessments and decisions. Cameras, IMUs, and LiDAR are the most commonly used on-board sensors. A camera provides image data with a wealth of color and texture information. An IMU collects the real-time acceleration and rotational angular velocity of vehicle motion through its accelerometer and gyroscope, respectively. LiDAR provides accurate 3D point cloud data of the vehicle's surrounding environment. Building on monocular vision, this thesis introduces the IMU and LiDAR to combine acceleration and depth information with image data, and then proposes calibration methods for these sensors and their applications in structural road scenes. The main research results and innovations are as follows:

(1) An efficient camera self-calibration method based on an orthogonality hypothesis and homography constraints is proposed, which simultaneously estimates the focal length and principal point coordinates of the camera. First, the roll and pitch angles of the camera relative to the inertial coordinate system are derived from the IMU data. These angles are then used to align the y-axis of the current view with the gravity direction, reducing the rotation transformation to one degree of freedom. Next, based on the assumption that the ground plane is orthogonal to the gravity direction, the relationship between the Euclidean homography matrix and the transformation matrix is simplified to obtain the homography constraints. Finally, a two-point-five method is proposed to
estimate the focal length when the principal point of the camera is known, and a three-point-five method is proposed to simultaneously estimate the focal length and the principal point when the principal point is unknown. Experiments on simulated and real data show that the proposed method achieves higher efficiency and accuracy than existing comparable methods.

(2) A globally optimal relative pose estimation algorithm based on the essential matrix and semidefinite programming (SDP) relaxation is proposed. Although previous pose estimation algorithms have achieved promising results, they do not guarantee globally optimal solutions. After the camera intrinsics are obtained by the self-calibration method in (1), the essential matrix is simplified using the IMU data. The relative pose estimation problem is then converted into a quadratically constrained quadratic program (QCQP), which can be solved efficiently via SDP relaxation. Finally, a least-squares method is proposed for the degenerate case. Evaluations on simulated and real data show that the proposed algorithm effectively estimates the relative pose of vehicles and can complement existing methods to increase the reliability and speed of structure-from-motion (SfM) systems.

(3) A hybrid LiDAR-camera calibration approach based on motion trajectories and feature matching is proposed. It solves both the low accuracy of motion-trajectory-based methods and the sensitivity of feature-matching-based methods to the initial value. The approach consists of two stages: initial calibration and calibration optimization. In the initial calibration stage, a new closed-form solution based on classical hand-eye calibration theory is proposed: the rotation matrix is expressed in quaternion form, and the defined objective function is minimized by the Lagrange multiplier method to compute the initial transformation matrix, with the camera motions obtained by the pose
estimation method in (2) and the LiDAR motions obtained by the ICP algorithm. The calibration optimization stage uses the ICP algorithm to obtain accurate calibration results by registering two groups of point clouds extracted in the camera and LiDAR coordinate systems, respectively. The proposed method has been successfully tested on extrinsic calibration between a 64-line LiDAR and a camera, and between a 16-line LiDAR and a camera. Experimental results show that it outperforms the latest methods in both performance and accuracy.

(4) A vanishing-point detection approach based on an illumination-invariant model and a lane and traversable-region detection approach based on a graph-search model are proposed, addressing the problems that vanishing-point detection is sensitive to lighting conditions and that parameterized geometric models cannot accurately fit road boundaries. First, an illumination-invariant image representation is proposed for monocular vision to remove the adverse effects of shadows. The line segment detector (LSD) and soft voting are then used to detect the vanishing point. Under the vanishing-point constraint, a nonparametric model based on the Dijkstra algorithm and the fused data is proposed for lane and traversable-area detection. The algorithms have been tested on more than 4,000 frames of road data from several public datasets. Experimental results show that they work accurately and robustly in structural road scenes.

Keywords: camera self-calibration, LiDAR, joint calibration of multiple sensors, relative pose estimation, vanishing-point detection, lane detection, structural road detection
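The gravity-alignment step in contribution (1) can be sketched as a rotation taking the accelerometer-measured gravity direction onto the view's y-axis, after which only yaw remains free. The axis-angle (Rodrigues) construction and the axis conventions below are illustrative assumptions, not the thesis's exact derivation:

```python
import numpy as np

def rotation_aligning_gravity(g, target=np.array([0.0, 1.0, 0.0])):
    """Rotation R with R @ ghat == target, where ghat is the unit gravity
    direction from a (static) accelerometer reading."""
    ghat = g / np.linalg.norm(g)
    v = np.cross(ghat, target)          # rotation axis (unnormalized)
    c = float(np.dot(ghat, target))     # cosine of the rotation angle
    if np.isclose(c, 1.0):              # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):             # antiparallel: 180 deg about x
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # cross-product matrix [v]_x
    return np.eye(3) + K + K @ K * ((1.0 - c) / (v @ v))
```

After this alignment, the homography between ground-plane views depends only on the remaining yaw and translation, which is what makes the 2.5- and 3.5-point minimal solvers possible.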
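The initial hand-eye step in contribution (3) writes the rotations as quaternions and minimizes a quadratic objective over corresponding camera and LiDAR motions. A common closed-form variant of this idea (the thesis uses Lagrange multipliers; the equivalent smallest-singular-vector solution is shown here) can be sketched as:

```python
import numpy as np

def L(q):   # left Hamilton-product matrix: L(a) @ b == a * b
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def R(q):   # right Hamilton-product matrix: R(b) @ a == a * b
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def handeye_rotation(q_cam, q_lidar):
    """Unit quaternion q solving q_cam_i * q = q * q_lidar_i (the rotation
    part of AX = XB) in a least-squares sense: stack the linear constraints
    (L(q_cam_i) - R(q_lidar_i)) q = 0 and take the right singular vector
    of the smallest singular value."""
    M = np.vstack([L(qa) - R(qb) for qa, qb in zip(q_cam, q_lidar)])
    return np.linalg.svd(M)[2][-1]      # already unit norm
```

With at least two motions about distinct axes, the stacked system has a one-dimensional null space and the rotation is recovered up to quaternion sign.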
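For contribution (4), the vanishing point is the common intersection of the detected road-direction line segments. The thesis combines LSD segments with soft voting; a deliberately simplified least-squares intersection (no voting, no robustness) conveys the geometry:

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of 2D line segments.  Each segment is
    ((x1, y1), (x2, y2)) and contributes the line equation n . p = n . p1
    with unit normal n perpendicular to the segment direction."""
    A, b = [], []
    for p1, p2 in segments:
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        d = p2 - p1
        n = np.array([-d[1], d[0]])
        n /= np.linalg.norm(n)
        A.append(n)
        b.append(n @ p1)
    vp, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return vp
```

In practice a soft-voting scheme weights each segment's contribution (e.g. by length and orientation consistency) instead of trusting all segments equally, which is what makes the detection robust to clutter.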
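The nonparametric lane-boundary model in contribution (4) treats the boundary as a minimal-cost path found by graph search rather than a fitted curve. A minimal sketch in that spirit, with Dijkstra over a per-pixel cost map (the thesis's actual cost terms, fused LiDAR data, and vanishing-point constraint are omitted):

```python
import heapq
import numpy as np

def lane_boundary_path(cost):
    """Minimal-cost top-to-bottom path through a cost map, moving one row
    down and at most one column sideways per step.  Returns the path as
    (row, col) pairs plus its total cost."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev, pq = {}, []
    for c in range(W):                       # any top-row pixel may start the path
        dist[0, c] = float(cost[0, c])
        heapq.heappush(pq, (dist[0, c], 0, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or r == H - 1:     # stale entry, or bottom reached
            continue
        for nc in (c - 1, c, c + 1):         # expand into the next row
            if 0 <= nc < W and d + cost[r + 1, nc] < dist[r + 1, nc]:
                dist[r + 1, nc] = d + cost[r + 1, nc]
                prev[(r + 1, nc)] = (r, c)
                heapq.heappush(pq, (dist[r + 1, nc], r + 1, nc))
    end = (H - 1, int(np.argmin(dist[H - 1])))
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1], float(dist[end])
```

Because a path can bend arbitrarily (within the one-column-per-row smoothness limit), it can follow boundaries that a fixed parametric curve model cannot fit.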