
Research On Road-Scene Perception Technologies Based On Information Fusion In Structural Environments

Posted on: 2015-08-25
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Tan
Full Text: PDF
GTID: 1108330479979605
Subject: Control Science and Engineering
Abstract/Summary:
This dissertation studies road-scene perception technologies based on information fusion in structural environments, focusing on camera-Lidar calibration, camera-Lidar fusion, and several fusion-based perception technologies. This work is an essential component of the perception system of the Autonomous Land Vehicle (ALV) developed for "Research on key scientific problems of intelligent vehicles driving on highways" (90820302), a key project of the major research plan of the National Natural Science Foundation of China. The achievements and innovations are as follows:

1. A trilinear extrinsic calibration method for an onboard camera is proposed. Using the trilinear structure that is common in structural environments, the extrinsic calibration of the camera relative to the vehicle can be achieved. In real applications, this method is more convenient and more stable than traditional methods that rely on a calibration pattern.

2. An automatic extrinsic calibration method for an onboard camera is proposed. Based on three constraints formulated for onboard sensor calibration (the Relation Constraint, the Vanishing Point Constraint, and the Ground Plane Constraint) and a corresponding optimization method, the extrinsic parameters between the onboard camera and the vehicle can be calibrated automatically. For the first time, all extrinsic parameters of an onboard camera can be accurately calibrated even when the vehicle undergoes only planar motion.

3. An automatic method for extrinsic calibration between an onboard camera and a 3D Lidar is proposed. For the first time, the extrinsic parameters between these two sensors can be calibrated automatically, even without an initialization of the parameters and without overlap between the sensors' fields of view. Real applications demonstrate the high efficiency and high accuracy of this method.

4. A depth-image recovery method for dynamic environments is proposed. Given sparse Lidar points and the corresponding high-resolution visual images, the depth value of each pixel can be recovered. The method comprises two algorithms:
- The filter-based algorithm builds on traditional work, introducing motion information to improve accuracy and approximate sampling to improve efficiency, thereby achieving faster and better depth-image recovery in dynamic environments.
- The optimization-based algorithm introduces the optimization framework widely used for static scenes into the dynamic recovery problem, realizing the first optimization-based depth recovery in dynamic environments and further improving recovery accuracy.

5. A depth-field model based on information fusion is proposed and applied to curb detection. The proposed curb detection method is strongly robust to shadows and obstacle occlusion, and effectively extends the curb detection range: for curbs with a height variation of 10 centimeters, the detection range of this method reaches 8 meters laterally and 30 meters longitudinally.

The above achievements provide solid technical support for the upcoming long-distance autonomous driving experiment, as long as 2000 kilometers, with our university's Hong Qi-series autonomous driving system.
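The calibration and fusion results above all rest on one core operation: once the camera-Lidar extrinsic parameters (a rotation R and translation t) are known, Lidar points can be mapped into the camera frame and projected onto the image, which is what makes fusion tasks such as depth recovery possible. The following is a minimal illustrative sketch of that standard projection step (not the dissertation's own implementation); the function name, the pinhole model, and the example intrinsic matrix `K` are assumptions for demonstration.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D Lidar points into the camera image plane.

    points_lidar : (N, 3) array of points in the Lidar frame.
    R (3x3), t (3,) : extrinsic rotation and translation taking
                      Lidar coordinates into camera coordinates.
    K : (3, 3) camera intrinsic matrix (pinhole model).
    Returns (uv, in_front): pixel coordinates of the points with
    positive depth, and the boolean mask selecting those points.
    """
    # Rigid transform: Lidar frame -> camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera (positive depth).
    in_front = points_cam[:, 2] > 0
    points_cam = points_cam[in_front]
    # Pinhole projection: apply intrinsics, then divide by depth.
    uv_h = points_cam @ K.T
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    return uv, in_front

# Example with an identity extrinsic and a hypothetical intrinsic matrix:
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0]])          # one point 10 m ahead
uv, mask = project_lidar_to_image(pts, np.eye(3), np.zeros(3), K)
# A point on the optical axis lands at the principal point (320, 240).
```

Sparse-to-dense depth recovery then amounts to filling in the pixels between these projected samples, which is where the filter-based and optimization-based algorithms above come in.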
Keywords/Search Tags:Autonomous Land Vehicles, Structural Environments, 3D-Lidar, Monocular Camera, Information Fusion, Sensor Calibration, Depth Image Recovery, Curb Detection