
Multi-scale Modeling Of Underground Parking Lot For Accurate Localization Of Intelligent Vehicles

Posted on: 2021-04-18    Degree: Doctor    Type: Dissertation
Country: China    Candidate: G Huang    Full Text: PDF
GTID: 1482306497464694    Subject: Traffic and Transportation Engineering
Abstract/Summary:
High-precision positioning is one of the core problems in achieving autonomous driving for intelligent vehicles. At present, intelligent vehicles in outdoor scenarios rely on high-precision differential GPS (Global Positioning System) and high-cost INS (Inertial Navigation System) to achieve high-precision positioning. The underground parking lot, as a continuation of outdoor traffic, is the final link in intelligent vehicle positioning. Traditional positioning technologies suited to outdoor scenes cannot obtain stable results in indoor environments because the GPS signal is blocked. This research addresses the problem of intelligent vehicle positioning in underground parking lots. It adopts a strategy of multi-scale scene feature representation and multi-source positioning data fusion: based on a scene representation model, multi-view visual positioning results are fused by Kalman filtering to improve the accuracy and robustness of intelligent vehicle positioning in underground parking lots without GPS signals. The main research work of this paper is as follows:

First, a multi-scale feature extraction method following a "coarse-to-fine" principle is proposed. Wi-Fi fingerprint features based on weighted APs (Access Points) are used as coarse-scale features. Building on traditional SURF (Speeded Up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) feature extraction, an improved global feature is proposed as a medium-scale feature. Using deep learning, a new scene semantic computation method is proposed, together with a method for training scene semantic features based on a Siamese (twin) network; the trained semantic features are also used as medium-scale features. SURF and ORB local features are used to represent the interior points of the image at the fine scale. Compared with traditional single-feature visual extraction, the multi-scale method captures the scene's Wi-Fi information, local feature information, global gray-level information, and scene semantic information, which improves the ability to represent the scene.

Secondly, a scene representation model for intelligent vehicles is constructed. Based on multi-sensor data from a Wi-Fi receiver, multi-view monocular vision, and binocular vision, a "two-layer" scene representation model built on a "three-element" characterization is proposed. It consists of a dense node layer composed of a series of dense nodes and a sparse node layer composed of sparse nodes. Each node contains three elements: scene features, 3D data, and trajectory information. The scene features include Wi-Fi fingerprint features, global features, local features, and semantic features; the 3D data are obtained by binocular and monocular 3D reconstruction; and the trajectory information, which describes the relationship between nodes, is calculated by a planar odometer. Together, the three elements guarantee the uniqueness of each node. Based on this representation model, the intelligent vehicle positioning system built for this work is used as a data acquisition platform, and corresponding experiments on the proposed method are designed and carried out.
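To make the coarse-to-fine feature hierarchy concrete, the following Python sketch (using OpenCV and NumPy; an illustration, not the dissertation's code) extracts three of the described scales: a Wi-Fi RSSI fingerprint keyed by AP MAC address (coarse), a simple global gray-level descriptor standing in for the improved global feature (medium), and ORB local keypoints and descriptors (fine). SURF is only available in opencv-contrib, and the AP weighting scheme and the Siamese-network semantic feature are omitted, so those parts are simplifications or assumptions.

```python
# Hypothetical sketch of "coarse-to-fine" feature extraction (not the author's code).
import cv2
import numpy as np

def extract_local_features(gray_img, n_features=500):
    """Fine scale: ORB keypoints and binary descriptors (SURF would need opencv-contrib)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray_img, None)
    return keypoints, descriptors

def extract_global_feature(gray_img, size=(16, 16)):
    """Medium scale: a simple global gray-level descriptor (downsampled, L2-normalized).
    Stands in for the improved SURF/ORB-based global feature described in the abstract."""
    thumb = cv2.resize(gray_img, size, interpolation=cv2.INTER_AREA).astype(np.float32)
    vec = thumb.flatten()
    return vec / (np.linalg.norm(vec) + 1e-8)

def build_wifi_fingerprint(scans):
    """Coarse scale: average RSSI per access point, keyed by its MAC address.
    'scans' is a list of {mac: rssi_dBm} dictionaries from repeated Wi-Fi scans."""
    acc = {}
    for scan in scans:
        for mac, rssi in scan.items():
            acc.setdefault(mac, []).append(rssi)
    return {mac: float(np.mean(v)) for mac, v in acc.items()}
```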
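The "two-layer, three-element" model can be pictured as a graph of nodes, each bundling multi-scale scene features, reconstructed 3D data, and odometry-based trajectory links. The data-structure sketch below is one plausible reading of that description; the field names, types, and the sparse/dense flag are illustrative assumptions rather than the dissertation's actual schema.

```python
# Illustrative data structure for the two-layer scene representation model
# (field names are assumptions; only the "three elements" per node follow the abstract).
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class SceneNode:
    node_id: int
    is_sparse: bool                      # True: sparse node layer, False: dense node layer
    # Element 1: multi-scale scene features
    wifi_fingerprint: Dict[str, float]   # AP MAC -> mean RSSI (coarse scale)
    global_feature: np.ndarray           # global gray / semantic descriptor (medium scale)
    local_descriptors: np.ndarray        # ORB/SURF descriptors (fine scale)
    # Element 2: 3D data from binocular / monocular reconstruction
    points_3d: np.ndarray                # (N, 3) points aligned with the local descriptors
    # Element 3: trajectory information from the planar odometer
    pose: np.ndarray                     # (x, y, yaw) of the node in the map frame
    neighbours: List[int] = field(default_factory=list)  # ids of adjacent nodes

@dataclass
class SceneMap:
    dense_layer: Dict[int, SceneNode] = field(default_factory=dict)
    sparse_layer: Dict[int, SceneNode] = field(default_factory=dict)
```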
Third, based on the established scene representation model, a multi-scale localization method following a coarse-to-fine strategy is proposed. The method includes coarse localization based on Wi-Fi fingerprint matching, node-level localization based on multi-view image feature matching, and metric localization based on three-dimensional information. First, Wi-Fi localization is used to determine the coarse positioning range; within this range, multi-view image matching is used to determine the nearest node. Front-view and ceiling-view images are matched against the dense node layer images, and a Hybrid-KNN (K-Nearest Neighbor) method is proposed to fuse image feature matching results of different scales. A GW-KNN (Gaussian Weight-KNN) method is further proposed, which can fuse local and global feature matching results of different scales and dimensions. A scene classification and recognition algorithm based on a Siamese (twin) network is also proposed: the pre-trained deep network classifies the images collected by the vehicle to obtain the node closest to the current image. After the nearest node is obtained, the correspondence between the local features and the 3D data of the candidate nodes is used to solve the perspective-n-point problem, which finally yields the position coordinates of the intelligent vehicle in the scene.

Finally, a road-surface visual odometry method named HSP-VO (High-Speed Pavement Visual Odometry), based on a downward-facing high-speed camera, is proposed, and the positioning results of the scene representation model are fused with those of HSP-VO. HSP-VO uses a high-speed camera to obtain high-quality images of the road surface; a feature area selection method is proposed to shrink the feature point extraction region and thus speed up computation, and the planar characteristics of the road surface are exploited to establish a position estimation algorithm. A Kalman filter is used to fuse the positioning results of the scene representation model with those of the high-speed-camera road odometry. By exploiting the high frame rate of the road odometry, the degradation of scene-model-based positioning caused by the large distance between nodes is alleviated, and the accuracy and robustness of localization are further improved. The proposed method has been tested in two different scenarios. The experimental results show that, after fusing the HSP-VO and scene-representation-model positioning results, the positioning errors are all below 1 meter, with an average positioning error of 0.4 meters.
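The abstract does not give the GW-KNN weighting formula or the PnP setup, so the sketch below is only a plausible reading of the node-level and metric localization steps: candidate nodes returned by each feature channel are scored with a Gaussian weight on their (normalized) matching distance, the best-scoring node is selected, and the metric pose is then recovered from that node's 2D-3D correspondences with OpenCV's solvePnP. The Gaussian kernel, the channel layout, and the camera-matrix variable are assumptions.

```python
# Hypothetical GW-KNN node selection followed by PnP metric localization
# (the weighting scheme and variable names are assumptions, not the dissertation's formulas).
import cv2
import numpy as np

def gw_knn_vote(channel_results, k=5, sigma=1.0):
    """channel_results: one list of (node_id, distance) pairs per feature channel
    (e.g. local ORB matching, global descriptor matching), distances normalized
    per channel. Returns the node with the highest accumulated Gaussian weight."""
    scores = {}
    for results in channel_results:
        for node_id, dist in sorted(results, key=lambda r: r[1])[:k]:
            w = np.exp(-dist ** 2 / (2.0 * sigma ** 2))   # Gaussian weight on match distance
            scores[node_id] = scores.get(node_id, 0.0) + w
    return max(scores, key=scores.get)

def metric_pose_from_node(points_3d, points_2d, camera_matrix, dist_coeffs=None):
    """Solve the perspective-n-point problem between the query image's 2D keypoints
    and the matched node's 3D points to recover the camera pose in the map frame."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float32),
                                  points_2d.astype(np.float32),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return -R.T @ tvec        # camera position expressed in the node/map frame
```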
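As a rough illustration of the final fusion step, the minimal position-only Kalman filter below predicts with the high-rate displacement increments from a pavement visual odometer (the role HSP-VO plays here, treated as a control input) and corrects with the lower-rate absolute fixes from the scene representation model. The two-dimensional state, identity measurement model, and noise values are illustrative assumptions; the dissertation's actual filter design is not specified in the abstract.

```python
# Minimal 2D Kalman filter fusing HSP-VO odometry (prediction) with
# scene-model position fixes (correction); noise values are illustrative only.
import numpy as np

class FusionKF:
    def __init__(self, q_odom=0.01, r_fix=0.25):
        self.x = np.zeros(2)              # state: planar position (x, y)
        self.P = np.eye(2)                # state covariance
        self.Q = q_odom * np.eye(2)       # odometry (process) noise
        self.R = r_fix * np.eye(2)        # scene-model fix (measurement) noise

    def predict(self, delta_xy):
        """High-rate step: add the incremental displacement reported by the odometer."""
        self.x = self.x + np.asarray(delta_xy)
        self.P = self.P + self.Q

    def update(self, fix_xy):
        """Low-rate step: correct with the absolute position from the scene model."""
        z = np.asarray(fix_xy)
        S = self.P + self.R               # innovation covariance (H = I)
        K = self.P @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x
```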
Keywords/Search Tags: Intelligent vehicle localization, scene representation, sensor fusion, multi-scale matching