
The Research On Tightly-coupled Visual-inertial Robot Localization

Posted on: 2022-04-02    Degree: Master    Type: Thesis
Country: China    Candidate: L M Cheng    Full Text: PDF
GTID: 2518306557469854    Subject: Signal and Information Processing
Abstract/Summary:
With the rapid development of computer vision and robotics technology, localization based on visual and inertial sensors has become a key research topic in the field of simultaneous localization and mapping. The information collected by a visual sensor and an Inertial Measurement Unit (IMU) is highly complementary: the IMU can provide a rough pose estimate when the robot moves quickly or operates in challenging scenes such as weak texture and low light, while camera information helps correct the drift accumulated by the IMU. At present, point features are widely used in the visual front end, but they depend strongly on the scene; in challenging scenes such as weak texture and low light, they are often difficult to extract, fail to track, or cannot be extracted at all. Artificially designed and constructed scenes, however, contain a large number of line features, which can represent images together with point features and carry higher-level semantic information. In addition, RGB-D cameras provide both a depth image and a color image, which alleviates, to a certain extent, the monocular camera's inability to provide scale information and enables dense mapping. Based on these issues, this thesis proposes two visual-inertial fusion localization solutions. The specific research content of the thesis includes:

First, in harsh environments with weak texture and scarce features, the front end of a visual-inertial odometry system may fail to extract point features, which leads to large localization errors. To solve this problem, based on the theory of multi-view geometry in computer vision, multi-sensor fusion, and graph optimization, this thesis proposes a visual-inertial odometry using both point and line features. ORB feature points and FLD line features serve as landmark features in the visual front end. At the back end, a line-feature reprojection residual is constructed and combined with the point-feature reprojection residual and the IMU pre-integration residual to form a new objective function, which is then optimized with a factor-graph model. The performance of the proposal is tested and analyzed on the EuRoC dataset and compared with VINS-Mono, PL-VIO, and PL-VINS in terms of localization accuracy and computational efficiency. Experiments show that, compared with VINS-Mono, the addition of line features improves the localization accuracy of the system by about 13.34%, while the visual front end remains both real-time and robust in weak-texture scenes.

Second, in visual-inertial odometry the monocular camera cannot provide scale information, and the aid of the IMU only allows a rough scale estimate; inaccurate scale seriously degrades the accuracy of the system. To solve this problem, this thesis proposes a visual-inertial odometry based on the RGB-D camera model, multi-sensor fusion, and graph optimization theory. The system obtains pixel depth directly from the depth image to get a more precise scale. According to the pixel depth, feature points and feature lines are classified, landmark features are reconstructed by two methods, and the estimated robot poses and depth images are used to generate a dense map. The performance of the proposal is tested and analyzed on an RGB-D visual-inertial dataset and compared with VINS-Mono, PL-VINS, and VINS-RGBD in terms of localization accuracy. Experiments show that, compared with VINS-RGBD, the localization accuracy of the proposed algorithm is about 11.71% higher.
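Two of the back-end ingredients described above, the line-feature reprojection residual and the direct metric depth read from an RGB-D frame, can be sketched as follows. This is an illustrative sketch under a standard pinhole camera model, not the thesis implementation; the function names and the 1 mm depth units are assumptions.

```python
import numpy as np

def project(K, T_cw, p_w):
    """Pinhole projection of a 3-D world point to pixel coordinates."""
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]       # world frame -> camera frame
    uv = K @ (p_c / p_c[2])                      # normalize by depth, apply intrinsics
    return uv[:2]

def line_reproj_residual(K, T_cw, P_w, Q_w, uv_p, uv_q):
    """Line reprojection residual: signed pixel distances of the two observed
    segment endpoints to the 2-D line obtained by projecting the 3-D line."""
    p = np.append(project(K, T_cw, P_w), 1.0)    # projected endpoints (homogeneous)
    q = np.append(project(K, T_cw, Q_w), 1.0)
    l = np.cross(p, q)                           # 2-D line l^T x = 0 through p and q
    n = np.hypot(l[0], l[1])                     # normalizer -> point-to-line distance
    return np.array([l @ np.append(uv_p, 1.0),
                     l @ np.append(uv_q, 1.0)]) / n

def backproject(K, u, v, depth_raw, depth_scale=0.001):
    """Metric 3-D point in the camera frame from one depth-image pixel.
    depth_scale converts raw sensor units to meters (1 mm units assumed)."""
    z = depth_raw * depth_scale
    if z <= 0:                                   # missing/invalid depth: no scale here
        return None
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])
```

In a factor-graph back end, residuals of this shape would be whitened by their covariances and summed with the point reprojection and IMU pre-integration residuals into one objective; features for which `backproject` returns `None` (or an out-of-range depth) would fall back to triangulation, matching the two reconstruction paths described above.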
Keywords/Search Tags:visual inertial odometry, point and line feature, depth information, graph optimization