Simultaneous localization and mapping (SLAM) refers to a robot using information obtained from its sensors to estimate its pose in an unknown environment, determine its specific position, and build a map consistent with the surroundings. To adapt to the varied motion of mobile robots and to changing environments, SLAM technology is constantly developing. Visual SLAM uses a camera as its sensor, which is simple in structure and low in price; the input is the captured camera frames, and the camera collects a large amount of information over a large measuring range. Under fast motion and illumination changes, a visual-inertial fusion method is used to improve accuracy and enhance robustness. The main content of this paper is as follows.

First, most existing visual SLAM algorithms estimate the robot trajectory relying only on point features. However, it is difficult to find a sufficient number of reliable point features in an indoor environment with weak texture or insufficient light, which leads to unstable operation, poor robustness, and limited real-time performance. To extract more features, the system combines point features with line features: the Shi-Tomasi detector extracts point features, which are tracked by KLT optical flow, while the EDLines algorithm extracts line segments, which are matched with LBD descriptors. This method improves efficiency, reduces computation time, and keeps the features stable in different environments. To suit the different front-end and back-end modules, line features are represented in two ways: the front end uses Plücker coordinates to compute the line reprojection error, while the back end uses the orthonormal representation of spatial lines to construct a point-line fused optimization error model, which solves the instability problem in the optimization process.

Second, the process of joint visual-inertial initialization and back-end nonlinear optimization is studied. The raw data collected by the visual and inertial sensors are first processed separately in a loosely coupled manner to obtain the state of the system at the initial moment, which provides a good initial value for back-end optimization. A tightly coupled method is then adopted to fuse the visual and inertial information uniformly: the IMU pre-integration error and the point-line reprojection errors are combined into a cost function within a sliding-window optimization framework, and the state variables are optimized by minimizing this cost function, improving the accuracy of pose estimation.

Finally, front-end feature extraction experiments were carried out to test the robustness of line features and the benefit of adding them in sequences with illumination changes, fast motion, and low texture. The performance of different line segment extraction algorithms is compared on real scenes and selected sequences. Experiments on public datasets compare the proposed system with several mainstream visual-inertial systems and show that it improves positioning accuracy.
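
To make the point-feature half of the front end concrete, the following is a minimal sketch using OpenCV's Shi-Tomasi detector (goodFeaturesToTrack) and pyramidal KLT tracking (calcOpticalFlowPyrLK). The threshold values and the file names frame0.png/frame1.png are illustrative assumptions, not values from the paper; the EDLines/LBD line branch is left out here, since those components live in OpenCV's optional contrib modules (e.g. ximgproc and line_descriptor) and availability varies by build.

import cv2

# Illustrative inputs; the file names are placeholders, not from the paper.
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corner detection; the thresholds are typical, assumed values.
pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                               qualityLevel=0.01, minDistance=20)

# Pyramidal KLT optical flow: track the corners into the current frame.
pts1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None,
                                             winSize=(21, 21), maxLevel=3)

# Keep only the successfully tracked pairs as point correspondences.
good0 = pts0[status.ravel() == 1].reshape(-1, 2)
good1 = pts1[status.ravel() == 1].reshape(-1, 2)
print(f"tracked {len(good1)} of {len(pts0)} corners")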
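The two line parameterizations mentioned above can be related as follows. This is a minimal numpy sketch of the standard conversion between Plücker coordinates (n, v), where n is the normal of the plane through the line and the origin, v is the line direction, and n·v = 0, and the 4-DoF orthonormal representation (U in SO(3) plus an angle theta for W in SO(2)); it illustrates why the orthonormal form avoids the over-parameterization that makes optimizing raw Plücker coordinates unstable. The function names are mine, not the paper's.

import numpy as np

def plucker_to_orthonormal(n, v):
    """Plücker line (n, v) with n·v = 0  ->  orthonormal form (U, theta)."""
    Q, R = np.linalg.qr(np.column_stack([n, v]))   # R is diagonal since n ⟂ v
    U = np.column_stack([Q[:, 0], Q[:, 1], np.cross(Q[:, 0], Q[:, 1])])
    theta = np.arctan2(R[1, 1], R[0, 0])           # encodes the ratio ||v||:||n||
    return U, theta                                # 3 + 1 = 4 DoF, as for a 3D line

def orthonormal_to_plucker(U, theta):
    """Inverse map; recovers (n, v) up to overall scale (the same line)."""
    return np.cos(theta) * U[:, 0], np.sin(theta) * U[:, 1]

# Round trip on an arbitrary line: direction v, moment n = p × v for a point p.
v = np.array([1.0, 2.0, 0.5]); p = np.array([0.3, -1.0, 2.0]); n = np.cross(p, v)
U, theta = plucker_to_orthonormal(n, v)
n2, v2 = orthonormal_to_plucker(U, theta)
assert np.allclose(np.cross(n, n2), 0) and np.allclose(np.cross(v, v2), 0)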
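On the inertial side, the following numpy sketch shows the idea behind IMU pre-integration: integrating bias-corrected gyroscope and accelerometer samples between two keyframes into relative rotation, velocity, and position terms that enter the sliding-window cost as a single factor. It uses simple forward-Euler integration and omits the noise-covariance and bias-Jacobian propagation a full estimator needs; all names here are illustrative, not the paper's.

import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def so3_exp(w):
    """Rotation matrix for a rotation vector w (Rodrigues formula)."""
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3) + skew(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

def preintegrate(gyro, accel, dt, bg, ba):
    """Pre-integrated ΔR, Δv, Δp between frames i and j, in frame i's body
    coordinates. gyro/accel: (N, 3) raw samples; dt: sample period; bg/ba:
    current bias estimates. Gravity is handled in the estimator, not here;
    forward Euler only, so this is the measurement model, not a full factor."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_b = a - ba                       # bias-corrected specific force
        dp = dp + dv * dt + 0.5 * (dR @ a_b) * dt**2
        dv = dv + (dR @ a_b) * dt
        dR = dR @ so3_exp((w - bg) * dt)   # integrate rotation last (Euler)
    return dR, dv, dp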