
Research On Camera Localization Algorithm Based On Visual Inertial Fusion

Posted on: 2024-02-03
Degree: Master
Type: Thesis
Country: China
Candidate: J Z Che
Full Text: PDF
GTID: 2568307106967509
Subject: Computer technology
Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) refers to the process by which unmanned platforms, using onboard sensors such as cameras, precisely localize themselves and construct high-precision maps of unfamiliar, complex environments. It is now widely applied in UAVs, autonomous driving systems, and intelligent robotics. During camera localization, data from different sensors must be processed, and the camera pose is solved by a positioning algorithm to complete the localization.

When images obtained from an industrial camera are fused with data from an inertial sensor, e.g. an Inertial Measurement Unit (IMU), the asynchronous timestamps of the two sensors often introduce a temporal delay, which degrades camera localization accuracy. To address this problem, a visual-inertial odometry method based on online temporal delay estimation is proposed. Static initialization is used to complete IMU initialization quickly and determine the IMU's initial state; after the image data are acquired, the true image timestamp is estimated from the temporal delay. To establish an accurate data association between the left and right images, a round-trip matching method is used. The proposed method is tightly coupled: an optimization-based back end jointly optimizes the visual data (e.g. 3D points) and the IMU data. Based on the computed true image time, the temporal delay is added to the back-end sliding-window optimization as a variable to be optimized, and the derivative with respect to the temporal delay is derived in detail. IMU pre-integration is applied to the IMU measurements to avoid repeated integration during optimization, and the compensated image timestamps determine which IMU data are pre-integrated. Experiments demonstrate that the camera localization algorithm maintains good stability in the presence of temporal delay.

In addition, to improve the optimization accuracy of Bundle Adjustment (BA), a revised bundle adjustment algorithm based only on 3D points is proposed. The algorithm first improves the traditional BA algorithm: during optimization, the camera pose is expressed as a function of the 3D space points, so that the objective function depends only on the 3D points. Second, the revised BA algorithm can dynamically select which 3D space points participate in the optimization, thereby reducing the size of the optimization problem. Finally, once the optimal 3D space points are obtained, they are used to recover the optimal camera pose. The experimental results demonstrate that the revised bundle adjustment algorithm reduces the size of the optimization while maintaining camera localization accuracy.
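The static IMU initialization mentioned above can be sketched as follows. This is a minimal illustration only, assuming a stationary window of raw IMU samples; the function name `static_initialize` and the simple averaging scheme are this sketch's own assumptions, not the thesis's implementation:

```python
import numpy as np

def static_initialize(accel, gyro, gravity_mag=9.81):
    """Static IMU initialization: while the platform is stationary,
    the mean gyroscope reading estimates the gyro bias, and the mean
    accelerometer reading gives the gravity direction in the body
    frame, which fixes roll/pitch and the accelerometer bias."""
    gyro_bias = gyro.mean(axis=0)
    g_body = accel.mean(axis=0)                # reaction to gravity in body frame
    g_dir = g_body / np.linalg.norm(g_body)    # unit gravity direction
    accel_bias = g_body - g_dir * gravity_mag  # residual after removing gravity
    return gyro_bias, g_dir, accel_bias

# A stationary window: gravity along body z, small constant gyro bias
accel = np.tile([0.0, 0.0, 9.81], (100, 1))
gyro = np.tile([0.01, 0.0, 0.0], (100, 1))
gyro_bias, g_dir, accel_bias = static_initialize(accel, gyro)
```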
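The timestamp compensation and pre-integration windowing step can also be illustrated with a short sketch. Here `t_d` stands for the online-estimated temporal delay; the helper names and the integer-microsecond clock are assumptions of this example, not the thesis's code:

```python
import numpy as np

def compensate_timestamp(t_image, t_d):
    """Shift an image timestamp by the estimated temporal delay t_d
    so that visual and inertial measurements share one clock."""
    return t_image + t_d

def select_imu_span(imu_stamps, t_prev, t_curr, t_d):
    """Select the IMU samples that fall between two *compensated*
    image timestamps; these are the measurements to pre-integrate
    for the corresponding sliding-window factor."""
    lo = compensate_timestamp(t_prev, t_d)
    hi = compensate_timestamp(t_curr, t_d)
    mask = (imu_stamps >= lo) & (imu_stamps < hi)
    return np.flatnonzero(mask)

# IMU at 200 Hz, images at 10 Hz, a 5 ms delay; timestamps in microseconds
imu_stamps = np.arange(0, 1_000_000, 5_000)
idx = select_imu_span(imu_stamps, 100_000, 200_000, t_d=5_000)
```

Keeping the delay as an optimization variable means this window shifts whenever the estimate of `t_d` is updated, which is why the pre-integrated terms must be expressed so that repeated re-integration is avoided.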
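A minimal sketch of the points-only BA idea: the camera pose is recovered in closed form as a function of the 3D points (here via a plain DLT solve, a stand-in for the thesis's actual derivation, not its method), so the reprojection objective depends only on the points, and points can be selected dynamically to shrink the problem:

```python
import numpy as np

def pose_dlt(points_3d, points_2d):
    """Direct Linear Transform: recover the 3x4 projection matrix in
    closed form from the 3D points and their 2D observations, making
    the pose a function of the points alone."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # null vector of A, up to scale

def reprojection_error(P, points_3d, points_2d):
    """Per-point reprojection error under projection matrix P."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - points_2d, axis=1)

def select_points(points_3d, points_2d, budget):
    """Dynamically shrink the optimization: keep only the `budget`
    points with the largest current reprojection error."""
    P = pose_dlt(points_3d, points_2d)
    err = reprojection_error(P, points_3d, points_2d)
    return np.argsort(err)[-budget:]

# Synthetic check: project known points, then recover the pose from them
rng = np.random.default_rng(0)
pts3 = rng.uniform(-1, 1, (10, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [0.2], [0.3]])])
proj = (P_true @ np.hstack([pts3, np.ones((10, 1))]).T).T
pts2 = proj[:, :2] / proj[:, 2:3]
err = reprojection_error(pose_dlt(pts3, pts2), pts3, pts2)
```

Because the pose is recomputed from the points, the optimizer's state vector contains only 3D points; once the optimal points are found, one final closed-form solve yields the optimal pose.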
Keywords/Search Tags:sensor fusion, visual-inertial odometry, camera localization, bundle adjustment, size of optimization