
Research On Indoor Visual Inertial Odometry Of Educational Robot

Posted on: 2020-07-09    Degree: Master    Type: Thesis
Country: China    Candidate: T Li    Full Text: PDF
GTID: 2417330578974630    Subject: Education Technology
Abstract/Summary:
Simultaneous localization and mapping (SLAM) has been a hot topic in robotics and computer vision since its emergence. The monocular camera attracts the attention of SLAM researchers because of its low price and because the images it captures carry rich information about the surrounding scene. However, estimating the pose and recovering the scene structure from a monocular camera alone does not work well under occlusion, fast motion, sparsely textured regions, or locally repetitive texture, and monocular vision also suffers from scale ambiguity. The IMU is a proprioceptive sensor that measures the motion of the robot or carrier itself, but pose estimation from the IMU alone also cannot escape the effects of accumulated error. Moreover, because of sensor measurement noise, any purely inertial or purely visual solution that relies only on recursive integration inevitably accumulates error, and a low-precision IMU diverges within a very short time. Combining the two sensors exploits redundant measurements to suppress accumulated error: both modalities can be used to determine the relative pose, the complementary IMU and visual measurements make the IMU biases observable so that they can be estimated effectively in the optimization, and the inertial information resolves the absolute scale that monocular vision lacks.

In this paper, a visual-inertial odometry system is designed that fuses visual and inertial information, recovers the absolute scale of the trajectory, and achieves accurate estimation of the robot pose. We discuss and implement monocular vision for estimating the camera pose and recovering the depth of map points. The Kinect V2 camera is calibrated to show how the results change before and after calibration. The IMU pre-integration theory is derived in detail in order to fuse the visual and inertial information. The back-end nonlinear optimization method is discussed, and the visual-inertial fusion energy function is designed to achieve the fusion of visual and inertial information. Finally, we conduct experiments with our system and analyze the results. The analysis shows that our visual-inertial odometry achieves the expected goal: it can effectively recover the absolute scale of the scene and estimate the pose. The technology studied in this paper provides a basis for educational robots to perform other, higher-level applications in indoor scenes, such as automatic charging and navigation.
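The abstract mentions calibrating the Kinect V2 and estimating pose and depth from monocular images, but does not state the camera model used; a common sketch (the pinhole model with radial distortion, with parameter names chosen here for illustration, not quoted from the thesis) relates a 3D point (X, Y, Z) in the camera frame to its pixel observation (u, v):

x_n = X/Z, \quad y_n = Y/Z,
x_d = x_n (1 + k_1 r^2 + k_2 r^4), \quad y_d = y_n (1 + k_1 r^2 + k_2 r^4), \quad r^2 = x_n^2 + y_n^2,
u = f_x x_d + c_x, \quad v = f_y y_d + c_y,

where f_x, f_y, c_x, c_y are the intrinsics and k_1, k_2 the radial distortion coefficients recovered by calibration. With these parameters known, feature matches between frames can be converted into a relative pose (for example via the essential matrix) and map-point depths by triangulation, up to the unknown monocular scale.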
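The abstract names IMU pre-integration without giving formulas; as a minimal sketch (following the standard on-manifold pre-integration, with notation assumed rather than taken from the thesis), the inertial measurements between keyframes i and j are integrated in the body frame of keyframe i so that the resulting terms do not depend on the state at i:

\Delta \tilde{R}_{ij} = \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\omega}_k - b_g)\,\Delta t\big),
\Delta \tilde{v}_{ij} = \sum_{k=i}^{j-1} \Delta \tilde{R}_{ik}\,(\tilde{a}_k - b_a)\,\Delta t,
\Delta \tilde{p}_{ij} = \sum_{k=i}^{j-1} \Big[ \Delta \tilde{v}_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta \tilde{R}_{ik}\,(\tilde{a}_k - b_a)\,\Delta t^2 \Big],

where \tilde{\omega}_k and \tilde{a}_k are the raw gyroscope and accelerometer readings, b_g and b_a are the (assumed locally constant) biases, and \mathrm{Exp}(\cdot) is the SO(3) exponential map. Because these quantities depend only on the raw measurements and the biases, they can be computed once per keyframe pair and corrected with first-order bias Jacobians when the bias estimates change, instead of being re-integrated in every optimization iteration.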
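Likewise, the visual-inertial fusion energy function is not written out in the abstract; a typical form of such a tightly coupled cost (again with symbols chosen here for illustration) sums reprojection residuals and pre-integration residuals over the keyframes being optimized:

\min_{\mathcal{X}} \; \sum_{(l,j)} \rho\big( \| z_{l,j} - \pi(R_j, p_j, X_l) \|^2_{\Sigma_C} \big) \; + \; \sum_{i} \| r_{\mathcal{I}}(\Delta\tilde{R}_{i,i+1}, \Delta\tilde{v}_{i,i+1}, \Delta\tilde{p}_{i,i+1}, x_i, x_{i+1}) \|^2_{\Sigma_I},

where \mathcal{X} collects the keyframe states x_i = (R_i, p_i, v_i, b_g, b_a) and the landmarks X_l, \pi(\cdot) is the camera projection, z_{l,j} is the observation of landmark l in keyframe j, \rho is a robust kernel, and \Sigma_C, \Sigma_I are the visual and pre-integrated IMU covariances. Minimizing this energy with a nonlinear least-squares solver (e.g. Gauss-Newton or Levenberg-Marquardt) estimates the metric scale and the IMU biases jointly with the poses and map points, which is what the fusion described above relies on.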
Keywords/Search Tags:Sensor fusion, Visual inertial odometry, IMU pre-integration, Non-linear optimization