Three-dimensional reconstruction is a prerequisite for the autonomous operation of a robot in complex environments. However, a general-purpose 3D reconstruction method is still lacking, owing to the complexity of home environments, the unstructured nature of such scenes, and the limitations of the sensors themselves. To this end, this dissertation studies multi-sensor fusion methods and addresses the key technologies of 3D mapping for service robots in dynamic environments. The specific research contributions and innovations are as follows.

For high-precision 3D reconstruction in complex scenes, a 3D reconstruction method based on the left-multiplication perturbation model is proposed, in order to make full use of the sensors' depth point-cloud data and improve reconstruction efficiency in unstructured scenes. First, 3D curvature information is used to extract surface features and edge features from the environment. Then, the left-multiplication perturbation model is used to greatly reduce the computation caused by differentiating scalar trigonometric functions. Finally, a multi-threaded branch registration method is used to match historical frames quickly, improving the efficiency of the algorithm and achieving accurate and fast relocalization. Experimental results show that the SLAM framework and reconstruction method proposed in this dissertation significantly improve task success rate and localization accuracy compared with traditional SLAM methods.
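For illustration, the quantities involved in this first contribution can be written in the standard form used in the SLAM literature; the exact formulation in the dissertation may differ. A LOAM-style curvature (smoothness) score over a local neighbourhood $S$ of point $p_i$ separates edge points (large $c_i$) from surface points (small $c_i$):

\[
c_i = \frac{1}{|S|\,\lVert p_i\rVert}\,\Bigl\lVert \sum_{j\in S,\; j\neq i}\bigl(p_i - p_j\bigr)\Bigr\rVert .
\]

With the pose $T=(R,t)\in SE(3)$ and a left perturbation $\delta\xi=(\delta\rho^{\top},\delta\phi^{\top})^{\top}$, the Jacobian of a transformed point is

\[
\frac{\partial (Rp+t)}{\partial\,\delta\xi}
= \bigl[\; I_{3\times 3} \;\; -(Rp+t)^{\wedge} \;\bigr],
\]

which contains only the already-computed point $Rp+t$ and constants, so no derivatives of trigonometric functions appear, in contrast to an Euler-angle parameterisation.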
For 3D reconstruction under the interference of dynamic objects, in order to meet the requirements on a robot's environmental understanding across diverse scene structures and to improve its stability in dynamic scenes, this dissertation proposes a dynamic-object-elimination 3D reconstruction method based on joint constraints. First, the current environment is parsed by panoptic segmentation, separating dynamic objects from the background. Then, the key points and depth data belonging to each dynamic object are associated, and joint clusters conforming to physical constraints are established. Finally, the joint constraints are used to predict the motion of each object, and the quasi-static parts that satisfy the static requirement are retained to assist odometry estimation (a minimal sketch of this selection step is given at the end of this section). Experimental results show that the proposed dynamic-object removal method significantly improves the success rates of reconstruction and dynamic-object removal compared with ORB-SLAM2; the average translational relative pose error (RPEt, m/frame) of our method on the KITTI dataset is only 0.051, demonstrating stronger generalization and anti-interference ability.

For the long-term, all-day mapping problem, in order to improve system robustness under the full range of illumination conditions and to reduce the impact of the sensors' physical characteristics and calibration errors on long-term mapping, this dissertation proposes a multi-sensor fusion framework that builds on the results of the two preceding parts and additionally fuses IMU data. The laser and the IMU are tightly coupled through pre-integration, which raises the update frequency of the laser-inertial odometry and reduces its dependence on environmental features (the standard pre-integration terms are recalled at the end of this section). In addition, a visual-feature weighting method is used to integrate the visual and IMU measurements and reduce the reprojection error. Finally, the posterior distribution is fused with the IMU data to greatly reduce the accumulated error caused by sensor failure. In real-environment tests, the proposed method achieves good mapping results: compared with traditional mapping methods, it maintains satisfactory reconstruction accuracy even when a sensor fails and shows outstanding anti-interference ability.
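The quasi-static selection described in the second contribution can be illustrated with a minimal sketch (hypothetical function names and thresholds, not code from the dissertation): key points are grouped by panoptic instance, each cluster is checked against a rigid-body (joint) constraint, and only clusters whose residual motion is small enough are kept for odometry. The sketch assumes the previous-frame points have already been transformed into the current frame with an initial odometry guess, so any remaining displacement reflects object motion; clusters failing the test are treated as dynamic and excluded from feature matching.

import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid transform (Kabsch) mapping src to dst, both (N, 3).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # reflection-corrected rotation
    t = mu_d - R @ mu_s
    return R, t

def quasi_static_mask(pts_prev, pts_curr, instance_ids, motion_thresh=0.05):
    # pts_prev, pts_curr: (N, 3) matched 3-D key points from consecutive frames,
    # expressed in the same (ego-motion-compensated) coordinate frame.
    # instance_ids: (N,) panoptic instance label of each point.
    keep = np.zeros(len(pts_curr), dtype=bool)
    for inst in np.unique(instance_ids):
        m = instance_ids == inst
        if m.sum() < 4:                     # too few points for a rigid fit
            continue
        R, t = rigid_fit(pts_prev[m], pts_curr[m])
        moved = pts_prev[m] @ R.T + t       # cluster motion under the rigid (joint) constraint
        motion = np.linalg.norm(moved - pts_prev[m], axis=1).mean()
        if motion < motion_thresh:          # quasi-static cluster: usable for odometry
            keep[m] = True
    return keep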
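For the tightly coupled laser/visual-inertial part of the third contribution, the pre-integration terms referred to above are the standard ones from the literature (the dissertation's exact notation may differ). Between keyframes $i$ and $j$, with gyroscope and accelerometer measurements $\tilde{\omega}_k$, $\tilde{a}_k$ and biases $b_g$, $b_a$,

\[
\Delta R_{ij}=\prod_{k=i}^{j-1}\mathrm{Exp}\bigl((\tilde{\omega}_k-b_g)\,\Delta t\bigr),\qquad
\Delta v_{ij}=\sum_{k=i}^{j-1}\Delta R_{ik}\,(\tilde{a}_k-b_a)\,\Delta t,
\]
\[
\Delta p_{ij}=\sum_{k=i}^{j-1}\Bigl[\Delta v_{ik}\,\Delta t+\tfrac12\,\Delta R_{ik}\,(\tilde{a}_k-b_a)\,\Delta t^{2}\Bigr].
\]

Because these terms depend only on the IMU measurements and the bias estimates, they can be computed once per keyframe pair and reused whenever the poses are re-linearised, which is what allows the laser-inertial odometry to run at a higher update rate.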