In recent years, with continuous progress in robotics and the rapid development of artificial intelligence, unmanned aerial vehicles (UAVs) have gradually been applied to many areas of daily life and industrial production, and research on the related technologies has attracted wide attention in academia. A UAV is a special type of autonomous robot equipped with specific sensors, and estimating its motion state from the onboard sensors during flight is the technical foundation and core of the system. With the continuous development of simultaneous localization and mapping (SLAM), the theoretical framework for vision-based UAV positioning and mapping has become relatively stable, but many problems remain in practice. At the same time, advances in multi-sensor data fusion have made it possible for UAVs to build three-dimensional models of the environment from multi-source data. Research on three-dimensional building reconstruction based on UAV multi-sensor data fusion is therefore of great significance for the automation and intelligent development of UAVs.

Building on existing work, this paper proposes a three-dimensional building reconstruction method based on UAV multi-sensor fusion. Within a visual SLAM framework, the method fuses information from a monocular camera, an inertial measurement unit (IMU), and the Global Navigation Satellite System (GNSS), and combines feature extraction and matching, pose optimization, loop-closure detection, and point-cloud densification to construct dense three-dimensional models of buildings. The method is evaluated through comparative experiments and result analysis on a standard dataset and in field experiments.

First, based on the ORB-SLAM2 algorithm, monocular visual information and IMU measurements are fused into a visual-inertial odometry system through tightly coupled data fusion: ORB features are extracted from the images, keyframes are selected, and IMU pre-integration is used as the inter-frame constraint to complete initialization and local pose-graph construction. Then the global position information provided by GPS is combined with the visual-inertial odometry as constraints on the pose graph, and the global pose graph is jointly optimized, so that a more accurate UAV pose estimate is obtained by coupling multi-sensor data. At the same time, GPS information serves as a precondition for loop-closure detection: when a distance threshold is satisfied, a bag-of-words model is used for loop-closure detection and relocalization, which improves the accuracy and robustness of the system. Finally, starting from the sparse map obtained from pose estimation, a patch-based multi-view geometry method is used to construct a dense model of the building.

The EuRoC public dataset is used to evaluate the pose estimation accuracy and the effect of the optimization, and a UAV experimental platform is built to verify the three-dimensional building reconstruction method in real scenes. The experiments show that the proposed multi-sensor fusion method obtains more accurate camera pose estimates with global scale, uses the global sensor data as a prior for loop-closure detection, outputs more accurate dense point-cloud models, and enhances the robustness of the system.
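The abstract describes using IMU pre-integration between selected keyframes as the inter-frame constraint of the visual-inertial odometry. The following Python sketch illustrates that idea only in broad strokes: it accumulates bias-corrected gyroscope and accelerometer samples into relative rotation, velocity, and position increments between two keyframes. The function names, the simple Euler integration, and the omission of covariance propagation and bias Jacobians are assumptions made for illustration, not the thesis's implementation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (used in the SO(3) exponential)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, gyro_bias, accel_bias):
    """
    Accumulate IMU samples between two keyframes into a relative rotation dR,
    velocity dv and position dp (bias-corrected, expressed in the first
    keyframe's body frame).
    gyro, accel: (N, 3) angular rate [rad/s] and specific force [m/s^2].
    dt: sampling interval [s].
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        a_corr = a - accel_bias
        dp += dv * dt + 0.5 * (dR @ a_corr) * dt ** 2
        dv += (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - gyro_bias) * dt)
    return dR, dv, dp
```

The resulting (dR, dv, dp) triplet can then constrain the relative pose between consecutive keyframes during local optimization.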
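The global optimization step couples the relative motion estimated by visual-inertial odometry with absolute GPS positions as constraints on the pose graph. A minimal sketch of that coupling, reduced to camera positions only and assuming the GPS fixes have already been converted into the same local coordinate frame as the odometry, might look like the following; the residual weights and the use of scipy.optimize.least_squares are illustrative choices, not the method of the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def fuse_vio_and_gps(vio_positions, gps_positions, w_vio=1.0, w_gps=0.3):
    """
    Toy global optimization over camera positions only: preserve the relative
    displacements estimated by visual-inertial odometry while pulling the
    trajectory toward absolute GPS measurements. Weights are illustrative.
    """
    vio = np.asarray(vio_positions, dtype=float)   # (N, 3) odometry positions
    gps = np.asarray(gps_positions, dtype=float)   # (N, 3) GPS in the same frame
    n = len(vio)
    rel = np.diff(vio, axis=0)                     # VIO relative constraints

    def residuals(x):
        p = x.reshape(n, 3)
        r_rel = w_vio * (np.diff(p, axis=0) - rel)  # keep the odometry shape
        r_abs = w_gps * (p - gps)                   # anchor globally to GPS
        return np.concatenate([r_rel.ravel(), r_abs.ravel()])

    sol = least_squares(residuals, vio.ravel())
    return sol.x.reshape(n, 3)
```

In the full system the optimization is over 6-DoF poses and includes orientation constraints, but the trade-off between relative odometry factors and absolute GPS factors is the same.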
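Loop-closure detection is gated by GPS: the bag-of-words search is only run when the current GPS position lies within a distance threshold of an earlier keyframe. A hypothetical candidate-selection helper could look like this; the thresholds, data layout, and the downstream bag-of-words similarity check are placeholders for illustration.

```python
import numpy as np

def gps_gated_loop_candidates(current_gps, keyframes,
                              dist_thresh_m=10.0, min_frame_gap=50):
    """
    Select loop-closure candidates using GPS as a prior: only keyframes whose
    GPS position lies within dist_thresh_m of the current frame, and that are
    old enough to avoid trivial matches, are passed on to the bag-of-words
    similarity check. keyframes is a list of (frame_id, gps_position) pairs.
    """
    candidates = []
    cur = np.asarray(current_gps, dtype=float)
    latest_id = keyframes[-1][0] if keyframes else 0
    for frame_id, gps_pos in keyframes:
        if latest_id - frame_id < min_frame_gap:
            continue  # skip recent keyframes
        if np.linalg.norm(np.asarray(gps_pos, dtype=float) - cur) < dist_thresh_m:
            candidates.append(frame_id)
    return candidates
```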