With the progress of society and the rapid development of science and technology, autonomous mobile robots have been applied in many fields of social production and daily life, where they play an increasingly important role. Visual simultaneous localization and mapping (VSLAM) is a key technology for ensuring the autonomy and intelligence of autonomous mobile robots, and monocular SLAM in particular has attracted wide attention from researchers owing to its simple camera structure and low cost. At present, monocular SLAM can achieve high pose estimation accuracy; however, most of the maps it builds are sparse feature-point maps, or semi-dense maps computed from regions of significant image gradient, which can hardly meet the practical needs of local obstacle avoidance and navigation for autonomous mobile robots. In this paper, we use deep learning to obtain dense environmental depth information from monocular images, employ the classical ORB-SLAM (Oriented FAST and Rotated BRIEF) method to obtain robot pose information with real scale, and introduce the truncated signed distance function (TSDF) model to build a dense map. A dense mapping method based on deep learning is thus proposed, and its design and implementation on embedded platforms are studied. The main work of this paper includes the following parts.

Firstly, in order to obtain dense environmental depth information from a monocular image, a monocular depth estimation network based on an encoder-decoder structure is designed. The encoder extracts deep features of the image through a ResNet backbone and captures multi-scale context by applying atrous convolutions with different dilation rates, which are used to estimate dense image depth. A structural similarity term is added to the loss function; it better reflects the structural cues of objects in the image and yields a more accurate estimate of the
depth. Experiments show that the designed network estimates image depth well and provides accurate dense depth information for monocular SLAM dense mapping.

Secondly, in order to improve the global consistency and precision of the dense map, a monocular SLAM dense mapping method based on the TSDF depth fusion model is proposed. First, poses with real scale are obtained by monocular visual-inertial SLAM with tightly coupled IMU measurements, and the more accurate sparse feature points generated during pose estimation are used to calibrate the scale of the estimated depth, further improving its accuracy. Then the dense map is constructed by depth fusion based on the TSDF model, and a map optimization strategy based on a deformation graph is designed to improve the accuracy and global consistency of the map. The effectiveness of the proposed method is verified by experimental analysis.

Finally, to make the proposed monocular SLAM dense mapping method readily applicable to mobile robot platforms, we design and implement a monocular SLAM dense mapping system based on embedded platforms. Taking advantage of the collaborative computing of multiple embedded devices, and following a distributed design, the proposed method is modularized and deployed across multiple devices; inter-device communication and data sharing are realized over an Ethernet connection, so that monocular SLAM dense mapping is achieved on lightweight embedded platforms. Experiments verify the performance of the embedded monocular SLAM dense mapping system and show that it meets the application needs of small, lightweight autonomous mobile robot platforms.
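As a rough illustration of three numerical steps summarized above — the structural similarity loss term, scale calibration of the estimated depth from sparse SLAM feature points, and TSDF depth fusion — the following is a minimal NumPy sketch. All function names, constants (`c1`, `c2`, `truncation`, `max_weight`), and the global-SSIM and single-voxel simplifications are illustrative assumptions, not the implementation described in this work:

```python
import numpy as np

# --- Structural similarity (SSIM) loss term for depth training ---
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM between two images scaled to [0, 1]."""
    mu_x, mu_y = x.mean(), y.mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def ssim_loss(pred, target):
    """Loss term in [0, 1]; near 0 when the depth maps are structurally identical."""
    return np.clip((1.0 - ssim(pred, target)) / 2.0, 0.0, 1.0)

# --- Scale calibration of estimated depth from sparse SLAM points ---
def calibrate_scale(est_depth, sparse_depth):
    """Median ratio of sparse feature-point depths to network-estimated depths."""
    mask = sparse_depth > 0  # the sparse map has depth only at feature points
    return np.median(sparse_depth[mask] / est_depth[mask])

# --- One TSDF weighted-average fusion step for a single voxel ---
def fuse_tsdf(tsdf, weight, signed_dist, truncation=0.1, max_weight=64.0):
    d = np.clip(signed_dist / truncation, -1.0, 1.0)  # truncate and normalise
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)   # running weighted average
    return new_tsdf, min(weight + 1.0, max_weight)
```

In practice SSIM is computed over local windows rather than globally, and TSDF fusion runs over a full voxel grid along camera rays; the per-voxel running average above is the core update that makes repeated observations converge to a consistent surface.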