Simultaneous Localization and Mapping (SLAM) is one of the fundamental technologies underlying Augmented Reality (AR). SLAM lets a user's sensors perceive an unknown scene and build a map of it, so that virtual content can be overlaid on the real world. In recent years, advances in SLAM have made real-time, stable camera pose estimation possible in static scenes with distinctive features. However, AR applications often face large-scale scenes that contain dynamic objects and repetitive textures, and, because portability constrains their hardware, AR devices offer limited computing performance. Traditional SLAM methods therefore cannot be applied directly to AR scenarios.

Building on ORB-SLAM2, this thesis develops a large-scene mapping and localization system for a wearable portable platform. Given sufficient prior knowledge, the system provides real-time, high-precision camera poses and constructs a semantic map in large-scale dynamic scenes. To meet the virtual-real fusion requirements of AR applications, it uses the semantic map and the camera pose to perform occlusion discrimination and overlay display of virtual and real objects across a wide range of scenes.

First, an object-detection classifier based on a lightweight network is integrated into the ORB-SLAM2 framework on the portable device, and a point cloud segmentation method refines the contours of the semantic regions produced by the detector. The resulting semantic SLAM method extracts high-quality semantic information from the image to assist camera localization, eliminates the pose-estimation noise caused by moving objects, and builds a semantic map of the scene.

Second, for large scenes whose backgrounds contain repetitive textures, an artificial landmark library is designed and the landmarks are placed in the scene as prior knowledge. The algorithm uses these landmarks, whose three-dimensional coordinates are known, to quickly estimate the camera pose, and constrains the estimation error using epipolar geometry. Added on top of the designed semantic SLAM method, these prior landmarks remove the drift accumulated over long runs in large-scale scenes, assist tasks such as relocalization and loop-closure detection, and achieve robust tracking across a variety of scenarios.

Third, because rapid user motion in large-scale scenes easily causes visual SLAM tracking to fail, the camera pose estimated by the above algorithm is fused with inertial measurement unit (IMU) data through an extended Kalman filter.

For virtual-real occlusion, considering the scene scale and the limited computing performance of AR devices, a static mesh model of the scene is built offline from the semantic labels, while the mesh models of dynamic semantic objects are refreshed online within the currently observed field of view; the geometric model of the real scene is then mapped into virtual space for occlusion discrimination between virtual and real objects.

Finally, a tool set that runs stably on the wearable device is designed: a camera localization and semantic-map construction tool, and an AR virtual-real occlusion display tool. The structure, modules, and workflow of the tool set are described in detail, and the relevant tests are carried out. The results show that the system runs efficiently in real time in the target application scenes, that the accuracy of camera localization and semantic-map construction meets the localization requirements, and that the occlusion relationships between virtual and real objects are correct.
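The loosely coupled fusion of the visual pose with IMU data can be sketched with a small Kalman filter. The following is a minimal one-dimensional illustration (the linear special case of the EKF), not the thesis's actual formulation: the state layout (position and velocity), the noise covariances, and the IMU sample period are all assumptions chosen for readability. The camera pose from the SLAM front end enters as the measurement, and IMU acceleration drives the prediction step, so the estimate keeps updating even when visual tracking momentarily drops out.

```python
import numpy as np

# State x = [position, velocity] (1-D for brevity); all constants assumed.
dt = 0.01                                # IMU sample period
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])      # control input: IMU acceleration
H = np.array([[1.0, 0.0]])               # camera observes position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[1e-2]])                   # camera measurement noise

x = np.zeros((2, 1))                     # initial state estimate
P = np.eye(2)                            # initial covariance

def predict(x, P, accel):
    """Propagate the state with one IMU acceleration sample."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z_cam):
    """Correct with a camera-position measurement from the SLAM front end."""
    y = z_cam - H @ x                    # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate constant acceleration of 1 m/s^2; the camera reports the
# true position 0.5*t^2 at each step.
for k in range(100):
    t = (k + 1) * dt
    x, P = predict(x, P, accel=1.0)
    x, P = update(x, P, z_cam=np.array([[0.5 * t**2]]))
```

In a real system the update step would run at the (lower) camera frame rate while prediction runs at the IMU rate, and the state would hold a full 6-DoF pose; the structure of the predict/update cycle is the same.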