Under the wave of the metaverse, virtual reality technology has gradually penetrated various industries, and the demand for combining virtual reality technology with real scenes and interacting between them is becoming increasingly widespread and urgent. This places higher requirements on real-time and high-precision 3D map reconstruction of real scenes for virtual reality. At the same time, the poor virtual reality visualization of maps built from real scenes also urgently needs to be addressed. In view of this, this article explores 3D map reconstruction technology for real scenes in virtual reality, in order to meet the needs of indoor and outdoor 3D map reconstruction and of 3D map visualization in virtual reality, and to further expand the application of virtual reality technology to real scenes.

First, to meet the requirements of real-time reconstruction and visualization of outdoor real scenes, a real-time reconstruction and visualization algorithm for outdoor real scenes is proposed. A 3D laser radar collects scene information to estimate pose and reconstruct 3D maps. The 3D maps are effectively compressed through voxel downsampling, and the pose and point clouds are incrementally transmitted to the virtual reality platform for visualization based on the ROS communication mechanism; the visual information can further guide operators in controlling robot motion. Experimental validation was conducted in real outdoor scenes of squares, gardens, and corridors. The results show that the algorithm's map reconstruction frequency is 1 Hz and the map reconstruction error is 0.04 m.

Then, in response to the real-time reconstruction and visualization requirements of indoor real scenes, a real-time reconstruction and visualization algorithm for indoor real scenes is proposed. An RGB-D camera and an IMU are tightly coupled to estimate pose and reconstruct a 3D map. Registration constraints, odometry constraints, and loop-closure constraints are introduced to optimize the submaps and obtain a globally consistent 3D map. An improved Poisson reconstruction algorithm is proposed to convert 3D point cloud maps into 3D map models and maps, and model visualization is achieved in virtual reality devices. When the robot re-enters a scene, its relocalized pose is mapped into the model, further improving visualization. The algorithm was validated in indoor scenes, with a map reconstruction frequency of 40 Hz and a map reconstruction error of 0.036 m.

Finally, to meet the requirements of high-precision reconstruction and visualization of indoor and outdoor real scenes, a high-precision reconstruction and visualization algorithm for indoor and outdoor real scenes based on deep learning is proposed. The proposed adaptive aggregation module uses the adaptive receptive field of deformable convolution to aggregate multi-level features and enhance the network's feature learning ability; the proposed adaptive inter-view aggregation module adaptively weights feature volumes across views to construct cost volumes more reasonably. The overall error of the network on the DTU dataset is 0.345 mm, and ablation experiments demonstrate the effectiveness of the proposed modules. After a high-precision map is obtained, the point cloud is mapped into a virtual reality device through a script for visualization. The effectiveness of the algorithm was verified through indoor and outdoor real-scene experiments, with a map reconstruction error of 0.018 m.
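The voxel downsampling step used to compress the outdoor point cloud map before transmission can be illustrated with a minimal sketch. It assumes the Open3D library and a hypothetical input file name; the 0.04 m voxel size is an example value chosen to match the reported mapping accuracy, not a parameter taken from the thesis.

```python
import open3d as o3d

# Load one LiDAR scan (the file name is a placeholder for illustration).
pcd = o3d.io.read_point_cloud("scan.pcd")

# Voxel downsampling: all points falling into the same voxel are averaged,
# which bounds point density and compresses the map before it is streamed
# to the virtual reality platform.
voxel_size = 0.04  # metres; assumed example value
down = pcd.voxel_down_sample(voxel_size=voxel_size)

print(f"{len(pcd.points)} points -> {len(down.points)} points after downsampling")
```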
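The conversion of the indoor point cloud map into a mesh model can likewise be sketched. The snippet below uses Open3D's standard Poisson surface reconstruction rather than the improved variant proposed in the thesis, and the reconstruction depth, density-trimming threshold, and file names are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("indoor_map.pcd")  # placeholder file name

# Poisson reconstruction requires oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Standard Poisson surface reconstruction (the thesis proposes an improved variant).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim poorly supported surface regions using the per-vertex density estimate.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

# The resulting mesh can then be loaded into the virtual reality engine.
o3d.io.write_triangle_mesh("indoor_map_mesh.obj", mesh)
```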
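As a rough sketch of the adaptive aggregation idea, the module below predicts per-pixel sampling offsets with a plain convolution and feeds them to a deformable convolution, so that the effective receptive field adapts to the input features. It is written against torchvision's DeformConv2d and is an assumed single-scale simplification for illustration, not the multi-level module evaluated on the DTU dataset.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AdaptiveAggregation(nn.Module):
    """Single-scale sketch: sampling offsets are learned from the features
    themselves, so the receptive field adapts to local structure."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # A plain conv predicts 2 offsets (dx, dy) for every kernel tap.
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        offset = self.offset_conv(feat)
        return self.deform_conv(feat, offset)

# Example: aggregate a 32-channel feature map.
feat = torch.randn(1, 32, 64, 80)
out = AdaptiveAggregation(32)(feat)
print(out.shape)  # torch.Size([1, 32, 64, 80])
```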