With the development of intelligent logistics and the encouragement of the national "Made in China 2025" policy, China is paying increasing attention to the development and application of intelligent devices. Automated Guided Vehicles (AGVs) and Augmented Reality (AR) devices are increasingly used in logistics and production. Simultaneous Localization and Mapping (SLAM) is an important method for these devices to perceive their surroundings and perform visual localization without a prior map. However, visual SLAM localization involves large amounts of data and computation, demanding substantial memory and computing resources. The battery life, performance, and computing capability of typical logistics AGVs and AR devices are limited, and computation-intensive workloads such as SLAM can consume a large share of their total energy, shortening operating time, degrading performance, and ultimately hindering the devices' work tasks. Thanks to the rapid development of network communication, the end-cloud collaborative model is becoming increasingly attractive. By offloading part of the visual SLAM computation from the mobile device to a server, the localization burden on the device can be reduced while preserving positioning accuracy, saving the client's computing and network resources as much as possible and thus helping to solve the above problems. Based on this analysis, this thesis proposes an end-cloud collaborative visual SLAM method for logistics AGVs and AR devices, consisting of the following steps: (1) Based on ORB-SLAM3, a state-of-the-art modern SLAM system, an end-cloud partitioning scheme is defined: feature point extraction remains on the mobile device, while the remaining tracking, mapping, and loop-closing processes are offloaded to the server. (2) The feature points are organized and compressed to
reduce data redundancy, making them more suitable for network transmission. (3) The feature extraction step uses an adaptive strategy to adjust the required number of feature points, saving the client's computing and network resources as much as possible while maintaining positioning accuracy. The end-cloud collaborative visual SLAM system was successfully deployed on a common AGV device, the Hikvision Q7L-2000A, and an AR device, the Microsoft HoloLens 2, with a latency of only about 0.1 seconds per frame, fully meeting real-time positioning requirements. The method was also evaluated on the mainstream EuRoC public dataset; the results show that the average network transmission volume was only 5.95% of that required to transmit the original images, and the positioning accuracy differed from the original system by only a few millimeters. That is, while saving the client's computing and network resources, the method achieves the same positioning accuracy as the original system. In addition, the method has a degree of generality: it is not limited to specific algorithms or devices, nor to applications in the logistics field.
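The client-side pipeline described in steps (2) and (3) can be sketched as follows. This is a minimal illustrative example, not the thesis's actual implementation: the packet layout, the use of `zlib` as a stand-in for the compression step, and all thresholds in the adaptive controller (`lo`, `hi`, `step`, the feature-count bounds) are assumptions chosen for illustration. Each ORB feature is serialized as a small fixed-size record (position, angle, pyramid octave, 32-byte descriptor) so that only features, not raw images, cross the network.

```python
import struct
import zlib

DESC_BYTES = 32  # an ORB descriptor is 256 bits = 32 bytes


def pack_features(keypoints, descriptors):
    """Serialize keypoints and ORB descriptors into one compact payload.

    keypoints:   list of (x, y, angle, octave) tuples
    descriptors: list of 32-byte ORB descriptors (bytes objects)
    Returns a 4-byte count header followed by a zlib-compressed body.
    """
    assert len(keypoints) == len(descriptors)
    header = struct.pack("<I", len(keypoints))
    body = bytearray()
    for (x, y, angle, octave), desc in zip(keypoints, descriptors):
        # 13-byte keypoint record + 32-byte descriptor per feature
        body += struct.pack("<fffB", x, y, angle, octave)
        body += desc
    return header + zlib.compress(bytes(body))


def adapt_feature_target(current_target, inliers,
                         lo=60, hi=120, step=100,
                         min_n=300, max_n=1500):
    """Hypothetical adaptive feature-count strategy: request more
    features when tracking inliers run low (tracking near failure),
    fewer when tracking is comfortable, to save client computation
    and network bandwidth. All thresholds are illustrative."""
    if inliers < lo:
        return min(current_target + step, max_n)
    if inliers > hi:
        return max(current_target - step, min_n)
    return current_target
```

Even before compression, 500 features occupy about 45 bytes each (22.5 KB), already far below a raw 640x480 grayscale frame (300 KB), which is consistent in spirit with the reported 5.95% average transmission volume.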