With the development of intelligent manufacturing technology, mobile robots are becoming widely used across many industries. Bio-inspired SLAM (Simultaneous Localization and Mapping) has attracted growing attention from researchers because of its low requirements on sensor precision and computational cost. For SLAM systems deployed in domestic appliances, as well as in autonomous inspection robots in industrial facilities, the ability to handle failures is critical because these systems run lifelong tasks, so robustness to unpredictable disturbances is a key performance factor. In current studies, system failures are usually caused by perceptual aliasing. This article therefore investigates data association, map construction, and relocalization for bio-inspired SLAM in perceptually aliased scenarios.

First, when perceptual aliasing occurs, the same observation can have multiple interpretations, and the SLAM system must avoid failures during operation. Because interference in the environment may give the correct association a high cost, a multi-hypothesis data association algorithm is proposed that allows multiple local view cells to be activated simultaneously, providing multiple local hypotheses. For each hypothesis, a separate state estimate is maintained to represent the corresponding global assumption, and the connections between cells are updated according to the history of cell activation in order to screen the hypotheses. Simulation results show that the multi-hypothesis method reduces false-positive associations, increases association accuracy, enhances the ability to avoid fatal failures, and improves the robustness of the system.

Second, to address the problem that the lack of redundant features cannot provide sufficient information in perceptually aliased scenes, an experience map that fuses multi-modal information is proposed, based on the mechanism of episodic cognition in the brain. An obstacle cell model that is independent of robot orientation is constructed using head-direction modulation: lidar data are converted into obstacle cell firing rates, and multi-modal fusion is realized by establishing connections between the population activity pattern of the cells and the experience nodes. Graph relaxation is applied at loop closures to construct a consistent map. Simulation experiments show that the activity of the obstacle cell population effectively reflects the structure of the environment, supplements the visual information, and improves the functionality of the map. At the same time, loop-closure correction overcomes the drift error of the self-motion information and ensures the accuracy of the map.

Finally, considering unexpected events and other faults that cause the loss of pose information, the algorithm must recover its current state accurately and robustly in a perceptually aliased environment, that is, relocalize. Based on the recall mechanism of navigation cells in the brain, this paper proposes a relocalization algorithm that uses multi-modal information in a coarse-to-fine, multi-level matching strategy and uses the connections between cells to fuse coarse visual localization with fine localization from structural information. Pose cell activity is used to evaluate the stability and consistency of the historical localization results and the current prediction.
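To make the coarse-to-fine matching concrete, the following minimal sketch shows one way such a two-stage scheme could be organized; the data layout, similarity measures, and thresholds here are illustrative assumptions, not the implementation described in this work. Candidate experience nodes are first ranked by visual similarity, and the lidar-derived obstacle cell pattern then selects among the candidates:

    import numpy as np

    def relocalize(visual_feat, obstacle_rates, experiences,
                   coarse_k=5, fine_threshold=0.8):
        """Coarse-to-fine relocalization sketch (illustrative only).

        experiences: list of dicts with keys 'visual_feat',
        'obstacle_rates', and 'pose' (hypothetical storage layout).
        """
        visual_feat = np.asarray(visual_feat, dtype=float)
        obstacle_rates = np.asarray(obstacle_rates, dtype=float)

        # Coarse stage: rank experience nodes by visual template
        # similarity and keep the top-k candidates.
        vis_scores = [
            np.dot(visual_feat, e['visual_feat']) /
            (np.linalg.norm(visual_feat) * np.linalg.norm(e['visual_feat']) + 1e-9)
            for e in experiences
        ]
        candidates = np.argsort(vis_scores)[-coarse_k:]

        # Fine stage: compare the current obstacle cell firing pattern
        # against the pattern stored with each candidate node.
        best_idx, best_score = None, -1.0
        for i in candidates:
            stored = np.asarray(experiences[i]['obstacle_rates'], dtype=float)
            score = np.dot(obstacle_rates, stored) / (
                np.linalg.norm(obstacle_rates) * np.linalg.norm(stored) + 1e-9)
            if score > best_score:
                best_idx, best_score = i, score

        # Accept only if the structural match is strong enough;
        # otherwise report failure and keep integrating self-motion.
        if best_score >= fine_threshold:
            return experiences[best_idx]['pose'], best_score
        return None, best_score

In this sketch, the visual stage narrows the search cheaply, while the structural stage resolves visually aliased candidates, mirroring the fusion of coarse visual and fine structural localization described above.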
Simulation results show that the algorithm based on this multi-level matching strategy achieves a high success rate and accuracy even in environments with both visual and structural aliasing, improving the robustness and efficiency of relocalization.
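As an illustration of the obstacle cell encoding used throughout, the sketch below converts a lidar scan into an orientation-independent firing pattern by rotating the scan with the head-direction estimate; the grid sizes and normalisation are assumptions for illustration, not the model defined in this work:

    import numpy as np

    def obstacle_cell_rates(ranges, angles, head_direction,
                            n_bearing=36, n_dist=8, max_range=8.0):
        """Convert a lidar scan into obstacle cell firing rates (sketch).

        ranges/angles: lidar returns in the robot frame.
        head_direction: head-direction estimate (rad), used to rotate
        the scan into an allocentric frame so the resulting population
        pattern does not depend on robot orientation.
        """
        rates = np.zeros((n_bearing, n_dist))
        allocentric = (np.asarray(angles, dtype=float) + head_direction) % (2 * np.pi)
        for r, a in zip(ranges, allocentric):
            if r <= 0 or r >= max_range:
                continue
            b = int(a / (2 * np.pi) * n_bearing) % n_bearing
            d = int(r / max_range * n_dist)
            rates[b, d] += 1.0
        # Normalise so that the population activity pattern, rather than
        # the absolute number of returns, encodes the local structure.
        total = rates.sum()
        return (rates / total).ravel() if total > 0 else rates.ravel()

The resulting firing-rate vector is the kind of population activity pattern that can be linked to an experience node and later compared during relocalization.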