In intelligent robot systems for indoor scenes, localization and mapping based on monocular vision offers a clear cost advantage. However, traditional methods based on sparse image point features suffer degraded localization accuracy in complex indoor scenes. Fusing structural features can effectively improve localization and mapping performance, but existing methods lack spatial structure constraints and have poor real-time performance, which limits their application in robotic products. Inspired by the human brain's use of high-level visual information for spatial cognition and localization, this work studies methods for fusing and constraining spatial structure information in a visual localization and mapping system. The specific research contents are as follows:

(1) To address the lack of spatial orientation correlation among structured features, a classification method for structured features based on Manhattan principal direction extraction is proposed. First, the method tracks the gravity vector using the IMU, constrains the sampling range of vanishing points, and accelerates the hypothesis search and verification process. To resolve the randomness in vanishing point ordering, attitude observations are fused by a filtering method to track vanishing points across consecutive frames. Finally, according to the Manhattan principal directions, spatial structure prior information is assigned to the features, laying the foundation for the subsequent fusion and optimization. Experiments show that, compared with other state-of-the-art vanishing point detection algorithms, the method achieves better real-time performance, can effectively track the Manhattan principal directions in video sequences, and reaches a structural feature classification accuracy above 95.32%.

(2) To improve the accuracy and reliability of robot localization and mapping in complex indoor scenes, a structured feature fusion method based on Manhattan principal direction constraints is studied, which optimizes pose and feature parameters under spatial structure constraints. First, the structure prior is used to obtain an initial estimate of the line features, and an observation model of the features is then constructed from the reprojection error. At the same time, the distribution relationship between line features and the spatial structure is used to associate them with the Manhattan principal directions. The effectiveness of the proposed structured feature fusion method is verified by comparative experiments on the EuRoC and OpenLORIS-Scene public datasets.

(3) To meet the verification requirements for key technologies such as spatial structure information extraction, tracking, localization, and mapping in indoor environments, a robot system for real indoor scenes is designed. The hardware platform uses a two-wheel differential chassis and a RealSense D435i sensor, and the visual localization and mapping software is developed under the ROS architecture to realize autonomous localization in indoor scenes. On data sequences collected in corridors, halls, and offices, the robot's self-localization error stays within 0.38 m over a 120 m running distance, and the construction of the structured feature map is effectively improved. The experimental results show that the proposed method is practical and effective.
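The classification step in (1) can be illustrated with a minimal sketch: each line feature's direction is assigned to the Manhattan axis it most nearly aligns with, where the vertical axis is pinned by the IMU gravity vector and the two horizontal axes are assumed to come from vanishing point detection. All names, thresholds, and axis values below are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch: classify 3D line directions against Manhattan axes.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def classify_lines(line_dirs, manhattan_axes, max_angle_deg=5.0):
    """Assign each line direction to the index of the nearest Manhattan
    axis, or None if it is not within max_angle_deg of any axis."""
    cos_thresh = math.cos(math.radians(max_angle_deg))
    labels = []
    for d in line_dirs:
        d = normalize(d)
        best_axis, best_cos = None, cos_thresh
        for k, a in enumerate(manhattan_axes):
            # |cos angle| so that opposite directions map to the same axis.
            c = abs(sum(di * ai for di, ai in zip(d, a)))
            if c > best_cos:
                best_axis, best_cos = k, c
        labels.append(best_axis)
    return labels

# Gravity from the IMU pins the vertical axis; the other two are assumed
# orthogonal horizontal directions recovered from vanishing points.
axes = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(classify_lines([(0.01, 0.0, 1.0), (0.7, 0.7, 0.0)], axes))  # → [0, None]
```

The nearly vertical line snaps to the gravity-aligned axis, while the diagonal one, far from every axis, is left unclassified rather than forced into a wrong structural prior.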
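The reprojection-error observation model in (2) can be sketched as follows: a 3D line is projected into the image through a pinhole model, and the error is the distance from the detected segment's endpoints to the projected infinite line. The intrinsics and geometry here are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch of a line reprojection error for line-feature fusion.
import math

def project(K, p_cam):
    """Pinhole projection of a 3D point given in camera coordinates."""
    fx, fy, cx, cy = K
    return (fx * p_cam[0] / p_cam[2] + cx, fy * p_cam[1] / p_cam[2] + cy)

def line_reprojection_error(K, P1, P2, e1, e2):
    """Distance of detected endpoints e1, e2 to the line through the
    projections of the 3D points P1, P2."""
    p1, p2 = project(K, P1), project(K, P2)
    # Homogeneous image line through the two projected points (cross product).
    a = p1[1] - p2[1]
    b = p2[0] - p1[0]
    c = p1[0] * p2[1] - p2[0] * p1[1]
    n = math.hypot(a, b)
    d1 = abs(a * e1[0] + b * e1[1] + c) / n
    d2 = abs(a * e2[0] + b * e2[1] + c) / n
    return d1, d2

K = (500.0, 500.0, 320.0, 240.0)  # fx, fy, cx, cy (assumed intrinsics)
err = line_reprojection_error(K, (0.0, 0.0, 2.0), (1.0, 0.0, 2.0),
                              (320.0, 240.0), (570.0, 243.0))
print(err)  # → (0.0, 3.0)
```

In an optimization backend, these two distances would form the residual that is minimized over the pose and line parameters, with the Manhattan-direction prior constraining the line's orientation.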