
Research on Blind-guiding Robot Simultaneous Localization and Mapping Based on Heterogeneous Sensor Information Fusion

Posted on: 2019-05-05
Degree: Master
Type: Thesis
Country: China
Candidate: Q Li
Full Text: PDF
GTID: 2428330566974067
Subject: Mechanical engineering
Abstract/Summary:
The blind-guiding robot is a class of service robot. Compared with guide animals and simple electromechanical guiding aids, it offers a short training period, high intelligence, and better guidance assistance for the visually impaired. To help a visually impaired person walk safely, the robot must determine whether obstacles are present in the environment, where they are located, and how to avoid them. The key technology for solving these problems is SLAM (Simultaneous Localization And Mapping).

Current research on robot SLAM falls mainly into laser-sensor-based SLAM and vision-sensor-based SLAM. The experimental studies in this paper show that the environment map built by a laser sensor is accurate and can be updated in real time, but the sensor only detects obstacles in a single scan plane and therefore easily misses small obstacles in the environment. A vision sensor, by contrast, acquires rich environmental information and builds a relatively complete map, but its larger measurement error makes the map less accurate; moreover, the large amount of feature information inflates the computation and can sometimes cause the map to fracture. In short, a single sensor is limited in solving the robot's simultaneous localization and mapping problem.

To address these problems, this paper equips the blind-guiding robot with both a laser sensor and a Kinect vision sensor, combining the advantages of the two to obtain redundant environmental information. To avoid map fracture, a fusion algorithm for the laser sensor and the Kinect vision sensor is also proposed. First, the depth image acquired by the Kinect is converted into a three-dimensional point cloud through a coordinate transformation; the cloud is then limited in height by a pass-through filter in the vertical direction, and isolated ("independent") points are filtered out. The filtered 3D point cloud is projected onto the horizontal plane, and the boundary points are extracted and converted into fake laser scan data. Next, the fake laser data converted from the Kinect and the scan data of the laser sensor are fused by a weighted average whose weighting coefficients are adjusted dynamically (both steps are sketched in the code examples following this abstract). Finally, the fused data, together with odometry data, are used to realize mapping and autonomous navigation for the blind-guiding robot.

In an experimental scenario built for this study, mapping and autonomous navigation experiments were carried out with a single laser sensor, a single Kinect vision sensor, and the proposed fusion method. The results show that the environment map built by the fusion method is more accurate and complete than the maps built with either single sensor, and the resulting autonomous navigation is safer, better achieving the goal of helping visually impaired people walk safely. In addition, because the algorithm filters out isolated point cloud information, the amount of computation is reduced and the real-time requirement of blind-guiding robot SLAM is met. This work is therefore of practical significance for the application and promotion of blind-guiding robots in China.
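The depth-to-fake-laser conversion described above can be illustrated with a short sketch. This is a minimal NumPy-only version under stated assumptions: the Kinect intrinsics, the height limits, and the scan geometry below are illustrative values, not parameters from the thesis, and in a real system (the keywords point to the Robot Operating System) the input and output would be ROS depth-image and LaserScan messages rather than bare arrays.

```python
# Minimal sketch: Kinect depth image -> 3D point cloud -> height (pass-through)
# filter -> horizontal projection -> fake laser scan. All constants are
# illustrative assumptions, not values taken from the thesis.
import numpy as np

# Assumed Kinect-style intrinsics for a 640x480 depth image (hypothetical).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth image (in meters) into camera-frame 3D points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX            # right, in the camera frame
    y = (v - CY) * z / FY            # down, in the image convention
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop invalid (zero-depth) pixels

def height_filter(pts: np.ndarray, y_min=-0.5, y_max=0.3) -> np.ndarray:
    """Pass-through filter in the vertical direction (camera y axis)."""
    mask = (pts[:, 1] > y_min) & (pts[:, 1] < y_max)
    return pts[mask]

def to_fake_laser(pts: np.ndarray, n_beams=360, fov=np.deg2rad(57.0)) -> np.ndarray:
    """Project the cloud onto the horizontal plane and keep, per angular bin,
    the nearest range -- the boundary a planar laser scanner would see."""
    angles = np.arctan2(pts[:, 0], pts[:, 2])   # bearing in the x-z plane
    ranges = np.hypot(pts[:, 0], pts[:, 2])     # horizontal distance
    scan = np.full(n_beams, np.inf)
    bins = ((angles + fov / 2) / fov * n_beams).astype(int)
    ok = (bins >= 0) & (bins < n_beams)
    np.minimum.at(scan, bins[ok], ranges[ok])   # nearest return per beam
    return scan
```

A typical call chain would be `to_fake_laser(height_filter(depth_to_point_cloud(depth)))`. The abstract also removes isolated ("independent") points before projection, which a radius-outlier filter could do, for example; that step is omitted here for brevity.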
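The dynamically weighted fusion step might look like the following sketch. The abstract does not specify how the weighting coefficients are adjusted, so the rule below (weighting the laser more at long range, where Kinect depth error grows, and taking the nearer return when the two sensors disagree strongly) is an assumed stand-in, not the thesis's actual adjustment law.

```python
# Minimal sketch of fusing the real laser scan with the fake laser scan by a
# per-beam weighted average. The weight schedule and thresholds are assumptions.
import numpy as np

def fuse_scans(laser: np.ndarray, fake: np.ndarray,
               max_range=8.0, disagreement=0.5) -> np.ndarray:
    """Fuse two aligned scans (same beam count and angles) beam by beam."""
    fused = np.empty_like(laser)
    for i, (r_l, r_f) in enumerate(zip(laser, fake)):
        valid_l = np.isfinite(r_l) and r_l < max_range
        valid_f = np.isfinite(r_f) and r_f < max_range
        if valid_l and valid_f:
            if abs(r_l - r_f) > disagreement:
                # Large disagreement: trust the nearer return, since a small
                # obstacle missed by one sensor shows up as a shorter range
                # on the other.
                fused[i] = min(r_l, r_f)
            else:
                # Dynamic weighting (assumed model): trust the laser more as
                # range grows, because Kinect depth error increases with range.
                w_l = 0.5 + 0.5 * (r_f / max_range)
                fused[i] = w_l * r_l + (1.0 - w_l) * r_f
        elif valid_l:
            fused[i] = r_l                     # only the laser saw something
        elif valid_f:
            fused[i] = r_f                     # only the Kinect saw something
        else:
            fused[i] = np.inf                  # no return on this beam
    return fused
```

The fused array plays the role of a single laser scan, so it can feed the same mapping and navigation pipeline (together with odometry) that a laser-only system would use, which is what lets the fusion avoid the map fractures seen with the vision sensor alone.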
Keywords/Search Tags:Blind-guiding Robot, Robot Operating System, multi-sensor information fusion, building map, autonomous navigation