
Research On Unknown Environment Exploration And Perception Based On Hand Gesture Interaction For Mobile Robots

Posted on: 2018-09-29    Degree: Master    Type: Thesis
Country: China    Candidate: X Y Li    Full Text: PDF
GTID: 2348330533969080    Subject: Mechanical and electrical engineering
Abstract/Summary:
With the progress of robot technology, the demand for mobile service robots is growing rapidly, and environment-awareness technology for mobile robots has become a new research hotspot. In recent years, SLAM, the key technology of environmental awareness, has made great progress, and real-time localization and mapping capability has improved considerably. However, when a mobile robot explores an unknown environment for the first time, its exploration efficiency is low, the mapping is not robust, the accuracy of the map is limited, and the resulting map has limited uses. This paper aims to build a practical robot environment-awareness system that can explore an unknown environment, build a map, and recognize objects in the 3D map. To improve the efficiency of autonomous environment exploration, we developed a natural human-robot interaction (NHRI) system to guide the robot through the exploration. The NHRI system effectively reduces the learning cost for users, allowing them to operate the robot intuitively: it uses virtual reality technology for visual feedback and hand-gesture interaction as the control method to achieve natural interaction between human and robot. The robot must build a map while exploring the environment. To obtain more detailed environmental information, we drew on RGB-D SLAM, which can build a dense 3D map, and ORB-SLAM, which is more accurate and robust, and proposed a fusion of RGB-D SLAM and ORB-SLAM to build a dense three-dimensional point cloud map. To make the robot truly understand the environmental information, this paper proposes an object segmentation and feature extraction algorithm. The object segmentation algorithm uses a lattice analysis method to analyze the points in the 3D point cloud map, obtain the possible distribution of objects, and then segment the objects. Once segmentation is complete, the object location and feature descriptor are computed from the lattice information, and the object category is obtained with the help of a Bag-of-Words model. A semantic index is then constructed by binding the class information and the position of each object. Finally, this work constructed a framework system to verify the feasibility and ease of use of the natural human-robot interaction system. We used the fusion SLAM to construct a dense 3D point cloud map, and built the object training and test sets on this map with the help of the object segmentation algorithm. The training set was used to train the classifier, and the test set was used to verify the validity and accuracy of the object recognition algorithm.
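For illustration, the following is a minimal Python/NumPy sketch of the kind of lattice-based segmentation described above: the point cloud is quantized into a voxel grid, 26-connected occupied voxels are grouped by flood fill, and each resulting cluster yields an object location as its centroid. The function name segment_by_lattice, the voxel size, the connectivity choice, and the min_points noise threshold are illustrative assumptions; the thesis's actual lattice analysis, feature descriptors, and parameters are not specified in this abstract.

    # Minimal sketch of lattice (voxel-grid) based object segmentation.
    # Assumptions: the map is an N x 3 NumPy array of points, and objects
    # correspond to 26-connected clusters of occupied voxels.
    import numpy as np
    from collections import deque

    def segment_by_lattice(points, voxel_size=0.05, min_points=50):
        """Partition a point cloud into object clusters via a voxel lattice."""
        # Quantize every point to an integer voxel coordinate.
        voxels = np.floor(points / voxel_size).astype(np.int64)
        occupied = {}
        for idx, v in enumerate(map(tuple, voxels)):
            occupied.setdefault(v, []).append(idx)

        # Flood fill (BFS) over 26-connected occupied voxels to form clusters.
        offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
        visited, clusters = set(), []
        for seed in occupied:
            if seed in visited:
                continue
            queue, cluster_idx = deque([seed]), []
            visited.add(seed)
            while queue:
                v = queue.popleft()
                cluster_idx.extend(occupied[v])
                for off in offsets:
                    n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                    if n in occupied and n not in visited:
                        visited.add(n)
                        queue.append(n)
            if len(cluster_idx) >= min_points:  # drop sparse noise clusters
                pts = points[cluster_idx]
                clusters.append({"points": pts,
                                 "centroid": pts.mean(axis=0)})  # object location
        return clusters

In this sketch the cluster centroid stands in for the object position that would be bound to the Bag-of-Words class label in the semantic index; a real implementation would also compute a feature descriptor per cluster.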
Keywords/Search Tags: Object recognition, SLAM, VR, Gesture Recognition, 3D object segmentation, 3D point cloud feature extraction