
Research On Vision Based Unknown Object Recognition And Autonomous Robotic Grasping

Posted on: 2019-11-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z C Wang
GTID: 1368330590972878
Subject: Mechanical and electrical engineering
Abstract/Summary:
With the development of robotics and information technology, vision-based unknown object recognition and autonomous robotic grasping, as key capabilities of robots in human-robot collaboration tasks, have attracted increasing attention. These technologies can be applied in many scenarios, such as medical assistance, smart homes, and smart factories, which greatly expands the application scope of robots. However, compared with laboratory environments, these technologies are difficult to apply in real-world environments because of complex backgrounds, unknown object models, and differences between individual behaviors. Achieving autonomous robotic grasping of a target object based on human behavior understanding in real-world conditions therefore remains an open problem in robotics research.

Accurately obtaining unknown object information and understanding typical human behaviors are the core capabilities a robot needs for autonomous grasping tasks. The unknown object recognition part enables the robot to make grasp-level decisions, including discrimination of graspable objects, category recognition, and grasping area recognition of the graspable objects. The human pick-place behavior recognition part, which recognizes the typical behaviors of grasping, placing, and moving objects, enables the robot to make upper-level decisions. In this dissertation, a 3D camera serves as the robot's main sensor. The dissertation addresses the visual recognition problems in autonomous robotic grasping tasks, divided into three key issues: graspable object recognition in real-world environments, grasping area detection for unknown objects, and recognition of typical pick-place behaviors (grasping, placing, and moving objects). First, the human pick-place behavior recognition part identifies the target object for the autonomous grasping task. Then, the graspable object recognition part locates the graspable target object in the real-world environment. Finally, the grasping area detection part obtains the grasping area and pose of the target object and generates a grasping posture for the robot. This dissertation studies these three key issues.

To overcome the poor performance and weak generalization of traditional object recognition algorithms when training data are limited, a graspable object recognition algorithm based on hierarchical features and a multi-task learning mechanism is proposed. First, image feature descriptors, including shallow kernel features and self-learned features, are learned from the limited training data with a hierarchical feature learning method. Then, the graspable object recognition problem is divided into two phases: discrimination of graspable objects and category recognition. A coarse-to-fine multi-task learning mechanism is designed to optimize the two tasks, and a multi-task loss function is proposed to ensure that both tasks are optimized simultaneously during model training. The discrimination result is a prerequisite for the subsequent grasping area detection task, and the category recognition result is used to locate the target object in the image. Experimental results show that the proposed method achieves high computational efficiency and high accuracy, and performs well in real-world experiments.
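The abstract does not give the exact form of this multi-task loss. The following is a minimal PyTorch-style sketch of the general idea, assuming a shared backbone with a binary graspability head and a category head and a weighting factor lambda_cat; all layer sizes and names are illustrative assumptions, not the dissertation's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspableObjectNet(nn.Module):
    """Shared backbone with two heads: graspable/not-graspable and object
    category. Hypothetical sketch; sizes and names are assumptions."""
    def __init__(self, num_categories: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for the hierarchical features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.graspable_head = nn.Linear(32, 1)              # coarse task: graspable vs. not
        self.category_head = nn.Linear(32, num_categories)  # fine task: object category

    def forward(self, x):
        feat = self.backbone(x)
        return self.graspable_head(feat).squeeze(1), self.category_head(feat)

def multitask_loss(grasp_logit, cat_logits, grasp_label, cat_label, lambda_cat=1.0):
    """Joint loss so both tasks are optimized simultaneously
    (the weighting scheme is an assumption)."""
    loss_grasp = F.binary_cross_entropy_with_logits(grasp_logit, grasp_label.float())
    loss_cat = F.cross_entropy(cat_logits, cat_label)
    return loss_grasp + lambda_cat * loss_cat

# usage sketch on random data
model = GraspableObjectNet(num_categories=10)
images = torch.randn(4, 3, 64, 64)
grasp_labels = torch.randint(0, 2, (4,))
cat_labels = torch.randint(0, 10, (4,))
grasp_logit, cat_logits = model(images)
loss = multitask_loss(grasp_logit, cat_logits, grasp_labels, cat_labels)
loss.backward()

Summing the two cross-entropy terms and backpropagating once makes the shared backbone serve both tasks, which is the point of optimizing them simultaneously rather than training two separate models.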
To solve the grasp detection problem for unknown objects, a grasping area recognition method based on a deep convolutional neural network is proposed. The method takes a pair of RGB-D images as input and outputs the grasping area and the corresponding grasping pose. First, the grasp detection problem is formulated as a grasping area recognition problem, and a deep convolutional neural network model is used to solve it. Then, a multi-channel visual information fusion method is designed to strengthen the handling of multi-channel visual information, which notably reduces the model's overfitting risk. Finally, a feedback-based grasp candidate generation method is designed to search for the graspable area with the maximum output probability. The robot then estimates the six-degree-of-freedom pose of the grasping area and generates a grasping posture. Experimental results show that the proposed method performs well in grasp detection experiments on unknown objects, verifying its effectiveness and robustness.

Human behavior is highly uncertain sequential information because of the large differences between individuals. To understand the typical behaviors of grasping, placing, and moving objects, a modified recurrent convolutional neural network model is proposed for human pick-place behavior recognition. The model formulates the recognition problem as an end-to-end trainable encoding-decoding problem. By combining a convolutional neural network with a long short-term memory network, the model extracts spatio-temporal abstract features of pick-place behavior. To cope with the high noise and ambiguous information at the beginning of an image sequence, a new loss function is designed that enables the model to recognize behaviors from incomplete image sequences and forces it to output a result as early as possible. Experimental results show that the proposed method is robust to noise and achieves superior performance in recognizing grasping, placing, and moving behaviors.

An autonomous robotic grasping testbed is built for two experiments: robotic grasping of unknown objects and an autonomous grasping task with a human in the loop. The autonomous grasping strategy is designed based on the human pick-place behavior recognition result. The experiments show that the robot grasps unknown objects successfully and accomplishes the autonomous grasping task based on human pick-place behavior recognition.
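The abstract does not specify the exact early-recognition loss used for pick-place behavior recognition. The sketch below illustrates one plausible reading under stated assumptions: a CNN frame encoder feeding an LSTM, with a classification head and cross-entropy applied at every timestep so the model is trained to predict the behavior from incomplete sequence prefixes. The names (PickPlaceRecognizer, early_recognition_loss) and all sizes are hypothetical, not the dissertation's design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PickPlaceRecognizer(nn.Module):
    """CNN frame encoder + LSTM over frame features, with a behavior
    classifier at every timestep. Illustrative assumption only."""
    def __init__(self, num_behaviors: int = 3, hidden: int = 64):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_behaviors)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        hidden_seq, _ = self.lstm(feats)
        return self.classifier(hidden_seq)              # (batch, time, num_behaviors)

def early_recognition_loss(logits, label):
    """Cross-entropy at every timestep: the model is penalized on incomplete
    prefixes too, pushing correct predictions earlier in the sequence
    (an illustrative stand-in for the dissertation's loss)."""
    b, t, c = logits.shape
    labels = label.unsqueeze(1).expand(b, t)            # same label at every timestep
    return F.cross_entropy(logits.reshape(b * t, c), labels.reshape(b * t))

# usage sketch on random data
model = PickPlaceRecognizer(num_behaviors=3)
clip = torch.randn(2, 8, 3, 64, 64)                     # 2 clips, 8 frames each
labels = torch.tensor([0, 2])                           # e.g. grasp / place / move as class ids
loss = early_recognition_loss(model(clip), labels)
loss.backward()

Because the loss is accumulated over all prefixes rather than only the final frame, a model trained this way can emit a usable label before the sequence is complete, which matches the early-output behavior the abstract describes.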
Keywords/Search Tags: unknown object, human pick-place behavior, hierarchical feature, multi-task learning, incomplete image sequence, autonomous robotic grasping