
Research on Robotic Object Grasping Based on Visual-Tactile Fusion

Posted on: 2018-08-10    Degree: Master    Type: Thesis
Country: China    Candidate: D L Lu    Full Text: PDF
GTID: 2428330542976905    Subject: Computer technology
Abstract/Summary:
At present, robots are usually equipped with a variety of sensors to achieve fine manipulation. If the different types of sensors are applied independently, each in its own modality, for example, perceiving shape and color through visual sensors and roughness, hardness, and temperature through tactile sensors, the internal relationship between the multimodal information is cut off, which seriously reduces the intelligence of the perception. To provide accurate information on the state of the manipulator itself and on the position and properties of the manipulated object, it is necessary to study theories and methods of visual-tactile multimodal fusion that perceive the operated object from different aspects. Visual-tactile fusion perception has therefore attracted great attention in the field of robotics. By mode of operation, robots can be divided into autonomous robots and teleoperated robots, and this thesis studies robotic object grasping based on visual-tactile fusion from both of these aspects.

For autonomous robots, this thesis addresses two problems in vision-based robotic grasping: (1) the camera's field of view may be limited or occluded by the robot arm and other factors, so the position of the manipulator cannot be perceived in time; (2) during the grasp, the robot cannot determine whether the object has been lost. A tactile-assisted visual grasping strategy is proposed. First, the target position is determined from visual information through principal-direction estimation and grasp-point detection. Then, thresholds tuned over a large number of grasping experiments let the robot judge whether the arm has grasped the object and whether the object is falling, so that the object can be re-localized and re-grasped. This makes the robot's grasping process more efficient and more responsive in real time.

For teleoperated robots, this thesis studies visual-tactile fusion based on a tactile glove. A dataset containing visual and tactile data for fifteen objects is established. Covariance descriptors represent the image features, and a Dynamic Time Warping (DTW) model represents the tactile sequences; the visual features and the tactile-sequence features are then merged, and an Extreme Learning Machine (ELM) is used to build the visual-tactile fusion classification model.

Finally, this thesis also studies control-action recognition based on a data glove. In teleoperation, the meaning of a particular sequence of control actions can be predefined, so that a few simple actions allow the robot to perform a series of complex operations. Kernel ELM is applied to both static and dynamic gesture recognition, and the experimental results show that kernel ELM classifies better than plain ELM and the SVM (Support Vector Machine).
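The thesis does not spell out its grasp-success and object-drop thresholds, only that they are tuned from repeated grasping trials. As an illustration, here is a minimal sketch of the tactile-assisted grasp monitor described above, with a hypothetical tactile frame format and placeholder threshold values:

```python
import numpy as np

# Placeholder thresholds; the thesis tunes values like these from
# a large number of grasping experiments.
GRASP_MIN = 0.15   # minimum mean fingertip pressure indicating a held object
SLIP_DROP = 0.40   # relative pressure drop suggesting the object is falling

def grasp_state(tactile_now, tactile_prev):
    """Classify the grasp from two consecutive tactile pressure frames.

    tactile_now, tactile_prev: 1-D arrays of fingertip pressure readings.
    Returns 'no_contact', 'slipping', or 'holding'.
    """
    mean_now = float(np.mean(tactile_now))
    mean_prev = float(np.mean(tactile_prev))
    if mean_now < GRASP_MIN:
        return "no_contact"   # trigger visual re-localization and a re-grasp
    if mean_prev > 0 and (mean_prev - mean_now) / mean_prev > SLIP_DROP:
        return "slipping"     # object is being lost; tighten or re-grasp
    return "holding"
```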
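On the image side of the fusion, a covariance descriptor represents a region by the covariance matrix of per-pixel feature vectors. The sketch below uses one common choice of pixel features (position, intensity, and gradient magnitudes); the exact feature set used in the thesis is an assumption here:

```python
import numpy as np

def covariance_descriptor(gray):
    """Covariance descriptor of a grayscale image region.

    gray: 2-D float array. Each pixel contributes the feature vector
    [x, y, I, |dI/dx|, |dI/dy|]; the region is represented by the
    5x5 covariance matrix of these vectors.
    """
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(gray)
    feats = np.stack(
        [xs.ravel(), ys.ravel(), gray.ravel(),
         np.abs(dx.ravel()), np.abs(dy.ravel())], axis=0)
    return np.cov(feats)  # 5x5 symmetric positive semi-definite matrix

# Example: descriptor of a random 32x32 patch
patch = np.random.rand(32, 32)
C = covariance_descriptor(patch)
print(C.shape)  # (5, 5)
```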
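On the tactile side, DTW aligns two pressure sequences of different lengths and speeds before comparing them. Below is a textbook dynamic-programming implementation of the DTW distance between two 1-D sequences; the thesis's choice of local distance and sequence features is not specified, so absolute difference is assumed:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences (e.g. tactile pressure over time)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: a time-shifted copy should score far closer than unrelated noise
t = np.linspace(0, 2 * np.pi, 50)
print(dtw_distance(np.sin(t), np.sin(t + 0.3)))
print(dtw_distance(np.sin(t), np.random.rand(50)))
```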
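For the fused classifier, a kernel ELM computes its output weights in closed form as beta = (I/C + K)^{-1} T, where K is the kernel matrix over the training samples and T the one-hot targets. A compact sketch with an RBF kernel; the regularization constant C and kernel width gamma are assumed hyperparameters, not values from the thesis:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between row-vector sample sets X and Y."""
    sq = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

class KernelELM:
    """Kernel Extreme Learning Machine: beta = (I/C + K)^-1 T."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]        # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(
            np.eye(len(X)) / self.C + K, T)    # closed-form output weights
        return self

    def predict(self, Xnew):
        return np.argmax(rbf_kernel(Xnew, self.X, self.gamma) @ self.beta,
                         axis=1)

# Example on toy data: two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print((KernelELM().fit(X, y).predict(X) == y).mean())
```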
Keywords/Search Tags: vision-tactile fusion, autonomous operation, extreme learning machine (ELM), teleoperation, data glove