The adaptive grasping ability of a manipulator is the foundation of complex manipulation, and its ability to interact with and perceive the external environment plays an important role. Objects in unstructured environments differ markedly in material, shape, type, and surface characteristics, which poses a great challenge to a robot's perception ability, and visual perception alone is no longer sufficient for grasping tasks as complex as those handled by the human hand. To transfer the manipulation ability of the human hand to robots, how to comprehensively exploit the manipulator's tactile and visual perception information therefore remains a key difficulty in adaptive grasping research, and studying robot adaptive grasping based on visual and tactile perception is of great significance.

Based on deep learning models such as YOLOv5n, spatiotemporal convolutional neural networks, and multi-stream neural networks, this thesis studies tactile contact area detection, tactile slip detection, and the comprehensive use of visual-tactile information, thereby improving the manipulator's adaptive grasping of objects of different shapes, weights, and stiffness. The main work is as follows:

(1) Multi-scale tactile image acquisition by the manipulator. Traditional array-type tactile acquisition devices depend heavily on high-precision sensors and suffer from complex fabrication, low resolution, and high cost. A low-cost, simple, and efficient camera-based tactile image sequence acquisition device is therefore developed, which obtains tactile image information by optically observing a silicone gel sensing layer. On this basis, a multi-scale tactile image sequence feature extraction algorithm based on ORB feature matching is proposed, which enhances the discriminability of the collected data and improves the efficiency of subsequent model training.

(2) Tactile contact perception model based on YOLOv5n. Because tactile feature extraction with traditional image processing is easily affected by uneven illumination, a tactile contact perception model based on YOLOv5n is constructed. Trained with transfer learning, the model reaches 97% accuracy in grasped-area detection with good real-time performance.

(3) Tactile slip detection model based on separable convolution and an improved spatiotemporal convolution model. Because slip data during manipulator grasping are difficult to collect, a video data augmentation algorithm based on tactile contact area detection is proposed to build a tactile slip dataset. Because spatiotemporal neural networks are large and consume substantial computing resources, a lightweight spatiotemporal convolutional neural network based on separable convolution is proposed, and a lightweight slip detection model is built on it. Simulation results show that the slip detection model is reduced to 5% of its original size while slip detection accuracy reaches 97%.
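To illustrate why separable convolution shrinks a spatiotemporal model, the following is a minimal sketch, not the thesis implementation: a standard 3D convolution is replaced by a depthwise 3D convolution followed by a pointwise 1×1×1 convolution, and the parameter counts are compared. All layer sizes and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise 3D convolution followed by a pointwise 1x1x1 convolution."""
    def __init__(self, in_ch, out_ch, kernel=3, stride=1, padding=1):
        super().__init__()
        # Depthwise: one spatiotemporal filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, stride=stride,
                                   padding=padding, groups=in_ch, bias=False)
        # Pointwise: mixes channels with a cheap 1x1x1 convolution.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# Hypothetical layer sizes: compare against a standard 3D convolution of the same shape.
standard = nn.Conv3d(64, 128, kernel_size=3, padding=1, bias=False)
separable = SeparableConv3d(64, 128)
print(n_params(standard), n_params(separable))   # ~221k vs ~10k parameters
x = torch.randn(2, 64, 8, 32, 32)                # batch of 8-frame tactile clips
print(separable(x).shape)                        # torch.Size([2, 128, 8, 32, 32])
```

With these assumed sizes the separable block uses roughly 5% of the parameters of the standard 3D convolution, which is the kind of reduction that makes the lightweight slip detection model possible.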
(4) Visual-tactile slip perception model based on a two-stream neural network. To address the inefficient joint use of visual and tactile information during grasping, a two-stream neural network model is constructed to enhance the manipulator's ability to sense slip. To address the redundancy of visual-tactile information during grasping, a dynamic information-flow gating structure based on an attention mechanism is designed, which improves the computational efficiency of the model (a minimal sketch of such gated fusion is given at the end of this section).

(5) Adaptive grasping experiments with the manipulator. To verify the effectiveness of the proposed algorithms and models, an adaptive grasping experimental platform integrating vision and touch is built, and grasping experiments are carried out on objects of different types, shapes, sizes, weights, and stiffness, achieving an overall adaptive grasping success rate of 85%. The experimental results verify the feasibility, effectiveness, and advancement of the proposed vision- and touch-based adaptive grasping models and algorithms.

In summary, to address the limited adaptability of manipulators when grasping unknown objects in unstructured environments, this work investigates the manipulator's perception capabilities. It reduces the dependence on high-precision sensors in grasp perception, enhances adaptability across multiple tasks, and provides a new method and theoretical basis for robots to perform intelligent grasping tasks.
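As referenced in item (4), the following is a minimal sketch, with assumed encoders and feature dimensions rather than the thesis architecture, of a two-stream network that fuses visual and tactile features through an attention-based gate, so that redundant modality information can be down-weighted before slip classification.

```python
import torch
import torch.nn as nn

class GatedTwoStream(nn.Module):
    """Two-stream visual-tactile fusion with an attention-based modality gate."""
    def __init__(self, feat_dim=256, num_classes=2):
        super().__init__()
        # Placeholder per-modality encoders; real backbones (e.g. CNNs) would go here.
        self.visual_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.tactile_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        # Attention gate: scores the two modalities from their concatenated features.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(feat_dim, num_classes)  # e.g. slip / no slip

    def forward(self, visual, tactile):
        v = self.visual_enc(visual)                 # (batch, feat_dim)
        t = self.tactile_enc(tactile)               # (batch, feat_dim)
        w = self.gate(torch.cat([v, t], dim=-1))    # (batch, 2) modality weights
        fused = w[:, 0:1] * v + w[:, 1:2] * t       # attention-weighted fusion
        return self.classifier(fused)

model = GatedTwoStream()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

The gate plays the role of the dynamic information-flow control described in item (4): when one modality carries little additional information for the current sample, its weight shrinks and the fused feature is dominated by the other stream.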