
RGB-D Sensor Based Robot Grasping In A Simulation Environment

Posted on: 2022-05-02
Degree: Master
Type: Thesis
Country: China
Candidate: KALYAN SINGH KARKI
Full Text: PDF
GTID: 2518306572965659
Subject: Biomedical engineering

Abstract/Summary:
Robots are being deployed in ever more sectors, and grasping is a basic skill a robot needs in order to operate in its workspace. Vision plays a vital role in acquiring the necessary information about a robot's surroundings. Recent progress in deep learning for computer vision has drawn the attention of robotics researchers, who now apply these methods to vision-based grasp detection, and RGB-D sensors have added a new dimension to such tasks. Robots that can handle a variety of tasks without being manually reprogrammed each time save considerable effort, and using a vision sensor to acquire data for grasp prediction is attracting attention because it helps generalize across tasks. Convolutional neural networks (CNNs) have achieved state-of-the-art results in computer vision, so robotics researchers are adopting them for robotic applications as well. Predicting grasps end to end from visual information is fast and supports real-time operation.

In this work we survey deep learning-based grasping methods and propose a two-stage network for object detection and grasp prediction, built on state-of-the-art CNNs. Our object detection model was trained entirely on synthetic images obtained from the CoppeliaSim simulator, and the study examines the benefit of training neural networks on RGB-D images. To evaluate the proposed model on the grasping task, we ran experiments with a Jaco arm grasping six different objects in the CoppeliaSim simulation environment.

The experimental results show up to a 12% improvement in grasp detection when the CNN is trained on RGB-D images rather than on depth images alone. With the proposed two-stage network we achieved an 84.45% success rate in object detection and improved the single-object grasping capability of an existing network by 13.33%. Grasping in clutter also improved: since our model is more accurate than the baseline, it required about 1.72 attempts per object before the object was removed from the workspace, versus 2.11 attempts for the baseline. We conclude that pairing an object detector with a grasp prediction network increases the efficiency of grasping for robotic manipulation.
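The abstract describes a two-stage pipeline: an object detector first localizes objects in the RGB-D image, and a grasp network then predicts a grasp for each detection. The sketch below illustrates only that data flow; it is a minimal, assumption-laden PyTorch example in which GraspHead, the stub detector interface, and the planar grasp parameterization (x, y, angle, width) are hypothetical stand-ins for the thesis's actual state-of-the-art CNNs.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraspHead(nn.Module):
        """Toy grasp predictor: maps an RGB-D crop (4 channels) to a
        planar grasp (x, y, angle, width). Hypothetical architecture."""
        def __init__(self, in_channels=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, 4)  # (x, y, angle, width)

        def forward(self, crop):
            return self.fc(self.features(crop).flatten(1))

    def two_stage_grasp(rgbd, detector, grasp_head, crop_size=64):
        """Stage 1: detect object boxes; stage 2: predict one grasp per box.
        `detector` is assumed to return pixel boxes (x1, y1, x2, y2)."""
        grasps = []
        for x1, y1, x2, y2 in detector(rgbd):
            crop = rgbd[:, :, y1:y2, x1:x2]                  # cut out the object
            crop = F.interpolate(crop, size=(crop_size, crop_size))
            grasps.append(grasp_head(crop))                  # grasp for this object
        return grasps

    # Toy usage: a random 4-channel (RGB + depth) image and a stub detector
    # that "finds" one object; real boxes would come from the stage-1 network.
    rgbd = torch.randn(1, 4, 480, 640)
    detector = lambda img: [(100, 120, 220, 260)]
    print(two_stage_grasp(rgbd, detector, GraspHead()))

Keeping the detector and the grasp head as separate modules mirrors the two-stage design: the detector can be trained on synthetic data alone, as the abstract reports, while the grasp head is reused across all detected objects.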
Keywords/Search Tags:RGB-D, visual grasp detection, deep learning, robotic manipulation, simulation