
Learning To Return Table Tennis Ball For Robots

Posted on: 2020-03-12  Degree: Master  Type: Thesis
Country: China  Candidate: L S Jin  Full Text: PDF
GTID: 2428330572969955  Subject: Control Engineering
Abstract/Summary:
Nowadays, robots play an increasingly important role in our daily life. Making real-time motion decisions while interacting with the environment is one of the main challenges for a robot motion agent, and it is also of great value in sports, industry, and aeronautics. In this thesis, we focus on how to build a robot motion agent and propose a structure that combines a deep reinforcement learning framework with the idea of transfer learning. The structure is then applied to a robotic table tennis system to learn how to return a ball to any desired target point on the table. With this model, the robot's action can be generated as soon as the state of the flying, spinning ball at the hitting plane has been calculated. The main contributions of this thesis are as follows:

1. The deep reinforcement learning method in the framework of Deep Deterministic Policy Gradient (DDPG) is applied to build the robot motion agent. Without relying on any prior knowledge, this thesis constructs a DDPG network and trains it in a simulation environment we build. Taking the predicted state of the incoming ball at the hitting plane as input, the agent generates an action. The experimental results show that this action can return the incoming ball to the desired target point with high accuracy. Moreover, the network takes about 0.47772 ms to generate an action, which meets the real-time requirement.

2. Based on the combination of DDPG and the Progressive Neural Network (PNN), a learning framework named PNN-DDPG is proposed to accelerate the training process. This thesis constructs the PNN-DDPG network and trains it in the simulation environment. Instead of learning from scratch, the network transfers the learned knowledge from the source model, which was trained for a single target. The results show that PNN-DDPG converges much faster and returns the incoming ball more accurately than the plain DDPG network. In addition, the thesis incorporates trajectory planning and collision detection into the training process, which makes the model's output actions more physically reasonable.

3. Based on an improvement of the Progressive Neural Network, a new transfer learning method that can learn from multiple source models is proposed. With this method, the thesis successfully trains the robot motion agent in the simulation environment, transferring knowledge from two different source models. The experimental results show that the new method, which exploits the advantages of multiple models, achieves a faster convergence rate. It takes about 6.1782 ms to calculate an action, which also meets the real-time requirement.
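To illustrate the core mechanism the abstract describes, the following is a minimal sketch, in NumPy, of a PNN-style actor: a frozen column (the single-target source model) feeds its hidden features through a trainable lateral connection into a new column trained for the multi-target task. The state and action dimensions, layer sizes, and variable names here are illustrative assumptions, not the architecture actually used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hid, n_out):
    """Small two-layer MLP parameters (illustrative initialization)."""
    return (rng.normal(0, 0.1, (n_in, n_hid)), np.zeros(n_hid),
            rng.normal(0, 0.1, (n_hid, n_out)), np.zeros(n_out))

# Assumed ball state at the hitting plane: position (3) + velocity (3) + spin (3)
# Assumed action: racket pose (3) + racket velocity (3), each bounded by tanh
STATE_DIM, HID, ACT_DIM = 9, 64, 6

# Column 1: frozen actor from the source task (single target point)
frozen = init_mlp(STATE_DIM, HID, ACT_DIM)

# Column 2: new actor for the multi-target task, plus a trainable
# lateral adapter U that injects the frozen column's hidden features
new = init_mlp(STATE_DIM, HID, ACT_DIM)
U = rng.normal(0, 0.1, (HID, ACT_DIM))

def pnn_actor(state):
    """Map a predicted ball state to a racket action, reusing the
    frozen column's features through the lateral connection."""
    W1f, b1f, _, _ = frozen
    h_frozen = np.tanh(state @ W1f + b1f)     # frozen features (not trained)
    W1, b1, W2, b2 = new
    h_new = np.tanh(state @ W1 + b1)          # new column's own features
    # Output combines the new column with the lateral term from column 1.
    return np.tanh(h_new @ W2 + b2 + h_frozen @ U)

ball_state = rng.normal(size=STATE_DIM)
action = pnn_actor(ball_state)
print(action.shape)                 # (6,)
print(bool(np.all(np.abs(action) <= 1.0)))  # True: tanh-bounded action
```

During training only the new column and the lateral adapter would be updated (e.g. by the DDPG actor gradient), while the frozen column keeps the knowledge learned on the source task; extending the idea to two source models, as in contribution 3, amounts to adding a second frozen column with its own lateral adapter.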
Keywords/Search Tags:Robotic Table Tennis System, Motion Agent, Deep Reinforcement Learning, Transfer Learning, Deep Deterministic Policy Gradient, Progressive Neural Network