
Research On Methods Of Learning And Controlling For Mobile Robot In Unknown Environment

Posted on: 2015-05-01
Degree: Master
Type: Thesis
Country: China
Candidate: X L Tong
Full Text: PDF
GTID: 2298330467954946
Subject: Control theory and control engineering
Abstract/Summary:
The mobile robot is an important research direction in the field of robotics, combining artificial intelligence, information detection, information processing, intelligent control, and other technologies. In practical applications, robots often need to work in unknown environments, so studying learning and control methods for mobile robots in unknown environments has great practical significance. Since a mobile robot's self-learning ability is the key to working successfully in an unknown environment, this paper studies learning and control methods for mobile robots in unknown environments. The main work and results are as follows:

1. To address the problems of the traditional artificial potential field method, an improved artificial potential field method is proposed, with targeted measures that solve the goal-unreachable problem and the local-minima problem. Based on this method, the task of a mobile robot reaching its goal in an unknown three-dimensional simulation environment is accomplished.

2. General reinforcement learning algorithms apply only when the environmental state space is relatively small and the agent's action choices are relatively simple; they perform poorly in continuous or high-dimensional state spaces. An optimized algorithm with a Dyna structure based on a univector field and heuristic planning is proposed, introducing both the univector field and heuristic planning into the Dyna framework simultaneously to greatly reduce blind search of the state space.

3. To address the "curse of dimensionality" caused by a continuous state space, a new way of defining the environmental state space is proposed to discretize the continuous state space.
Meanwhile, to address the low learning efficiency of reinforcement learning, a heuristic reward is introduced into the reward function, reducing blind search and making action selection more purposeful. For practical application scenarios, a Q-learning algorithm for robot navigation is proposed based on this new definition of the unknown environment's state and the heuristic reward; it shows good adaptability and generalization ability in unknown environments and achieves good results when applied in an unknown dynamic environment.

4. Many reinforcement learning algorithms are implemented only on simulation platforms such as Matlab, which is far removed from practical application and limits the portability of the algorithms. The experiments in this study are run on the Simbad simulation platform, with the algorithms programmed in Java. A complex three-dimensional environment is built on the Simbad platform, in which the robot obtains environmental information through a variety of sensors; this is of greater reference value for actual applications in real environments.
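To make the improved artificial potential field idea concrete, the sketch below computes one resultant force in 2-D. The thesis does not give its formulas here, so the particular remedy shown for the goal-unreachable problem (scaling the repulsive force by the distance to the goal so repulsion vanishes at the goal) is one common fix, not necessarily the thesis's; all class names, gains, and the single-obstacle setup are illustrative.

```java
// Minimal 2-D sketch of one force step in an improved artificial potential
// field. Gains kAtt/kRep and influence radius d0 are illustrative.
public class PotentialField2D {
    // Resultant force on the robot at position p, given goal and one obstacle.
    static double[] force(double[] p, double[] goal, double[] obs,
                          double kAtt, double kRep, double d0) {
        double dxg = goal[0] - p[0], dyg = goal[1] - p[1];
        double dGoal = Math.hypot(dxg, dyg);
        // Attractive force: negative gradient of 0.5 * kAtt * dGoal^2
        double fx = kAtt * dxg, fy = kAtt * dyg;
        double dxo = p[0] - obs[0], dyo = p[1] - obs[1];
        double dObs = Math.hypot(dxo, dyo);
        if (dObs < d0 && dObs > 1e-9) {
            // Standard repulsive gradient, multiplied by dGoal so the
            // repulsion fades out near the goal instead of pushing the
            // robot away from it (the goal-unreachable problem)
            double mag = kRep * (1.0 / dObs - 1.0 / d0) / (dObs * dObs) * dGoal;
            fx += mag * dxo / dObs;
            fy += mag * dyo / dObs;
        }
        return new double[]{fx, fy};
    }
}
```

Because the repulsive term carries the factor `dGoal`, the net force at the goal is exactly zero even when an obstacle sits inside the influence radius, so the robot can come to rest at the target.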
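The Dyna backbone used in contribution 2 can be sketched as follows: after each real transition the agent updates a learned model and replays several simulated one-step updates from it. This toy version runs on a 1-D corridor of discrete cells rather than the thesis's 3-D environment, and it omits the univector-field and heuristic-planning refinements; it shows only the generic Dyna-Q structure, with all names and parameters illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy Dyna-Q on a 1-D corridor: cells 0..N-1, goal at N-1,
// actions 0 = left, 1 = right. Deterministic dynamics.
public class DynaQ {
    static final int N = 10;
    static final double ALPHA = 0.5, GAMMA = 0.9, EPS = 0.2;
    final double[][] q = new double[N][2];
    final Map<Integer, double[]> model = new HashMap<>(); // key s*2+a -> {s', r}
    final List<Integer> seen = new ArrayList<>();          // visited (s,a) pairs
    final Random rng = new Random(1);

    int step(int s, int a) { return Math.max(0, Math.min(N - 1, s + (a == 1 ? 1 : -1))); }
    double reward(int s2) { return s2 == N - 1 ? 10.0 : -0.1; }
    int greedy(int s) { return q[s][1] >= q[s][0] ? 1 : 0; }

    void update(int s, int a, double r, int s2) {
        q[s][a] += ALPHA * (r + GAMMA * Math.max(q[s2][0], q[s2][1]) - q[s][a]);
    }

    void train(int episodes, int planningSteps) {
        for (int e = 0; e < episodes; e++) {
            int s = 0;
            while (s != N - 1) {
                int a = rng.nextDouble() < EPS ? rng.nextInt(2) : greedy(s);
                int s2 = step(s, a);
                double r = reward(s2);
                update(s, a, r, s2);                     // direct RL update
                int key = s * 2 + a;
                if (!model.containsKey(key)) seen.add(key);
                model.put(key, new double[]{s2, r});     // learn the model
                for (int k = 0; k < planningSteps; k++) { // planning: replay
                    int pk = seen.get(rng.nextInt(seen.size()));
                    double[] out = model.get(pk);
                    update(pk / 2, pk % 2, out[1], (int) out[0]);
                }
                s = s2;
            }
        }
    }
}
```

The planning loop is where Dyna saves real experience: each real step is amplified into `planningSteps` model-based updates, which is what makes the value estimates spread through the state space without further blind exploration.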
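The heuristic-reward idea of contribution 3 can likewise be sketched with tabular Q-learning on a discretized state space. The thesis's actual state definition and reward function are not given in this abstract, so the shaping term below (a bonus proportional to the reduction in distance to the goal) and the 1-D corridor environment are assumptions chosen only to illustrate how a heuristic reward makes action selection more purposeful.

```java
import java.util.Random;

// Tabular Q-learning with an illustrative heuristic shaping reward on a
// 1-D corridor: cells 0..N-1, goal at N-1, actions 0 = left, 1 = right.
public class HeuristicQLearning {
    static final int N = 10;
    final double[][] q = new double[N][2];
    final Random rng = new Random(0);

    int step(int s, int a) { return Math.max(0, Math.min(N - 1, s + (a == 1 ? 1 : -1))); }

    double reward(int s, int s2) {
        double base = (s2 == N - 1) ? 10.0 : -0.1; // goal bonus, per-step cost
        double heuristic = 0.5 * (s2 - s);          // bonus for progress toward goal
        return base + heuristic;
    }

    int greedy(int s) { return q[s][1] >= q[s][0] ? 1 : 0; }

    void train(int episodes, double alpha, double gamma, double eps) {
        for (int e = 0; e < episodes; e++) {
            int s = 0;
            while (s != N - 1) {
                int a = rng.nextDouble() < eps ? rng.nextInt(2) : greedy(s);
                int s2 = step(s, a);
                double target = reward(s, s2) + gamma * Math.max(q[s2][0], q[s2][1]);
                q[s][a] += alpha * (target - q[s][a]); // standard Q-update
                s = s2;
            }
        }
    }
}
```

With the heuristic term, an action that moves toward the goal is rewarded immediately rather than only when the goal reward finally propagates back, so early episodes already prefer purposeful moves over blind search.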
Keywords/Search Tags: unknown environment, learning and controlling, artificial potential field, reinforcement learning, Simbad platform