
Design And Implementation Of Environment Based Obstacle Avoidance Path Planning System For Unmanned Vehicle

Posted on: 2022-02-09
Degree: Master
Type: Thesis
Country: China
Candidate: J X Lin
Full Text: PDF
GTID: 2518306524990519
Subject: Master of Engineering
Abstract/Summary:
With the development and popularization of artificial intelligence technology, unmanned vehicles are being applied more and more widely, providing more intelligent means for mountain detection and exploration. However, the mountain environment is complex, the altitude varies greatly, and the weather conditions are harsh, so ensuring that an unmanned vehicle can travel quickly and accurately and complete its tasks is a basic requirement. This thesis analyzes the current research status of path planning and reinforcement learning at home and abroad, introduces the relevant technical principles of environmental modeling, global path planning based on deep reinforcement learning, and local path planning based on the artificial potential field method, and presents the design and implementation of an unmanned vehicle path planning system. The main work is as follows:

1. An environmental modeling method suitable for mountain environments is proposed. Building on grid-based modeling, elevation data sampling points are obtained during grid division: the resolution of the elevation data sets the grid size and each sampling point is the center of a grid cell. The latitude and longitude of each elevation point are encoded with the geohash algorithm, and this encoding is used as the grid code; grid expansion is then applied to describe obstacles in the grid.

2. The DQN algorithm is improved to realize global path planning in a complex mountain environment. First, after grid modeling is completed, the action cost of each grid cell with respect to the action space of the workspace is calculated, and the neural network is trained on these action costs. The action selection strategy of DQN is improved accordingly, with ε-greedy as the action selection principle. After an action is selected, the next state is given by the action space of the grid, the action value function is defined according to the grid cost, and the reward for each action is determined by the action value function: low-cost actions obtain higher rewards and high-cost actions obtain lower rewards. Finally, taking the maximum reward as the objective, gradient descent optimization is performed, each state transition is recorded, and the result is output as the global path.

3. In the local path planning part, to overcome the shortcomings of the traditional artificial potential field method, the method is improved on the basis of the grid action cost proposed in the global path planning, and a distance parameter is introduced: the actual distance between a planned path point and the target point is calculated from latitude and longitude, which reduces the computational load while the algorithm is running. Through distance judgment and calculation, the artificial potential field method is optimized to solve the local extremum problem and the target-unreachable problem…
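The geohash-based grid coding of task 1 can be sketched roughly as follows. The geohash encoder is the standard public algorithm; the `inflate_obstacles` helper is a hypothetical illustration of grid expansion, since the thesis does not state its exact inflation radius:

```python
# Grid coding for a mountain map: each elevation sampling point becomes
# the centre of a grid cell, identified by the geohash of its lat/lon.

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash_encode(lat, lon, precision=8):
    """Encode a latitude/longitude pair as a standard geohash string."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    even = True  # even bits refine longitude, odd bits refine latitude
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            bits.append(1 if lon >= mid else 0)
            if lon >= mid:
                lon_lo = mid
            else:
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            bits.append(1 if lat >= mid else 0)
            if lat >= mid:
                lat_lo = mid
            else:
                lat_hi = mid
        even = not even
    # pack every 5 bits into one base-32 character
    return "".join(
        _BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

def inflate_obstacles(grid, radius=1):
    """Grid expansion (hypothetical helper): mark every cell within
    `radius` of an obstacle (value 1) as occupied, giving the planner
    a safety margin around real obstacles."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for rr in range(max(0, r - radius), min(rows, r + radius + 1)):
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1)):
                        out[rr][cc] = 1
    return out
```

For example, `geohash_encode(42.605, -5.603, 5)` yields the well-known cell `"ezs42"`, and a longer precision gives a finer grid, matching the idea that the elevation-data resolution fixes the grid size.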
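The cost-biased ε-greedy selection and cost-shaped reward of task 2 might look like the sketch below. The inverse-cost exploration weighting and the linear reward shaping are assumptions for illustration (the thesis's exact formulas are not given); in the real system these would feed a full DQN training loop:

```python
import random

def select_action(q_values, action_costs, epsilon, rng=random):
    """ε-greedy action selection biased by grid action cost: with
    probability 1-ε exploit the highest Q-value; otherwise explore,
    sampling actions with weight inversely proportional to their cost
    (assumed weighting: cheaper moves are tried more often)."""
    if rng.random() < epsilon:
        weights = [1.0 / (c + 1e-6) for c in action_costs]
        r = rng.random() * sum(weights)
        acc = 0.0
        for action, w in enumerate(weights):
            acc += w
            if r <= acc:
                return action
        return len(weights) - 1
    return max(range(len(q_values)), key=q_values.__getitem__)

def cost_shaped_reward(cost, max_cost, goal_reached=False):
    """Reward determined by the grid action cost: low-cost actions earn
    higher rewards, expensive actions earn less; reaching the goal adds
    a bonus (the bonus value is an assumption)."""
    r = 1.0 - cost / max_cost  # in [0, 1]: cheap action -> high reward
    return r + 10.0 if goal_reached else r
```

With `epsilon=0` the selection is purely greedy on the Q-values; raising ε lets the agent explore, but still steers exploration away from expensive grid moves.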
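For task 3, the distance parameter and the improved repulsive potential can be sketched as below. The haversine formula is a standard way to compute the actual ground distance from latitude/longitude; scaling the classic repulsive term by the squared distance to the goal is a common improved-APF variant, assumed here because the thesis's exact formula is not shown:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Actual ground distance in metres between two lat/lon points
    (haversine formula, mean Earth radius)."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def repulsive_potential(d_obs, d_goal, eta=1.0, d0=5.0):
    """Improved repulsive potential (assumed variant): the classic term
    is scaled by the squared distance to the goal, so repulsion vanishes
    at the goal and the target-unreachable problem disappears."""
    if d_obs > d0:  # obstacle outside its influence range
        return 0.0
    return 0.5 * eta * (1.0 / d_obs - 1.0 / d0) ** 2 * d_goal ** 2
```

Because the repulsive term carries the factor `d_goal ** 2`, the potential at the goal itself is zero even when an obstacle is nearby, which is the standard remedy for the unreachable-target defect of the traditional method.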
Keywords/Search Tags:mountain environment, environmental modeling, path planning, deep reinforcement learning