Unmanned Surface Vehicles (USVs) are unmanned surface platforms with capabilities such as autonomous navigation, autonomous obstacle avoidance, and autonomous identification and detection. They are increasingly used in the exploration and development of marine resources and in the military field. Path planning is an important manifestation of the intelligence of a USV, and more intelligent path planning greatly improves its autonomy. This paper studies USV path planning methods from four aspects: global path planning, local path planning, obstacle avoidance, and path planning under multi-task constraints. Bayesian Networks (BN) and Reinforcement Learning (RL) are used to improve the intelligence of the USV path planning methods. The main contributions of this thesis are as follows:

A global path planning method for USVs based on a BN-A* algorithm is proposed to address the safety problem that arises when the standard A* algorithm is used for global path planning. A grid-based environment model is constructed from the electronic chart, and the degree of danger of the grid-based environment model is predicted with the BN algorithm. A safety cost is added to the evaluation function of the A* algorithm, and the improved A* algorithm is used to search for a path so as to guarantee the safety of the path. The validity of the proposed algorithm is verified by simulation experiments.

To guarantee the stability of the environment model used for local path planning, a construction method is proposed that accounts for errors in the perception information, together with a new mechanism for updating obstacles that keeps the obstacles in the local environment model stable. To overcome the drawbacks of the traditional Artificial Potential Field (APF) algorithm, an improved APF algorithm is proposed. The repulsive force function is modified to solve the problem that the USV cannot reach the goal point when obstacles lie near it, and the attractive force function is modified to solve the problem that the USV may collide with obstacles when it is far from the goal point. Because the APF algorithm easily falls into local optima, the Simulated Annealing (SA) algorithm is incorporated. The validity of the improved APF algorithm is verified by simulation experiments.

This paper then studies static and dynamic obstacle avoidance methods for USVs. Collision cone theory is used to avoid danger in environments with multiple static obstacles, and dynamic obstacles are avoided in accordance with maritime rules. Simulation experiments verify that the USV can avoid both static and dynamic obstacles.

Finally, this paper presents a path planning algorithm for USVs based on reinforcement learning with multi-task constraints. A Q-learning agent is trained to complete path planning for the USV under these constraints. To avoid the slow convergence of Q-learning under multi-task constraints, a Q-learning algorithm based on a task-decomposition reward function is proposed. In the setting of the 2018 Maritime RobotX Challenge, the feasibility of using reinforcement learning for path planning under multi-task constraints is verified by simulation experiments, and physical experiments confirm that the algorithm meets practical requirements.
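
To make the BN-A* idea concrete, the following Python sketch adds a danger-weighted safety term to the A* evaluation on a grid map. The danger values are assumed to be BN predictions already attached to each grid cell; the weighting form `w_safety * danger`, the octile heuristic, and all parameter names are illustrative assumptions rather than the thesis' exact formulation.

```python
import heapq
import itertools

def a_star_with_safety(grid, danger, start, goal, w_safety=2.0):
    """A* on an 8-connected grid where each step cost is the travel
    distance plus w_safety * danger of the entered cell, so the
    evaluation function f(n) = g(n) + h(n) reflects a safety cost.
    grid[y][x]: 1 = obstacle, 0 = free water; danger[y][x] in [0, 1]."""
    def h(p):
        dx, dy = abs(p[0] - goal[0]), abs(p[1] - goal[1])
        return max(dx, dy) + 0.414 * min(dx, dy)   # octile distance

    tie = itertools.count()                        # heap tie-breaker
    open_heap = [(h(start), next(tie), start)]
    g_cost, came_from, closed = {start: 0.0}, {start: None}, set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                           # rebuild path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y = node
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx, dy) == (0, 0):
                    continue
                if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                    continue
                if grid[ny][nx]:                   # blocked cell
                    continue
                step = (dx * dx + dy * dy) ** 0.5
                ng = g_cost[node] + step + w_safety * danger[ny][nx]
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    came_from[(nx, ny)] = node
                    heapq.heappush(open_heap,
                                   (ng + h((nx, ny)), next(tie), (nx, ny)))
    return None                                    # no path found
```

In this sketch the safety term is folded into the step cost rather than the heuristic, so the heuristic stays admissible and the search remains optimal with respect to the combined distance-plus-danger cost.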
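
The improved APF can be sketched as below, assuming common forms of the two modifications: an attractive force that saturates beyond a distance `d_att_max`, so a far-away goal cannot overpower repulsion, and a repulsive force scaled by a power of the distance to the goal, so obstacles near the goal no longer make it unreachable. A generic simulated-annealing escape from local minima is included; the exact force functions, SA schedule, and parameter names in the thesis may differ.

```python
import numpy as np

def improved_apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
                       d0=20.0, d_att_max=50.0, n=2):
    """Resultant planar APF force at `pos` (illustrative forms only)."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    to_goal = goal - pos
    d_goal = np.linalg.norm(to_goal)

    # Modified attraction: linear near the goal, bounded magnitude far away.
    if d_goal <= d_att_max:
        f_att = k_att * to_goal
    else:
        f_att = k_att * d_att_max * to_goal / d_goal

    # Modified repulsion: classic FIRAS term scaled by (distance to goal)^n,
    # so it vanishes at the goal and the goal stays reachable.
    f_rep = np.zeros(2)
    for obs in obstacles:
        to_usv = pos - np.asarray(obs, float)
        d = np.linalg.norm(to_usv)
        if 0 < d <= d0:
            f_rep += (k_rep * (1.0 / d - 1.0 / d0) / d ** 2
                      * (d_goal ** n) * to_usv / d)
    return f_att + f_rep

def sa_escape(pos, potential, step=2.0, T0=5.0, cooling=0.9,
              iters=30, rng=None):
    """Simulated-annealing style escape from an APF local minimum: random
    perturbations are accepted if they lower the potential, or with
    probability exp(-dU/T) otherwise (a generic SA step, not the thesis'
    exact schedule).  `potential` maps a 2-D position to a scalar."""
    rng = np.random.default_rng() if rng is None else rng
    cur = np.asarray(pos, float)
    cur_u, T = potential(cur), T0
    for _ in range(iters):
        cand = cur + rng.uniform(-step, step, size=2)
        du = potential(cand) - cur_u
        if du < 0 or rng.random() < np.exp(-du / T):
            cur, cur_u = cand, cur_u + du
        T *= cooling
    return cur
```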
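
The collision-cone test used for static obstacle avoidance can be illustrated as follows for a circular obstacle model: the current course is dangerous if the velocity vector lies inside the cone bounded by the two tangents from the USV to the obstacle's safety circle. For a dynamic obstacle the same test is applied to the USV velocity relative to the obstacle. The circular model and parameter names are assumptions for illustration.

```python
import math

def in_collision_cone(usv_pos, usv_vel, obs_pos, obs_radius):
    """Return True if the (relative) velocity points inside the collision
    cone of a circular obstacle, i.e. the current course leads to a hit."""
    rx, ry = obs_pos[0] - usv_pos[0], obs_pos[1] - usv_pos[1]
    dist = math.hypot(rx, ry)
    if dist <= obs_radius:
        return True                       # already inside the safety circle
    if math.hypot(usv_vel[0], usv_vel[1]) == 0.0:
        return False                      # a stationary USV cannot collide
    half_angle = math.asin(obs_radius / dist)        # cone half-angle
    los_angle = math.atan2(ry, rx)                   # line-of-sight bearing
    vel_angle = math.atan2(usv_vel[1], usv_vel[0])   # course over ground
    diff = abs((vel_angle - los_angle + math.pi) % (2 * math.pi) - math.pi)
    return diff < half_angle
```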
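
The task-decomposition reward function can be illustrated with a tabular Q-learning sketch in which each completed sub-task (for example, a waypoint or gate of the 2018 Maritime RobotX Challenge course) contributes its own reward instead of a single sparse reward at the end, which is what accelerates convergence. The environment callback `env_step`, the integer state encoding, and the reward values are hypothetical.

```python
import random
import numpy as np

def task_decomposed_reward(next_state, subgoals, idx, step_penalty=-0.1,
                           sub_reward=10.0, final_reward=100.0):
    """Reward shaped by sub-task completion: a small penalty per step,
    a bonus for each sub-goal reached in order, and a larger bonus for
    the final sub-goal (values here are illustrative assumptions)."""
    r = step_penalty
    if idx < len(subgoals) and next_state == subgoals[idx]:
        r += final_reward if idx == len(subgoals) - 1 else sub_reward
        idx += 1
    return r, idx

def q_learning(env_step, n_states, n_actions, subgoals, start_state=0,
               episodes=500, max_steps=200, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning with the decomposed reward.  `env_step` is an
    assumed environment callback mapping (state, action) -> next_state,
    with states encoded as integers in [0, n_states)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, idx = start_state, 0
        for _ in range(max_steps):
            # epsilon-greedy action selection
            a = random.randrange(n_actions) if random.random() < eps \
                else int(np.argmax(Q[state]))
            next_state = env_step(state, a)
            r, idx = task_decomposed_reward(next_state, subgoals, idx)
            done = idx == len(subgoals)
            target = r if done else r + gamma * float(np.max(Q[next_state]))
            Q[state, a] += alpha * (target - Q[state, a])
            state = next_state
            if done:
                break
    return Q
```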