
Beetle Antenna Search Based Ship Motion Modeling And Collision Avoidance Methods

Posted on: 2021-10-19    Degree: Doctor    Type: Dissertation
Country: China    Candidate: S Xie    Full Text: PDF
GTID: 1482306497464714    Subject: Traffic and Transportation Engineering
Abstract/Summary:
In recent years, intelligent ships have become a research hotspot in the maritime field. Accurate ship motion modelling and effective collision avoidance are essential for navigation safety. During a voyage, the (hyper-parameter) optimization problems in ship motion modelling and collision avoidance become more difficult due to uncertain disturbances, non-uniform sampling, maneuvering hysteresis, and rule and perception constraints, which may result in poor collision avoidance performance. To deal with these problems, a novel beetle antenna search algorithm (BAS) and its swarm variant (beetle swarm antenna search, BSAS) are improved and combined with existing methods, i.e., model predictive control (MPC), the extended state observer (ESO), the least squares support vector machine (LSSVM), and reinforcement learning, to study ship motion modeling and collision avoidance. The main work is as follows:

1) To provide an algorithmic basis for ship motion modelling and collision avoidance, the ABAS/ABSAS (antenna performance-based BAS/BSAS) algorithms are proposed to improve the optimization performance of the original BAS/BSAS under known constraints.

2) To realize accurate ship motion modeling under different noises and non-uniform sampling disturbances, an identification method based on ABEL (ABSAS-ESO-LSSVM) is proposed, which improves the accuracy and adaptability of the original LSSVM. In the proposed method, an ESO and an LSSVM are combined to realize continuous system identification, and the ABSAS algorithm is then used to fine-tune the ESO bandwidths adaptively. The effectiveness of the proposed ABEL method is verified on simulation data and model-ship experiment data.

3) To realize effective ship collision avoidance in open water under comprehensive consideration of the collision risk index, maneuvering hysteresis, and the COLREGs, a collision avoidance method based on Q-ABSAS (Q-learning-ABSAS) optimization is proposed. In the proposed method, a rolling optimization strategy based on risk prediction is proposed, in which the optimization problem is solved by a small-population ABAS algorithm with Q-learning adaptation. Besides, the decision results of Q-ABSAS can be approximated by an inverse model in multi-ship encounters to reduce the time cost.

4) To realize stable and effective ship collision avoidance under limited perception, a decision method based on ABAS-DDPG (deep deterministic policy gradient) reinforcement learning is proposed, which improves the exploration and learning performance of the original DDPG. In the proposed method, DDPG reinforcement learning and meta-learning are exploited to obtain an optimal collision avoidance strategy in unknown environments, and the ABAS is then used to adaptively optimize the noise injection process in DDPG for effective exploration. The effectiveness of the proposed ABAS-DDPG method is verified in collision avoidance simulations and model-ship semi-simulations.

5) To realize approximately optimal path optimization in real time without a model basis, a fast path optimization method based on parallel D-A3C-ABAS (distributed A3C-ABAS) is proposed, which balances real-time performance and approximate optimality within a certain range. In the proposed method, a rolling path optimization strategy is proposed with reference to the idea of MPC, which obtains approximately optimal paths within the prediction horizon in real time. Then, a parallel distributed D-ABAS (distributed ABAS), optimized by A3C reinforcement learning, is used to solve the rolling optimization problem efficiently and adaptively. The effectiveness of the proposed D-A3C-ABAS method is verified in path planning simulations with comparisons against the BSAS, PSO, and APF methods.
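Note: the abstract itself gives no formulas or code for these methods. As a point of reference for the BAS heuristic that the proposed ABAS/ABSAS variants build on, a minimal Python sketch of the standard two-antenna update (random sensing direction, step toward the better antenna, shrinking antenna length and step size) is given below; the function name, parameter values, and test objective are illustrative assumptions rather than the dissertation's implementation.

import numpy as np

def beetle_antennae_search(f, x0, n_iter=200, d0=1.0, step0=1.0,
                           eta_d=0.95, eta_step=0.95):
    # Minimize f with the basic beetle antennae search (BAS) heuristic:
    # sense the objective at two antenna points along a random direction,
    # step toward the side with the smaller value, then shrink the
    # antenna length and the step size.
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    d, step = d0, step0
    for _ in range(n_iter):
        b = np.random.randn(x.size)
        b /= np.linalg.norm(b) + 1e-12           # random unit direction
        f_right = f(x + d * b)                   # right antenna sample
        f_left = f(x - d * b)                    # left antenna sample
        x = x - step * b * np.sign(f_right - f_left)
        fx = f(x)
        if fx < best_f:                          # keep the best point seen
            best_x, best_f = x.copy(), fx
        d, step = eta_d * d, eta_step * step     # shrink sensing range and step
    return best_x, best_f

# Example: minimize a shifted sphere function in 5 dimensions.
best_x, best_f = beetle_antennae_search(lambda z: np.sum((z - 3.0) ** 2),
                                        x0=np.zeros(5))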
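The rolling optimization strategies in items 3) and 5) follow the receding-horizon idea of MPC: re-solve a finite-horizon decision problem at every step and apply only the first decision. A schematic sketch of such a loop, reusing the beetle_antennae_search sketch above as the per-step solver, is shown below; make_cost (which would encode predicted collision risk, COLREGs penalties, and path deviation) and step_env (the ship and traffic dynamics) are hypothetical placeholders, since the abstract does not specify them.

def rolling_collision_avoidance(make_cost, step_env, state, horizon=10, n_steps=50):
    # Receding-horizon loop: at every step rebuild the cost from the current
    # perception, optimize a course-alteration sequence over the horizon with
    # BAS, execute only the first element, and shift the plan as a warm start.
    plan = np.zeros(horizon)                     # initial plan: keep course
    applied = []
    for _ in range(n_steps):
        cost = make_cost(state, horizon)         # placeholder: risk + COLREGs + deviation
        plan, _ = beetle_antennae_search(cost, x0=plan, n_iter=100)
        applied.append(plan[0])                  # apply only the first decision
        state = step_env(state, plan[0])         # placeholder ship/traffic dynamics
        plan = np.r_[plan[1:], plan[-1]]         # shift horizon for warm start
    return applied

Executing only the first decision and re-optimizing at the next step is what lets a receding-horizon scheme react to maneuvering hysteresis and newly perceived targets while keeping each per-step optimization small.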
Keywords/Search Tags:intelligent ship, motion modeling, collision avoidance, beetle antenna search optimization, reinforcement learning