
Research On Autonomous Docking Technology Of The Multi-modular Flying Vehicle Based On Visual Guidance

Posted on: 2024-08-02    Degree: Master    Type: Thesis
Country: China    Candidate: L P Hu    Full Text: PDF
GTID: 2542307157475494    Subject: Mechanical engineering (degree)
Abstract/Summary:
The multi-modular flying vehicle is modular in design and can switch flexibly between ground and air travel modes, and it is regarded as an important solution for future three-dimensional urban transportation. Research on the autonomous guided docking technology that its modal switching relies on is still at an early stage, and existing autonomous guided-landing technology for UAVs cannot meet the requirements of modal switching, owing to insufficient guidance accuracy, reliance on a single source of observation data, and insufficient robustness. To address these problems, this thesis proposes a vision-guided autonomous docking technology for multi-modular flying vehicles. The guided docking process is divided into three stages: long range, mid range, and close range. RTK-GPS provides long-range guidance; a visual target detection and localization algorithm measures the relative position between modules at mid and close range; and PID control eliminates the relative deviation between modules, finally completing the modal conversion of the vehicle.

First, in the design of the visual target detection and localization algorithm, the target module is taken as the recognition object in the mid-range guidance stage. A combined YOLO-fDSST algorithm is used: YOLO initializes the fDSST tracker, and the fDSST tracking result is updated selectively and intermittently, which preserves the real-time performance and tracking accuracy of the algorithm while reducing interference with the tracking result caused by YOLO false and missed detections. In the close-range guidance stage, nested ArUco tags arranged on the docking mechanism of the target module provide relative pose information, which adapts to changes in the camera field of view and improves recognition robustness; in addition, an ROI-based ArUco video-stream detection acceleration method is proposed, improving the detection frame rate by 27.7%.

Second, a low-illumination image enhancement solution is proposed to address the degraded performance, or outright failure, of the visual target detection algorithm in low-light environments. The RPCE algorithm, based on Retinex theory, pre-processes the images captured by the camera under low illumination so that the target module and the ArUco tags remain recognizable, ensuring the effectiveness of the visual target detection algorithm in such environments.

Then, to address the noise and discontinuity in the ArUco observation data caused by changes in the flight module's attitude, the limitations of the algorithm itself, and environmental factors such as light and wind during the close-range guidance stage, a multi-sensor data fusion solution is adopted: the ArUco and RTK-GPS observations are fused by an unscented Kalman filter to obtain continuous, stable, and highly accurate relative position data.

Finally, the feasibility of the YOLO-fDSST detection and tracking algorithm, the ArUco detection algorithm, the low-light image enhancement algorithm, and the multi-sensor data fusion algorithm was first verified on the AirSim UAV simulation platform. The algorithms were then ported to the M210 flight platform for experimental verification. Finally, a principle prototype of the flying vehicle was built, and the complete autonomous guidance process was verified on it.
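The close-range PID loop that drives the relative deviation between modules to zero can be sketched as follows. This is a minimal one-axis illustration; the gains, timestep, and toy plant model are placeholders, not values from the thesis.

```python
class PID:
    """Discrete PID controller. Gains and timestep are illustrative
    placeholders, not the thesis's tuned values."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a one-axis relative offset between modules toward zero.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
offset = 0.8  # initial lateral deviation in metres (made-up value)
for _ in range(400):
    cmd = pid.step(-offset)   # error = setpoint (0) minus measured offset
    offset += cmd * pid.dt    # toy first-order plant: velocity command
print(f"residual offset: {abs(offset):.3f} m")
```

In practice one such loop would run per axis (lateral, longitudinal, vertical), fed by the fused relative-position estimate.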
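The selective, intermittent update of fDSST from YOLO described above amounts to a gating rule: the tracker is only re-initialized from a detection on a schedule, and only when the detection roughly agrees with the current track. The sketch below shows one plausible form of that rule; the thresholds and the box format are assumptions, not the thesis's values.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def should_reinit(frame_idx, det_box, trk_box, every=30, iou_gate=0.4):
    """Re-initialize the tracker from the detector only every `every` frames,
    and only when the detection overlaps the current track enough that a
    YOLO false detection cannot hijack it. Thresholds are illustrative."""
    if det_box is None:           # missed detection: keep tracking as-is
        return False
    if frame_idx % every != 0:    # off-schedule frame: keep tracking as-is
        return False
    return iou(det_box, trk_box) >= iou_gate
```

A missed detection simply leaves the tracker running, and an off-track detection is rejected by the IoU gate, which is how the combination suppresses both kinds of YOLO error.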
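The ROI-based ArUco acceleration rests on a simple idea: run the (comparatively expensive) marker detector only on a padded crop around where the marker was found in the previous frame, then map the result back to full-frame coordinates. The sketch below shows only that ROI bookkeeping; the detector itself (e.g. OpenCV's ArUco module run on the crop) and the padding value are assumed details.

```python
def marker_roi(corners, pad, frame_w, frame_h):
    """Expand the bounding box of the previous frame's marker corners by
    `pad` pixels, clamped to the image. Detection then runs on this crop
    instead of the full frame, raising the achievable frame rate."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    x0 = max(int(min(xs)) - pad, 0)
    y0 = max(int(min(ys)) - pad, 0)
    x1 = min(int(max(xs)) + pad, frame_w)
    y1 = min(int(max(ys)) + pad, frame_h)
    return x0, y0, x1, y1

def corners_to_full_frame(corners, x0, y0):
    """Shift corners detected inside the crop back into full-frame pixels."""
    return [(x + x0, y + y0) for x, y in corners]

# Marker seen last frame near the centre of a 1280x720 stream.
roi = marker_roi([(600, 330), (680, 330), (680, 410), (600, 410)],
                 pad=40, frame_w=1280, frame_h=720)
```

When detection fails inside the crop (the marker left the ROI), a real implementation would fall back to a full-frame search on the next frame.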
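The RPCE pre-processing builds on Retinex theory, whose core idea is to recover reflectance by removing a smoothly varying illumination estimate: reflectance ≈ log(I) − log(illumination), with illumination approximated by Gaussian smoothing. The 1-D single-scale sketch below illustrates only that principle; it is a stand-in, not the RPCE algorithm itself, and all parameters are made up.

```python
import math

def single_scale_retinex(signal, sigma=3.0):
    """1-D single-scale Retinex: reflectance = log(I) - log(illumination),
    with illumination estimated by Gaussian smoothing of the input.
    An illustration of the principle only, not the thesis's RPCE."""
    radius = int(3 * sigma)
    kernel = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(signal)
    smoothed = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += k * signal[idx]
        smoothed.append(acc)
    # +1 keeps the logs defined for zero-valued pixels
    return [math.log(s + 1.0) - math.log(m + 1.0) for s, m in zip(signal, smoothed)]

# A dark scene with one slightly brighter marker region (intensities 0..255).
dark = [10.0] * 20 + [40.0] * 5 + [10.0] * 20
reflectance = single_scale_retinex(dark)
```

On a 2-D image the same computation runs per channel with a 2-D Gaussian; the effect is that a marker barely brighter than a dark background stands out clearly in the reflectance output, which is what keeps the detectors working in low light.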
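The sensor-fusion step combines an accurate but intermittent ArUco measurement with a continuous but coarser RTK-GPS measurement. The thesis uses an unscented Kalman filter; since a scalar linear example suffices to show the fusion behaviour, the sketch below substitutes a plain Kalman filter, and the noise variances are invented.

```python
def fuse(z_aruco, z_gps, r_aruco=0.02 ** 2, r_gps=0.5 ** 2, q=1e-4):
    """Scalar Kalman filter fusing ArUco (accurate, intermittent) and
    RTK-GPS (continuous, coarser) range measurements. A linear stand-in
    for the unscented filter used in the thesis; noise values are made up."""
    x, p = z_gps[0], 1.0          # initialise the state from GPS
    estimates = []
    for za, zg in zip(z_aruco, z_gps):
        p += q                    # predict: near-static relative position
        for z, r in ((zg, r_gps), (za, r_aruco)):
            if z is None:         # ArUco dropout / occlusion: skip update
                continue
            k = p / (p + r)       # Kalman gain
            x += k * (z - x)
            p *= 1.0 - k
        estimates.append(x)
    return estimates

# True relative distance 2.0 m; ArUco drops out on some frames.
aruco = [2.01, None, 1.99, 2.02, None, 1.98, 2.00, 2.01]
gps = [2.4, 1.7, 2.3, 1.8, 2.2, 1.9, 2.1, 2.0]
est = fuse(aruco, gps)
```

The low ArUco variance means its measurements dominate when available, while GPS keeps the estimate continuous through dropouts, giving the "continuous, stable, and highly accurate" relative position the abstract describes.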
Keywords/Search Tags:Multi-modular flying vehicle, Visual navigation, Low-light environment, Multi-sensor data fusion