
Research on UAV Visual-Inertial Fusion and Life Search Algorithm

Posted on: 2024-09-07    Degree: Master    Type: Thesis
Country: China    Candidate: W K Ma    Full Text: PDF
GTID: 2542306917470304    Subject: Mechanical engineering
Abstract/Summary:
The frequent occurrence of natural disasters and man-made distress events around the world poses great challenges to search and rescue work. A fast and accurate method of searching for trapped people can minimize losses. Traditional human detection methods are inefficient, cover a small area, and carry safety risks. With continuous technological progress and falling costs, unmanned aerial vehicles (UAVs) have been widely adopted in the civil field, and the combination of UAVs with life detection sensors has rapidly become a research hotspot owing to its high efficiency, strong real-time performance, high safety, and low cost. However, during the life search process UAVs still face weak positioning signals in complex environments and the limitations of carrying a single life detection sensor. To address these problems, this study focuses on the UAV visual positioning algorithm and the search strategy. The main contents are as follows:

(1) Determining the UAV life search scheme. To address inaccurate positioning caused by weak GPS signals in complex environments, a positioning algorithm based on visual-inertial fusion is studied to complete pose estimation accurately. In addition, to address the instability of a single sensor for life search in complex scenes such as the wild and post-disaster areas, a recognition method combining infrared imager information and audio sensor information is studied to complete life search accurately.

(2) Designing an improved visual odometry front end with line feature extraction. The Shi-Tomasi method is used to extract point features, which suppresses pixel noise, and the feature points extracted from adjacent frames are tracked and matched with the KLT optical flow method. The LSD line feature extraction algorithm is improved by eliminating short line segments and merging similar segments, and LBD descriptors are used to track and match the resulting line features. Reprojection error models for point and line features are then constructed for back-end optimization, and a point-line feature extraction experiment is designed for verification and analysis. The results show that the number of line feature matches increases in three different scenarios, which ensures the accuracy of tracking and matching.

(3) Researching a visual-inertial tightly coupled positioning framework. The point-line feature errors, IMU residuals, and prior information are combined to construct the objective function to be optimized, and the sliding window method and a marginalization strategy are introduced for nonlinear optimization. To align the timestamps of the IMU and camera measurements, the IMU measurements are pre-integrated, and the initial pose is estimated by joint initialization.

(4) Researching a life search algorithm based on multi-sensor fusion. The ResNeXt neural network is made lightweight to extract deep features from the data and improve the running speed of the network. A one-dimensional ResNeXt network extracts deep features from mel-spectrogram coefficients, and a two-dimensional ResNeXt network extracts deep features from infrared images. The high-dimensional features are fused by discriminant correlation analysis, which makes the correlation and class separation of the fused features more pronounced. The fused features are then input into a support vector machine classifier for the final decision, and audio and image multimodal datasets are constructed for experimental verification.
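The point-line front end described in (2) can be illustrated with a minimal sketch. This is not the thesis implementation: it assumes a recent OpenCV build that ships the LSD detector, uses an illustrative length threshold for discarding short segments, and leaves out the segment-merging and LBD-matching steps.

```python
import cv2
import numpy as np

def track_point_features(prev_gray, curr_gray, prev_pts=None):
    """Shi-Tomasi corners tracked between adjacent frames with KLT optical flow."""
    if prev_pts is None or len(prev_pts) < 50:
        prev_pts = cv2.goodFeaturesToTrack(
            prev_gray, maxCorners=150, qualityLevel=0.01, minDistance=20)
    if prev_pts is None:
        return np.empty((0, 1, 2), np.float32), np.empty((0, 1, 2), np.float32)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

def detect_line_features(gray, min_length=30.0):
    """LSD line segments with short segments discarded; LBD descriptors (from the
    opencv-contrib line_descriptor module) would then be computed for matching."""
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    if lines is None:
        return np.empty((0, 4), np.float32)
    lines = lines.reshape(-1, 4)
    lengths = np.hypot(lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1])
    return lines[lengths >= min_length]
```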
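The reprojection error models mentioned in (2) and (3) follow widely used residual definitions for points and line segments; the sketch below is a plain NumPy restatement of those common forms (pinhole camera, observed image line given as ax + by + c = 0 with a² + b² = 1), not the thesis back-end code.

```python
import numpy as np

def point_reprojection_error(K, R, t, X_w, uv_obs):
    """Residual between an observed pixel and the projection of a 3-D point."""
    X_c = R @ X_w + t                   # world -> camera frame
    uv_proj = (K @ X_c)[:2] / X_c[2]    # pinhole projection
    return uv_obs - uv_proj

def line_reprojection_error(K, R, t, P_w, Q_w, line_obs):
    """Signed distances of the projected 3-D segment endpoints to the
    observed (normalized) image line."""
    def project(X_w):
        X_c = R @ X_w + t
        return (K @ X_c)[:2] / X_c[2]
    a, b, c = line_obs
    p, q = project(P_w), project(Q_w)
    return np.array([a * p[0] + b * p[1] + c,
                     a * q[0] + b * q[1] + c])
```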
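The IMU pre-integration step in (3) can likewise be sketched in simplified form. The sketch below only accumulates the relative rotation, velocity, and position between two camera frames; it assumes biases have been removed and omits gravity handling, bias Jacobians, and covariance propagation, all of which a real tightly coupled back end maintains.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(imu_samples, dt):
    """Accumulate relative rotation, velocity and position between two
    camera frames from (gyro, accel) samples taken at interval dt."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for gyro, accel in imu_samples:
        dp += dv * dt + 0.5 * (dR @ accel) * dt ** 2
        dv += (dR @ accel) * dt
        dR = dR @ so3_exp(gyro * dt)
    return dR, dv, dp
```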
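The fusion stage in (4) relies on discriminant correlation analysis, which has no standard library implementation; the sketch below therefore substitutes plain feature concatenation for the DCA step and uses mel-spectrogram statistics in place of the 1-D ResNeXt embedding, purely to show how the two modalities feed the SVM decision stage. It assumes librosa and scikit-learn are available, and all function names are illustrative.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def audio_embedding(wav_path, sr=16000, n_mels=64):
    """Mel-spectrogram statistics as a stand-in for the 1-D ResNeXt features."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

def fuse_and_classify(audio_feats, infrared_feats, labels):
    """Fuse the two modalities (concatenation here; the thesis uses DCA)
    and train the SVM decision stage."""
    fused = np.hstack([audio_feats, infrared_feats])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(fused, labels)
    return clf
```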
The results show that this method is superior to other methods in feature extraction and feature fusion, and the accuracy of multi-sensor fusion recognition reaches 98.7%. This demonstrates that the method can effectively improve the accuracy of human detection in special scenes and that the detection performance of multi-sensor fusion is better than that of a single sensor.

(5) Building the system platform and conducting experimental verification. A UAV experimental platform for life search is built and a ROS simulation environment is configured. A MYNT EYE S1030-120 binocular camera with an embedded IMU module is used for positioning, and the extrinsic parameters between the camera and the IMU are obtained by joint calibration. The accuracy of the visual-inertial fusion algorithm is verified on the EuRoC dataset, and the effectiveness of the life search algorithm is verified on the self-built multimodal dataset. The results show that the positioning accuracy of the VIO algorithm designed in this study improves across the tested environments, with the RMSE of the absolute pose reduced by more than 0.9%, and the accuracy of life search is improved. The research results solve the positioning and detection problems of UAV life search and help complete intelligent life search tasks in wild and post-disaster scenes more efficiently and accurately.
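For context on the reported RMSE figures, the absolute trajectory error used in EuRoC-style evaluations is commonly computed as below; the sketch assumes the estimated and ground-truth trajectories are already time-associated and aligned, and is not the thesis evaluation code.

```python
import numpy as np

def ate_rmse(est_positions, gt_positions):
    """Root-mean-square absolute trajectory error between an estimated
    trajectory and ground truth (N x 3 arrays, time-associated and aligned)."""
    err = est_positions - gt_positions
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))

# A relative improvement such as the one quoted in the abstract would be
# (rmse_baseline - rmse_ours) / rmse_baseline * 100, expressed in percent.
```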
Keywords/Search Tags:unmanned aerial vehicle, neural network, visual-inertial odometry, life search, multi-sensor fusion