Approach and landing are the flight stages with the highest accident rate in the whole flight cycle. They require the airborne navigation system not only to provide accurate motion estimation for the aircraft, but also to present a clear and accurate forward view for the pilot. In a degraded visual environment (DVE), an enhanced synthetic vision system (ESVS) gives the pilot an equivalent visual operation (EVO) capability, which can effectively prevent controlled flight into terrain (CFIT) accidents. However, existing enhanced synthetic vision technologies rely on high-precision pose parameters provided by inertial/satellite integrated navigation. When satellite navigation is unavailable or its accuracy is insufficient, the enhanced image and the synthetic vision become mismatched, which can mislead the pilot. This thesis studies methods for improving approach and landing navigation accuracy in the context of ESVS development, solving the problems of real-time accurate runway detection, autonomous accurate motion estimation, and accurate registration between real and synthetic images during aircraft approach and landing under low-visibility, satellite-denied conditions. The main innovative contributions of this thesis are as follows:

1. This thesis studies methods for detecting and extracting runway contour features from forward-looking infrared (FLIR) images. To address the inability of existing runway detection algorithms to detect and extract runway features in real time, robustly, and accurately from FLIR images with a large field of view and low resolution, this thesis proposes a runway detection method assisted by airborne navigation sensors. With known airport geographic information, a vision projection model from the world coordinate frame to the pixel coordinate frame is established and driven by airborne navigation data to estimate the runway region of interest (ROI). Line segments are then extracted from the runway ROI by the EDLines detector and fitted into the left and right edges of the runway. To address the inaccurate estimation of the runway ROI caused by errors in the airborne navigation parameters, an improved runway detection method based on a deep convolutional neural network is proposed. The runway ROI is detected accurately and quickly by the one-stage object detection algorithm YOLOv3, and line segments are extracted accordingly. A weight is computed for each line segment from its length and width; discrete points are then sampled from each segment according to its weight and fitted into the runway contour features. Compared with state-of-the-art algorithms, the improved runway detection method exhibits higher detection accuracy, faster detection speed, and stronger robustness, overcoming the poor detection performance caused by the low resolution, sparse texture, and non-uniformity of FLIR images and by interference from visually similar features.

2. This thesis proposes a visual-inertial integrated navigation method based on the homography matrix. The homography matrix is constructed from the real runway features detected in the FLIR image and the synthetic runway features generated by vision projection. Vision measurement equations based on the homography matrix and system propagation equations based on the inertial error transfer model are established. Within the framework of the square-root unscented Kalman filter (SR-UKF), a visual-inertial integrated navigation model is designed. It addresses the insufficient navigation accuracy, low data update rate, poor robustness, and dependence on artificial landmarks that arise when existing vision navigation methods are applied to the aircraft approach stage (flight height descending from 200 to 60 feet). However, it is difficult to extract the runway features accurately when the aircraft is far from the runway, and homography matrix errors then degrade the integrated navigation accuracy. To address this issue, this thesis proposes a visual-inertial integrated navigation method based on improved sparse runway features. The coordinates of the triangle vertices enclosed by the left, right, and front edge lines of the runway in the FLIR image are used as the vision measurement information, the vertex coordinates generated by the vision projection model are used as the vision prediction information, and the vision measurement equations are simplified accordingly. Compared with typical methods, the improved integrated navigation method achieves higher accuracy and robustness and can meet the requirements of precision approach in low-visibility weather without satellite navigation assistance.

3. This thesis proposes a visual-inertial integrated navigation method based on direct sparse odometry (DSO). The DSO algorithm estimates the camera pose from the FLIR image sequence. Combined with the calibrated relative pose from the inertial measurement unit to the camera, a prediction of the camera pose is derived, and vision measurement equations based on the camera pose are established. The strapdown inertial error transfer equations serve as the system propagation process. A visual-inertial integrated navigation model based on the SR-UKF framework is thus constructed, which addresses the incomplete runway contour in FLIR images during the final landing stage (flight height descending from 60 to 0 feet) and the lack of ground cooperative beacons. Experimental results show that the proposed method has high navigation accuracy and robustness and can meet the requirements of precision landing under low visibility without satellite navigation.

4. Given that most domestic airports lack ground-based augmentation of satellite navigation, the existing ESVS cannot achieve accurate registration between synthetic and real images. This thesis integrates visual-inertial navigation into the existing ESVS and proposes an ESVS framework suited to domestic airport conditions, together with an accurate registration method based on visual-inertial fusion. The visual-inertial integrated navigation method based on improved runway features accurately estimates the camera pose, which drives the 3D digital map engine to generate accurate synthetic vision; the FLIR image and the synthetic vision are then superimposed for display. The minimum vision deviation recommended by the Radio Technical Commission for Aeronautics (RTCA) standard DO-315B is transformed into the maximum allowable deviation of the image center pixel to evaluate the registration performance quantitatively. Compared with state-of-the-art work, the proposed method achieves higher registration accuracy and accurate registration between real and synthetic images when ground-based satellite navigation augmentation is unavailable at domestic airports.
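The vision projection model in contribution 1 maps known runway geometry from the world coordinate frame into the pixel coordinate frame using the navigation-supplied pose. A minimal pinhole sketch of that projection chain follows; the intrinsic matrix, pose, and runway-corner coordinates are illustrative placeholders, not values from the thesis.

```python
import numpy as np

def project_world_to_pixel(p_world, R_wc, t_wc, K):
    """Project a 3-D world point into pixel coordinates.

    R_wc, t_wc: rotation and translation taking world-frame points into
    the camera frame (in the thesis these come from the airborne
    navigation estimate). K: 3x3 camera intrinsic matrix.
    """
    p_cam = R_wc @ p_world + t_wc   # world frame -> camera frame
    if p_cam[2] <= 0:
        raise ValueError("point is behind the camera")
    uvw = K @ p_cam                 # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]         # perspective division

# Hypothetical intrinsics: 1000 px focal length, 640x512 FLIR sensor.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 256.0],
              [   0.0,    0.0,   1.0]])

# Camera at the world origin, optical axis along the world z-axis.
R = np.eye(3)
t = np.zeros(3)

# A runway corner 5 m right, 2 m below, 100 m ahead of the camera.
px = project_world_to_pixel(np.array([5.0, 2.0, 100.0]), R, t, K)
print(px)  # -> [370. 276.]
```

Projecting the surveyed runway corners this way, with the pose taken from the navigation solution, yields the predicted ROI in which the detector then searches.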
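Contribution 2 builds a homography between the detected and the synthetically projected runway features. A standard way to estimate a homography from four or more point correspondences is the direct linear transform (DLT); the sketch below is that generic algorithm, not the thesis's specific measurement construction, and the point sets are made up for a self-check.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H with dst ~ H @ src (homogeneous) via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    The solution is the right singular vector of the stacked
    constraint matrix with the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the projective scale

# Self-check: recover a known pure scaling (factor 2) from the four
# corner correspondences of a unit square.
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = 2.0 * src
H = estimate_homography(src, dst)
```

In the thesis's setting the source points would be the synthetic runway features and the destination points the features detected in the FLIR image, with the resulting homography feeding the SR-UKF measurement equations.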
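The registration metric in contribution 4 converts an angular vision-deviation bound into a pixel bound at the image center. Under a pinhole assumption, an angular deviation theta corresponds to roughly f * tan(theta) pixels for a focal length f expressed in pixels. The numbers below are illustrative only, not the thesis's camera parameters or the DO-315B figure.

```python
import math

def max_center_pixel_deviation(focal_px: float, max_angle_deg: float) -> float:
    """Pixel displacement at the image center corresponding to an
    angular deviation, under the pinhole model: d = f * tan(theta)."""
    return focal_px * math.tan(math.radians(max_angle_deg))

# Illustrative values: 1000 px focal length, 0.5 deg allowed deviation.
print(round(max_center_pixel_deviation(1000.0, 0.5), 2))  # -> 8.73
```

Comparing the measured center-pixel offset between the FLIR image and the synthetic vision against this threshold gives a quantitative pass/fail registration test.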