Highly accurate measurement of an aerial vehicle's navigation parameters is an essential foundation for flight safety and mission completion. Compared with traditional aircraft navigation methods, vision navigation passively receives optical information from the environment through a camera, and offers many advantages: simple equipment, good concealment and resistance to artificial interference, small size and weight, low cost and power consumption, abundant measurement information, and high accuracy.

Scene matching navigation is one of the aircraft vision navigation systems that has been applied successfully, but existing scene matching systems provide only the horizontal position of the aircraft and do not make full use of the information contained in the camera's real-time image. Building on the scene matching system, this thesis proposes an altimetry method that matches several points in a single image, and an altimetry method that matches a single point between two consecutive images. In the first method, four points in the real-time image are matched against the reference image, and the altitude is then computed from the horizontal ground coordinates of the matched feature points together with the camera's intrinsic parameters. In the second method, the central point of the real-time image is matched against both the reference image and the next real-time frame, and the altitude is then computed from the camera's intrinsic parameters, the ground displacement, and the pixel displacement between the two consecutive real-time images. The practicability and reliability of these height estimation methods are demonstrated by computer simulation and by experiments on image sequences captured by a camera on board the aircraft.

For scene matching systems that use a 2D reference image map, this thesis proposes a least-squares Levenberg-Marquardt (LM) method and a direct linear transformation (DLT) method for pose estimation.
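Under a nadir-pointing pinhole camera over flat terrain, both altimetry ideas reduce to a ratio of ground distance to image distance, scaled by the focal length. The sketch below illustrates this; the function names and the nadir/flat-terrain simplifications are illustrative assumptions, not the thesis's exact geometric model.

```python
import math

def altitude_from_points(f_pixels, ground_a, ground_b, pixel_a, pixel_b):
    """Single-image altimetry sketch: with a nadir-pointing pinhole camera
    over flat terrain, altitude = f * (ground distance) / (pixel distance)
    for a pair of feature points matched against the reference map."""
    ground_dist = math.dist(ground_a, ground_b)   # metres, from map matching
    pixel_dist = math.dist(pixel_a, pixel_b)      # pixels, in the real-time image
    if pixel_dist == 0:
        raise ValueError("matched image points must be distinct")
    return f_pixels * ground_dist / pixel_dist

def altitude_from_motion(f_pixels, ground_disp, pixel_disp):
    """Two-frame altimetry sketch: the matched centre point moves by
    `ground_disp` metres on the ground (from matching against the map)
    and by `pixel_disp` pixels between two consecutive frames."""
    if pixel_disp == 0:
        raise ValueError("pixel displacement must be non-zero")
    return f_pixels * ground_disp / pixel_disp
```

For example, with a focal length of 1000 pixels, two matched points 50 m apart on the ground and 500 px apart in the image give an altitude of 100 m; the same ratio applies to the frame-to-frame displacement in the second method.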
The first method computes the pose iteratively from the camera's intrinsic parameters and the horizontal ground coordinates of at least five non-coplanar points. The second method computes the pose directly and linearly from the camera's intrinsic parameters and the horizontal ground coordinates of at least four points, under the assumption that all the points are coplanar. For scene matching systems that use a 3D reference image map, matching the real-time image against the reference map yields the 3D coordinates of the feature points. The least-squares LM method can then compute the pose iteratively from at least three points, while the direct linear transformation method computes the pose from at least six non-coplanar points. The practicability of these pose estimation methods is demonstrated by computer simulation and by single-image experiments.

Integrated navigation combining scene matching and inertial navigation is simulated on a computer, and a novel integrated navigation scheme between inertial navigation and vision navigation that requires no reference image map is proposed. The new method tracks several feature points in the real-time image and forms a vision measurement equation on top of an extended inertial navigation state equation that includes several past positions of the aircraft; the integrated navigation is then completed with a Kalman filter. Simulation shows that this map-free vision/inertial integrated navigation corrects velocity errors effectively and suppresses the accumulation of position error remarkably well.
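For the 3D-map case, pose estimation from at least six non-coplanar points can be sketched with the textbook direct linear transformation: stack two linear equations per correspondence, take the null vector of the system via SVD to get the 3x4 projection matrix, and read the camera position off it. This is a generic DLT sketch assuming NumPy is available; the function names are illustrative, not the thesis's formulation.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P from N >= 6 non-coplanar
    3D points (world_pts, Nx3) and their pixel projections (image_pts, Nx2).
    Each correspondence contributes two rows of the homogeneous system A p = 0;
    the solution is the right singular vector with the smallest singular value."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def camera_center(P):
    """Recover the camera position C from P = [M | p4] via M C = -p4.
    C is invariant to the unknown overall scale of P."""
    M, p4 = P[:, :3], P[:, 3]
    return -np.linalg.solve(M, p4)
```

A quick check is to project synthetic non-coplanar points with a known camera, run the DLT on the resulting correspondences, and confirm that the recovered camera centre matches the ground truth.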
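The fusion step can be illustrated with a generic linear Kalman filter predict/update cycle: the inertial model propagates the state, and the vision-derived position corrects it. The state layout, noise values, and function names below are illustrative assumptions; the thesis's actual filter works on an extended inertial state that additionally contains several past aircraft positions.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state estimate x and covariance P through the
    (inertial) transition model F with process noise Q."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a (vision) measurement z, using
    measurement matrix H and measurement noise R."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P        # covariance correction
    return x, P
```

Running this on a one-axis constant-velocity example, with the vision fix observing position only, shows the behaviour the abstract describes: the velocity estimate is corrected even though only position is measured, and the position error stops accumulating.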