
Vision aided inertial navigation system augmented with a coded aperture

Posted on: 2012-12-16
Degree: Ph.D
Type: Dissertation
University: Air Force Institute of Technology
Candidate: Morrison, Jamie R
Full Text: PDF
GTID: 1458390008494273
Subject: Engineering
Abstract/Summary:
Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A principal obstacle to indoor navigation is maintaining a robust navigation solution (i.e., air vehicle position and attitude estimates) given inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power-efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information about the direction of vehicle movement, but not its scale. Movement scale information is contained in the depth to the features.

Depth-from-defocus is a classic technique for deriving depth from a single image by analyzing the blur inherent in a scene captured with a narrow depth of field. A challenge for this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. The key to measuring depth with a coded aperture, then, is to correctly match the blur pattern in a region of the scene against a previously determined set of blur patterns for known depths.

As depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature measured with a depth-from-defocus technique increases, the measurement performance decreases.
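The blur-matching step described above can be sketched in a few lines. This is a minimal toy model, not the dissertation's method: the striped "coded" point-spread functions (PSFs), the patch size, the candidate depths, and the assumption that the sharp scene patch is available are all illustrative choices made here for the sketch.

```python
import numpy as np

def blur(img, psf):
    # Circular convolution via the FFT: a toy image-formation model.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def estimate_depth(observed, sharp, psf_bank, depths):
    # Score each calibrated PSF by how well it reproduces the observed blur,
    # and return the depth of the best-matching pattern.
    errors = [np.sum((blur(sharp, psf) - observed) ** 2) for psf in psf_bank]
    return depths[int(np.argmin(errors))]

# Hypothetical calibration: "coded" PSFs whose extent grows with depth.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))          # stand-in for the sharp scene patch
depths = [1.0, 2.0, 3.0]
psf_bank = []
for k in (1, 2, 3):
    p = np.zeros((64, 64))
    p[0:2 * k:2, 0:2 * k] = 1.0       # toy coded-mask pattern, unit energy below
    psf_bank.append(p / p.sum())

observed = blur(sharp, psf_bank[1])   # scene blurred at true depth 2.0
print(estimate_depth(observed, sharp, psf_bank, depths))  # → 2.0
```

In practice the sharp scene is unknown, so real coded-aperture pipelines instead score each candidate PSF by deconvolving the observed patch and testing which result looks most like a natural image; the least-squares comparison above only illustrates the matching-against-a-calibrated-bank idea.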
However, a Fresnel zone plate aperture produces diffraction patterns that change the shape of the focal blur pattern. Used as an aperture, the Fresnel zone plate produces multiple focal planes in the scene. The interference between these focal planes produces changes in the blur pattern that can be observed both between the focal planes and beyond the most distant focal plane. The Fresnel zone plate aperture and lens may therefore be designed so that the focal blur pattern continues to change at greater depths, improving the measurement performance of the coded aperture system.

This research provides an in-depth study of the Fresnel zone plate used as a coded aperture, and of the performance improvement obtained by augmenting a single-camera vision-aided inertial navigation system with a Fresnel zone plate coded aperture. Design and analysis of a generalized coded aperture is presented and demonstrated, and special considerations for the Fresnel zone plate are given. Techniques to determine a continuous depth measurement from a coded image are also presented and evaluated through measurement. Finally, the measurement results from different aperture configurations are statistically modeled and compared with a simulated vision-aided navigation environment to predict the change in performance of a vision-aided inertial navigation system when augmented with a coded aperture.
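As a concrete illustration of the aperture geometry, a binary Fresnel zone plate mask can be generated from the standard zone radii r_n = sqrt(n·λ·f). The pixel count, aperture radius, wavelength, and focal length below are illustrative assumptions, not the dissertation's actual design parameters.

```python
import numpy as np

def fresnel_zone_plate(n_pix, radius_m, wavelength_m, focal_m):
    # Zone n spans sqrt(n*lam*f) <= r < sqrt((n+1)*lam*f); even zones are
    # transparent (1.0) and odd zones opaque (0.0) in this binary mask.
    c = np.linspace(-radius_m, radius_m, n_pix)
    x, y = np.meshgrid(c, c)
    zone = np.floor((x ** 2 + y ** 2) / (wavelength_m * focal_m)).astype(int)
    return (zone % 2 == 0).astype(float)

# Illustrative parameters: 5 mm aperture radius, 550 nm light, 0.5 m primary focus.
mask = fresnel_zone_plate(256, 5e-3, 550e-9, 0.5)
```

A binary zone plate built this way focuses light not only at the primary focal length f but also at the odd-order foci f/3, f/5, and so on, which is the source of the multiple focal planes discussed above.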
Keywords/Search Tags: Navigation, Coded aperture, Fresnel zone plate, Environment, Blur pattern, Performance, Focal, Depth