There has been significant development of image sensors over the last decade to address issues such as sensitivity, resolution, capture rate, dynamic range, dark current, crosstalk, power consumption, manufacturability, and cost. The motivation to decrease pixel size has been either to increase spatial resolution in a given format or to produce smaller formats at a given resolution for potentially lower cost. As pixel size approaches the limits of conventional optics, however, gains in resolution are diminishing. Scaling pixels beyond these limits can nevertheless provide new imaging capabilities beyond merely increasing spatial resolution.

One consistent limitation of conventional image sensors has been that the sensing area is constrained to a regular array of photosites used to recover an intensity distribution in the focal plane of the imaging system. Although this is the most direct method of image capture, both practical and fundamental issues limit the scalability and performance of these systems. This research explores an alternative approach to imaging, in which the image sensor is partitioned into an array of apertures that form images through a distributed process.

A multi-aperture image sensor is designed with an array of apertures integrated onto a single chip. Each aperture contains its own local sub-array of pixels and image-forming optics. By focusing the integrated optics onto the image formed by an objective lens in a region above the multi-aperture imager, the apertures capture overlapping views of the scene. The correlation and redundancy between apertures, combined with computation, provide several new capabilities. The most notable feature of this design, which motivates the use of submicron pixels, is that a depth map of the scene may be extracted along with the image. The accuracy of the depth calculations depends on estimating the locations of features within each sub-array of pixels.
The positions of features, rather than the features themselves, may be estimated to a resolution higher than a diffraction- or aberration-limited lens can provide. Furthermore, very high resolution sensors become possible because the arrays of pixels may be disjoint, which allows flexibility in readout and in correcting for manufacturing variations. Color performance is improved because neighboring pixels within an aperture all contain the same filter. The design is also useful for close-proximity imaging, where the objective lens can be eliminated in order to produce a flat imaging system.

Three types of submicron CCDs are implemented in single-poly 0.11 µm CMOS technology to demonstrate the feasibility of multi-aperture imaging systems that produce data from distributed arrays of CCDs integrated across a monolithic substrate. Test structures comprising 16 × 16 pixel frame-transfer (FT) CCDs with 0.5–0.7 µm pixels are fabricated under various process conditions to implement devices that operate as surface-channel, buried-channel, and pinned-phase buried-channel CCDs. Ripple charge transfer and single-electrode charge confinement are implemented to minimize pixel pitch.
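The depth-extraction idea described above can be sketched numerically: a feature is localized to sub-pixel accuracy within each aperture's pixel sub-array (here with an intensity-weighted centroid, one common choice of estimator), and the disparity between two apertures is converted to depth by stereo-style triangulation. This is an illustrative sketch only; the geometry values (`baseline`, `focal_px`) and the synthetic Gaussian spot are assumptions, not parameters from this work.

```python
import numpy as np

def subpixel_centroid(patch):
    """Estimate a feature's position to sub-pixel accuracy using an
    intensity-weighted centroid over a small pixel patch."""
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total

def depth_from_disparity(x_a, x_b, baseline, focal_px):
    """Stereo-style triangulation: depth = f * B / d, where d is the
    disparity between the feature positions seen by two apertures."""
    d = x_a - x_b
    return focal_px * baseline / d

def gaussian_spot(cx, cy, size=9, sigma=1.2):
    """Synthetic feature: a Gaussian spot whose peak lies between
    pixel centers, so its true position is a sub-pixel quantity."""
    ys, xs = np.indices((size, size))
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# The same feature viewed by two apertures, shifted by a sub-pixel disparity.
xa, _ = subpixel_centroid(gaussian_spot(4.30, 4.0))
xb, _ = subpixel_centroid(gaussian_spot(3.05, 4.0))

# Hypothetical geometry, in consistent (pixel-scaled) units.
z = depth_from_disparity(xa, xb, baseline=100.0, focal_px=50.0)
```

The centroid recovers each spot's position to a small fraction of a pixel, so the disparity, and hence the depth estimate, is resolved more finely than the pixel pitch, which is the property that makes submicron pixels attractive for depth accuracy.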