
Capturing Contrasts Below Human Visual Thresholds with Everyday Digital Cameras, Optical Feedback, and Measurement Aggregation

Posted on: 2016-12-29
Degree: Ph.D.
Type: Dissertation
University: Northwestern University
Candidate: Olczak, Paul
Full Text: PDF
GTID: 1478390017484035
Subject: Computer Science
Abstract/Summary:
Can ordinary digital cameras capture contrasts below human visual thresholds? This dissertation seeks new ways to make accurate light measurements with ordinary digital cameras and displays. It presents 1) optical-feedback-inspired calibration algorithms for both cameras and displays, 2) a vignetting calibration method based on image pyramids, and 3) a method that uses my calibrations to detect and magnify small changes in images.

Through repeated measurement, the camera calibration algorithm seeks an accurate photometric calibration, the complete "numbers-to-light-amounts" table, for a digital camera using a digital display. As confirmed by both simulation and experiment, the method finds each light quantization level individually with an accuracy of approximately 1/10th of a quantization step and works reliably for low-cost, low-bit-depth digital cameras and displays. Users aim the defocused camera at the display in a dark room and let my camera calibration algorithm compute, display, and photograph an adaptive series of 2-color dither test patterns. Dithering enables the uncalibrated display to emit finely controlled light amounts in precise increments, and camera shutter-time adjustments let us identify factor-of-two changes in displayed light.

By assessing histograms from thousands of these automatically collected photos, the method estimates the quantization boundary between sequential pixel values q-1 and q in exposure (shutter time * normalized display luminance). The method reveals shortcomings in several widely applied assumptions at the heart of earlier calibration methods (e.g., step-by-step, nonuniform, nonlinear RAW response; non-radial vignetting; imperfections in the analog-to-digital converter), and I show how to measure and correct for them.

My camera vignetting calibration approach repeatedly photographs a uniform light source to find the average light attenuation at each camera sensor pixel location.
I show that I can reduce vignetting modeling error by over 97% by creating a `vignetting pyramid' model from neighborhood averages of this attenuation data instead of fitting the data to a radially applied polynomial function.

Finally, I present a technique I call `Change Microscopy' that can detect changes of contrast in a scene that are smaller than human visual thresholds. In the technique, I take many `before' and `after' photographs of a scene and track a pixel-value histogram at each pixel location. For each histogram in the `before' and `after' images, I estimate the input light using a camera correction approach that removes the measurable errors found by my camera calibration algorithm. I then show that, from an image difference of the `before' and `after' light estimates, I can accurately detect changes in the scene 1/7th the intensity of the quantization steps of the camera used to capture it.
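The histogram-based boundary estimation described in the abstract can be sketched as follows. This is a minimal illustration, not the dissertation's code: the function names, the assumed data layout (one stack of photographed pixel values per displayed exposure), and the simple fraction-crossing estimator are all my assumptions. The idea is that as exposure rises past the boundary between pixel values q-1 and q, the fraction of pixels reporting at least q sweeps from 0 to 1, and the exposure where that fraction crosses 1/2 locates the boundary.

```python
import numpy as np

def boundary_between(q, exposures, photo_stacks):
    """Estimate the exposure at which pixel value q-1 transitions to q.

    exposures    : sorted 1-D array of exposure values (shutter time *
                   normalized display luminance) used for test patterns.
    photo_stacks : photo_stacks[i] is an array of pixel values
                   photographed at exposures[i].
    Returns the exposure where the fraction of pixels reporting >= q
    crosses 1/2, found by linear interpolation between samples.
    (Illustrative sketch; the dissertation aggregates thousands of
    photos and models further error sources.)
    """
    frac = np.array([(np.asarray(p) >= q).mean() for p in photo_stacks])
    idx = np.searchsorted(frac, 0.5)  # frac rises monotonically with exposure
    if idx == 0 or idx == len(frac):
        raise ValueError("boundary not bracketed by the measured exposures")
    x0, x1 = exposures[idx - 1], exposures[idx]
    f0, f1 = frac[idx - 1], frac[idx]
    return x0 + (0.5 - f0) * (x1 - x0) / (f1 - f0)
```

Repeating this for every q yields the complete "numbers-to-light-amounts" table, one quantization level at a time.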
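A vignetting pyramid built from neighborhood averages, as the abstract describes, might look like the sketch below. The function names, the choice of 2x2 block averaging, and the simple division-based correction are my assumptions for illustration; the dissertation's exact averaging scheme may differ.

```python
import numpy as np

def vignetting_pyramid(attenuation, levels=4):
    """Build a pyramid of neighborhood-averaged attenuation maps.

    attenuation : 2-D array of per-pixel average light attenuation,
                  measured from many photos of a uniform light source.
    Each level halves resolution by 2x2 block averaging, trading
    spatial detail for lower per-pixel measurement noise.
    """
    pyramid = [np.asarray(attenuation, dtype=float)]
    for _ in range(levels):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid

def correct(image, attenuation):
    """Divide out the measured attenuation to undo vignetting."""
    return np.asarray(image, dtype=float) / attenuation
```

Because the pyramid stores measured attenuation directly rather than coefficients of a radially symmetric polynomial, it can represent the non-radial vignetting that the calibration experiments revealed.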
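The core of Change Microscopy, averaging many calibrated `before' and `after' light estimates and differencing them, can be sketched as below. This is a simplified illustration under assumed inputs: a per-value lookup table stands in for the full camera correction, and a plain mean stands in for the dissertation's per-pixel histogram tracking.

```python
import numpy as np

def change_map(before_stack, after_stack, inverse_response):
    """Detect sub-quantization-step scene changes from photo stacks.

    before_stack, after_stack : integer arrays of shape (n_photos, H, W)
                                holding raw pixel values.
    inverse_response          : lookup table mapping each pixel value to
                                its estimated light amount (a stand-in
                                for the full camera correction).
    Averaging n photos shrinks quantization and noise error roughly by
    sqrt(n), so the difference of the averaged light estimates can
    reveal changes smaller than one quantization step.
    """
    before_light = inverse_response[before_stack].mean(axis=0)
    after_light = inverse_response[after_stack].mean(axis=0)
    return after_light - before_light
```

In this simplified form, a change of 1/7th of a quantization step shows up because noise dithers pixels across the nearby quantization boundary, shifting the averaged estimate by a fractional amount.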
Keywords/Search Tags: Camera, Human visual thresholds, Light, Quantization, Changes