
Application Of Statistical Learning To Color Calibration And Color Constancy

Posted on: 2009-11-19    Degree: Doctor    Type: Dissertation
Country: China    Candidate: E R Ding    Full Text: PDF
GTID: 1118360242478266    Subject: Computer application technology
Abstract/Summary:
Color management is a technique for resolving color inconsistency among imaging devices, with the ultimate goal of WYSIWYG (What You See Is What You Get); it is widely used in printing, textile, multimedia, and other industries. Color calibration, the core of color management, compensates imaging devices for the color differences caused by their inner nonlinearity. It is usually combined with color transformation in color management systems and, because the calibration effect is directly linked to the precision of color reproduction, has been a research field in the spotlight both in China and abroad. In recent years, color constancy, which considers the influence of the viewing environment such as the illuminant, has become a key technique for future color management and practical applications. The purpose of color constancy is to retain an invariant description of color under different illuminations, which is vital to object recognition and content-based image retrieval in computer vision.

As is well known, color mapping is central to both color calibration and color constancy, by virtue of the color samples used to capture color characteristics. Nevertheless, statistical learning techniques, powerful tools for describing the characteristics of and relationships among samples, have not been applied to these two fields to a satisfying extent. This dissertation therefore focuses on the application of statistical learning to color calibration and color constancy. In a statistical sense, color calibration falls into four categories, namely tri-linear interpolation, regression in a neighborhood, sparse Bayesian learning, and computational intelligence, a division that is also consistent with the calibration mechanism. In each category, the technical bottlenecks and the corresponding solutions are discussed, with emphasis on precision and speed. For color constancy, the whole procedure is discussed in view of the worldwide research status quo and the ongoing project deployment, leading to an algorithm for estimating the illumination chromaticity and an algorithm for supervised color constancy.

In the category of tri-linear interpolation, the problem of locating the enclosing geometric body in the currently popular interpolation methods is tackled, and a novel high-precision interpolation algorithm is proposed. First, two acceleration algorithms for tetrahedral interpolation are introduced: a local search based on history, which uses the location found by the previous search to look up the current data point, and a fast location method using an auxiliary table, which proceeds in two steps of rough location and precise location. Both algorithms speed up the production of 3D LUT data (a minimal interpolation sketch follows this paragraph). Moreover, a re-acceleration strategy using prior knowledge of rendering intents for gamut mapping is presented and proves effective. Last, a linear interpolation via improved fuzzy entropy maximization is proposed: it defines a new calibration range and uses fuzzy entropy to determine the interpolation weights, so no geometric body needs to be located and the precision is higher than that of previous tri-linear interpolation algorithms.
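The abstract gives no code; the following is a minimal sketch of plain trilinear interpolation through a 3D LUT, the basic operation that the tetrahedral method and the two acceleration algorithms above speed up. The LUT layout, the 17-node grid, and the RGB-to-Lab direction are illustrative assumptions, not the dissertation's actual configuration.

```python
import numpy as np

def trilinear_interp(lut, rgb, nodes=17):
    """Interpolate one device color through a 3D LUT.

    lut   -- array of shape (nodes, nodes, nodes, 3), e.g. device RGB -> Lab
             (shape and node count are assumptions for illustration)
    rgb   -- input color with channels scaled to [0, 1]
    """
    # Scale into grid coordinates and locate the enclosing cube.
    pos = np.asarray(rgb, dtype=float) * (nodes - 1)
    i0 = np.minimum(pos.astype(int), nodes - 2)   # lower corner of the cube
    f = pos - i0                                  # fractional offsets inside it
    out = np.zeros(3)
    # Blend the 8 cube corners with trilinear weights.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * lut[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```

Tetrahedral interpolation replaces the eight-corner blend with the four vertices of the tetrahedron that contains the point, and the history-based local search amounts to caching the cube index i0 between consecutive, spatially coherent lookups.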
In the category of regression in a neighborhood, the concept of calibration in a neighborhood is introduced and two algorithms with different mechanisms are presented, aimed at the defects of current regression techniques. The neighborhood carries the idea of the subspace further and overcomes its defects. The first algorithm is based on structural risk minimization (SRM) and total least squares (TLS): SRM approximates the real risk and reduces the model complexity, while TLS takes into account the noise in both the input and output data (a TLS sketch follows the abstract). The second algorithm is based on kernel partial least squares and boosting: kernels enrich the calibration information, partial least squares extracts the principal components, and boosting further improves the precision. Finally, fast location and the choice of the neighborhood range are discussed.

In the category of sparse Bayesian learning, a sparse kernel tool based on Bayes' rule, the relevance vector machine (RVM), is adopted, and several efficiency measures are proposed. The resulting algorithms first integrate multiple kernels to provide a set of complete or over-complete bases, then apply locality preserving projections (LPP) to reduce the column dimension of the multiple-kernel input matrix, and finally adopt relevance vector pre-extraction or a distributed architecture to further shorten the training time (a multiple-kernel sketch follows the abstract). The complete bases are built on the scaling kernel and the wavelet kernel, while the over-complete bases rely on existing kernels; pre-extraction is achieved by stratified sampling or clustering. The proposed algorithms turn out to be superior to the SVM and the RVM in precision and faster than the RVM in training time.

In the category of computational intelligence, measures are taken to tackle the problem of configuring the inner structure when fuzzy logic and neural networks are applied to practical calibration, and the applicability of genetic algorithms is also examined. First, an algorithm based on KPCA and ANFIS is proposed: ANFIS derives the if-then rules automatically, while KPCA acts as a preprocessing step, which makes the algorithm advantageous. A calibration model based on a neural network ensemble is then presented: the model avoids the problem of determining the inner structure of a single neural network and combines several networks to improve generalization, yielding higher precision than a single network or a bagging ensemble (an ensemble sketch follows the abstract). Last, a simple boosting model based on a genetic algorithm is proposed, in which the selection operator chooses the difficult samples to form the next training set; an example based on ANFIS verifies the validity of the proposed model.

For the realization of color constancy, and on the basis of current research and practical use, two algorithms are proposed: one estimates the illuminant chromaticity using an adaptive reduced relevance vector machine, and the other performs supervised color constancy based on the thin plate spline (TPS) and LAD regression. The former adaptively integrates a mixture of kernels and applies the improved LPP to reduce the training time; to estimate the illuminant chromaticity, fuzzy clusters of the chromaticity histogram and the corresponding illuminations are used to train the algorithm. The latter places a supervised color chart in the scene, maps the illumination data with the TPS, and applies LAD regression to the reduced mapped data to capture the transformation among illuminations (a TPS/LAD sketch follows the abstract). Experiments on real images verify the superiority of both algorithms.
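For the SRM/TLS algorithm in the regression-in-a-neighborhood category, the TLS ingredient can be stated compactly. Below is a hedged sketch of classical single-output total least squares via the SVD of the augmented data matrix; it shows only the generic TLS step, not the dissertation's combined SRM/TLS procedure.

```python
import numpy as np

def tls_fit(X, y):
    """Total least squares for y ~ X @ b when X and y are both noisy.

    The TLS coefficients come from the right singular vector of the
    augmented matrix [X | y] belonging to its smallest singular value.
    """
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                   # direction of least variance in [X | y]
    return -v[:-1] / v[-1]       # scale so the y-component equals -1
```

Ordinary least squares minimizes residuals in y alone; TLS minimizes the orthogonal distance to the fitted hyperplane, which matches the abstract's point that the input and output measurements are both noisy.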
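The multiple-kernel construction in the sparse Bayesian category can be approximated with off-the-shelf parts. The sketch below stacks Gaussian kernel columns at several widths into an over-complete basis and fits it with scikit-learn's ARDRegression, which applies the same sparsity-inducing Bayesian prior as the RVM; the data, kernel type, and widths are assumptions for illustration, and the dissertation's scaling/wavelet kernels, LPP reduction, and pre-extraction steps are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

def multi_kernel_design(X, centers, widths=(0.1, 0.5, 2.0)):
    """Stack Gaussian kernel columns at several widths into one
    over-complete basis matrix, one block per kernel width."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.exp(-d2 / (2 * s ** 2)) for s in widths])

# Toy single-channel calibration data (assumed, not from the dissertation).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))                  # device RGB samples
y = X[:, 0] ** 2.2 + 0.1 * X[:, 1]               # stand-in nonlinear response
Phi = multi_kernel_design(X, centers=X[::10])
model = ARDRegression().fit(Phi, y)
kept = np.sum(np.abs(model.coef_) > 1e-3)        # surviving "relevance" columns
print(f"{kept} of {Phi.shape[1]} basis columns kept")
```

Sparsity shows up as most basis coefficients being driven toward zero, the analogue of retaining only a few relevance vectors.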
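For the neural-network-ensemble calibration model in the computational intelligence category, a minimal sketch: several small MLPs with different hidden sizes and seeds are trained on the same data and their predictions averaged, so no single inner structure has to be committed to. The data and network sizes below are assumptions; the abstract does not specify the dissertation's ensemble construction.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy calibration data (assumed): three device channels, one target channel.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2           # stand-in device response

# Train member networks with different structures, then average predictions.
nets = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000,
                     random_state=s).fit(X, y)
        for h, s in [(8, 0), (16, 1), (32, 2)]]
ensemble_pred = np.mean([n.predict(X) for n in nets], axis=0)
```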
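For the supervised color constancy algorithm, both named ingredients have standard library counterparts: SciPy's RBFInterpolator offers a thin plate spline kernel, and quantile regression at the median is exactly least absolute deviations. The sketch below chains them on synthetic chart chromaticities; the chart size, the chromaticity values, and the way the mapped data are reduced are all assumptions, since the abstract does not give those details.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.linear_model import QuantileRegressor

# Assumed data: chart chromaticities (x, y) observed under two illuminants.
rng = np.random.default_rng(2)
src = rng.uniform(0.2, 0.6, (24, 2))             # chart under source illuminant
dst = src * [0.9, 1.1] + 0.02 + rng.normal(0, 0.003, src.shape)  # under target

# Step 1: a thin plate spline maps source chromaticities toward the target.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')
mapped = tps(src)

# Step 2: LAD regression (median quantile, L1 loss) on the mapped data,
# one model per output channel; L1 is robust to mis-measured chart patches.
lad = [QuantileRegressor(quantile=0.5, alpha=0.0).fit(mapped, dst[:, k])
       for k in range(2)]

def correct(c):
    """Map a chromaticity from the source to the target illuminant."""
    m = tps(np.atleast_2d(c))
    return np.array([model.predict(m)[0] for model in lad])
```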
Keywords/Search Tags: Color Management, Color Calibration, Color Constancy, Statistical Learning, Trilinear Interpolation, Multiple Regression, Sparse Bayesian Learning, Computational Intelligence