
Research On Methods Of Image Quality Assessment Based On Natural Scene Statistics

Posted on: 2015-03-23    Degree: Doctor    Type: Dissertation
Country: China    Candidate: D W Yang    Full Text: PDF
GTID: 1268330431470438    Subject: Earth Exploration and Information Technology

Abstract/Summary:
The early years of the 21st century have witnessed tremendous growth in the use of digital images as a means of representing and communicating information. Nevertheless, images are subject to distortions during acquisition, compression, transmission, processing, and reproduction. To maintain, control, and enhance image quality, it is important for image acquisition, management, communication, and processing systems to be able to identify and quantify image quality degradations. The development of effective image quality assessment systems is therefore a necessary goal.

Image quality can be quantified through subjective or objective evaluation. Since human beings are the ultimate receivers in most image-processing applications, the most reliable way of assessing the quality of an image is subjective evaluation. Indeed, the mean opinion score (MOS), a subjective quality measure requiring the services of a number of human observers, has long been regarded as the best method of image quality measurement. However, the MOS method is time-consuming and usually too inconvenient to be useful in real-world applications. Moreover, it is easily affected by human cognitive and psychological factors and thus has poor repeatability and stability.

The goal of objective image quality assessment research is to design computational models that can predict perceived image quality accurately and automatically. The successful development of such objective measures has great potential in a wide range of application environments. First, they can be used to monitor image quality in quality control systems. Second, they can be employed to benchmark image-processing systems and algorithms. Third, they can be embedded into image-processing and transmission systems to optimize the systems and their parameter settings.

According to the availability of a reference image, there is general agreement that objective quality metrics can be divided into three categories: full-reference (FR), no-reference (NR), and reduced-reference (RR) methods.

(1) Full-Reference Image Quality Assessment. Researchers in the field of image quality assessment (IQA) have long attempted to measure quality using the so-called full-reference (FR) framework, a consequence of our limited understanding of human perception of quality. It rests on the following hypothesis: the quality of an image can be evaluated by comparing it against a reference signal of perfect quality, and a measure of the similarity between the reference image and the image being evaluated can be calibrated to serve as a measure of perceptual quality. A full-reference algorithm therefore computes the similarity between the image or video whose quality is to be evaluated (the test signal) and the associated reference signal. At present, FR IQA follows three basic approaches.

(a) Mean squared error. One obvious way of measuring this similarity is to compute an error signal by subtracting the test signal from the reference and then computing the average energy of the error signal. The simplest and most widely used FR QA measures are the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR), which are based on pixel-wise distances.
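As a reference point, the two definitions above translate directly into the following minimal sketch (illustrative code, not taken from the dissertation; an 8-bit peak value of 255 is assumed):

```python
# Minimal MSE / PSNR sketch for 8-bit grayscale images (peak value 255).
import numpy as np

def mse(reference, test):
    # Average energy of the pixel-wise error signal.
    err = reference.astype(float) - test.astype(float)
    return float(np.mean(err ** 2))

def psnr(reference, test, peak=255.0):
    # PSNR in dB; identical images give infinite PSNR.
    m = mse(reference, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```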
(b) Structural similarity approaches. These methods hold that the purpose of the HVS is to extract cognitive information from images, and that this information comes almost exclusively from the structure of objects in the scene. Thus, one should quantify structural distortion to evaluate perceptual image quality. The most typical representatives of this kind of approach are the Structural SIMilarity (SSIM) and Feature SIMilarity (FSIM) indices.

(c) Information fidelity. These methods approach quality assessment from a novel, information-theoretic perspective based on natural scene statistics (NSS), viewing the quality assessment problem as an information fidelity problem. FR methods usually provide the most precise evaluation results in comparison with NR and RR IQA.

(2) Reduced-Reference Image Quality Assessment. To provide a compromise between FR and NR, RR methods, which have become popular in recent years, assess quality using only partial information about the corresponding reference. RR image quality assessment is a relatively new research topic compared with the FR and NR paradigms. RR quality measures were first conceptualized only in the late 1990s, mainly in response to very specific and pragmatic needs arising in the multimedia communication industry. In such networks, the original image data are generally not accessible at the receiver side, so an FR quality assessment method is not applicable; on the other hand, NR quality assessment remains a daunting task. RR quality measures deliver a useful compromise between the two. At the sender side, a feature extraction process is applied to the original image, and the extracted features are transmitted to the receiver as side information through an ancillary channel. In the final quality measurement stage, the features extracted from both the reference and distorted images are used to yield a scalar quality score describing the quality of the distorted image.

Three different but related types of approaches have been employed in existing RR IQA methods. The first is based on modeling image distortions and is mostly developed for specific application environments; the limitation of such metrics lies in their generalization capability, and it is generally inappropriate to apply them beyond the scenarios for which they were designed. The second is based on modeling the human visual system, where perceptual features motivated by computational models of low-level vision are extracted to provide a reduced description of the image; however, HVS-based metrics are burdened by the complexity of HVS models, which may make practical implementations difficult. The third is based on modeling natural image statistics. The basic assumption behind these approaches is that most real-world image distortions disturb image statistics and make the distorted image "unnatural"; the unnaturalness, measured with models of natural image statistics, can then be used to quantify image quality degradation.
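A common concrete instance of this third approach is sketched below: a generalized Gaussian model is fitted to wavelet subband coefficients, and the fitted parameters of the reference and distorted images are compared. PyWavelets and SciPy, the db2 wavelet, the zero-mean constraint, and the simple feature distance are all illustrative choices, not the specific features used in this dissertation.

```python
# Sketch of an NSS-based RR comparison: fit a generalized Gaussian (gennorm)
# to each wavelet detail subband and compare fitted parameters.
# All tool and parameter choices here are illustrative assumptions.
import numpy as np
import pywt
from scipy.stats import gennorm

def subband_features(img, wavelet="db2", levels=3):
    """Fit (shape, scale) of a zero-mean generalized Gaussian to each detail subband."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                # skip the approximation subband
        for band in detail:                  # horizontal, vertical, diagonal
            beta, loc, scale = gennorm.fit(band.ravel(), floc=0)
            feats.append((beta, scale))
    return np.array(feats)

def rr_distance(ref_feats, dist_feats):
    """Simple scalar distance between reference and distorted feature sets."""
    return float(np.mean(np.abs(ref_feats - dist_feats)))
```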
(3) No-Reference Image Quality Assessment. No-reference (NR) image quality assessment must evaluate the quality of any given real-world image without referring to an "original" high-quality image. In many practical applications, an image quality assessment system does not have access to the reference images, so it is desirable to develop measurement approaches that can evaluate image quality blindly.

At present, NR IQA algorithms generally follow one of two trends. 1) Distortion-specific and hence, to some degree, application-specific approaches. They can perform blind IQA only if the distortion afflicting the image is known beforehand, e.g., blur, noise, blockiness, or ringing. Such methods are truly reference-free, since nothing needs to be assumed about the "original" image; however, a method designed to handle one specific artifact type is unlikely to be able to handle other kinds of distortion. 2) General-purpose NR image quality assessment. General-purpose NR IQA does not assume a specific distortion type, and these methods are intended to be flexible enough to be used in a variety of different applications. The basic assumption behind them is that there are models of high-quality "reference signals" in our brains, together with a learned ability to use these models to assess picture quality. Our neural plasticity extends not only over the eons of evolution (during which visual systems have been exposed to a large variety of natural scenes), but also over shorter spans within our lifetimes; short-term plasticity forms the basis of our abilities of visual recognition and visual memory. General-purpose NR IQA is only in its beginning stages, and only a small amount of work has been done on the extremely difficult problem of designing NR QA algorithms that are not tied to a single type or source of distortion.

This dissertation focuses on NSS-based image quality assessment. Natural scene statistics (NSS) models are attractive in a number of ways: they reliably capture low-level statistical properties of images (and hence are very general and flexible models); they can be used to measure the destruction of "naturalness" introduced by distortions (enabling effective distortion models); and they accurately describe the statistics to which the visual apparatus has adapted and evolved over the millennia (and so are regarded as direct duals of low-level perceptual models). The visual apparatus is highly adapted to the natural environment and has evolved to extract visual information from it most efficiently. In essence, NSS-based IQA algorithms seek to capture the statistical regularities of natural images and to quantify how these regularities are modified or lost when distortions occur.

Research on NSS-based image quality assessment has three main aspects.

(1) NSS models for FR image quality assessment. The quality assessment problem is viewed as an information fidelity problem in which a source of natural images tries to communicate with a receiver (the human brain) through a channel that imposes limitations (by introducing distortions) on how much information can flow through it. The reference image is described by the NSS model, and the distortion model describes how the statistics of an image are disturbed by a generic distortion operator, namely a signal attenuation and additive noise model. The mutual information between the input and the output of the channel is quantified by means of information theory, and it is argued that this mutual information, which quantifies the information about the reference image (the source) that could ideally be extracted by the receiver (the brain) from the test image (the output of the channel), is an aspect of statistical information fidelity that should relate well to visual quality.
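Schematically, the attenuation-plus-noise channel and the fidelity quantity described above can be written as follows (the symbols are illustrative and not the dissertation's exact notation):

```latex
% Generic attenuation-plus-additive-noise channel acting on NSS-modeled
% reference coefficients C; symbols are illustrative.
D = g\,C + \nu, \qquad \nu \sim \mathcal{N}\!\left(0, \sigma_{\nu}^{2}\right),
\qquad \text{fidelity measure} = I(C;\,D)
```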
(2) NSS models for RR image quality assessment. RR features are crucial for RR IQA: they should summarize image information content efficiently, be sensitive to image distortions, and have strong perceptual relevance. NSS modeling provides a powerful means of approaching these goals. An NSS model supplies prior information about an image before it is distorted and can effectively summarize image content. It contributes in two ways: first, it allows RR features that are largely independent of image content to be extracted from an image; second, it describes the naturalness of an image and makes it possible to quantify the distortion. NSS are therefore widely employed as the basic ingredient of existing RR IQA methods.

(3) NSS models for NR image quality assessment. Motivated by recent developments in NSS-based image modeling and NSS-based RR algorithm design, we have developed new NSS model-based approaches to the NR IQA problem. NSS models seek to capture the natural statistical behavior of images; such prior models of image statistics enable the use of a rich body of Bayesian statistical methods and are rooted in the view of biological perceptual systems widely accepted in computational neuroscience and psychophysics.

The major work of this dissertation can be summarized as follows.

1. An FR image quality assessment algorithm is proposed based on properties of the HVS such as its multi-channel structure, masking effects, and the band-pass nature of contrast sensitivity. In this algorithm, the contourlet transform, which decomposes the original image into subband images, is employed to simulate the multi-channel behavior of the HVS; a visual masking model is employed to evaluate the visual error between the reference and distorted images in each decomposed subband; and finally, the quality score is computed from the visual errors weighted by factors obtained by comparing the contrast sensitivity function (CSF) across subbands. Experimental results show that the proposed algorithm is stable and correlates well with the judgments of human observers.

2. An RR image quality assessment method based on Roberts cross derivative statistics of natural images is proposed. The Roberts cross derivative is used to extract the geometric features of an image, which contribute to human visual prediction. We observe that the marginal distributions of Roberts cross derivatives of natural images are well fitted by a 2-parameter generalized Laplace distribution model and are changed in different ways by different degrees and kinds of distortion; the RAD measure of the statistical error between probability distributions is then applied to quantify the degree of distortion of the distorted image.
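The front end of this method, Roberts cross derivatives followed by a 2-parameter fit, might look roughly like the sketch below. The use of scipy.ndimage is an assumption, and the plain Laplace fit stands in for the generalized Laplace model; the RAD comparison step is not reproduced here.

```python
# Sketch: Roberts cross derivatives plus a 2-parameter Laplace fit, standing in
# for the generalized Laplace model described in the text; the dissertation's
# exact model and its RAD comparison are not reproduced.
import numpy as np
from scipy.ndimage import convolve
from scipy.stats import laplace

ROBERTS_1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # diagonal difference kernel
ROBERTS_2 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # anti-diagonal difference kernel

def roberts_fit(img):
    """Pool both Roberts derivative fields and fit a 2-parameter Laplace model."""
    img = img.astype(float)
    d1 = convolve(img, ROBERTS_1)
    d2 = convolve(img, ROBERTS_2)
    derivs = np.concatenate([d1.ravel(), d2.ravel()])
    loc, scale = laplace.fit(derivs)              # location and scale parameters
    return loc, scale
```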
3. A reduced-reference image quality assessment method using a moment method in the wavelet-transform or reorganized-DCT domain is proposed. We first apply a discrete wavelet transform (DWT) or a reorganized discrete cosine transform (RDCT) to decompose the reference and distorted images into subbands; the first and second moments of the individual subbands are then evaluated numerically. The RR quality metric we introduce is defined as the average of the differences of these moments over the subbands. Experimental results on the LIVE and TID image databases indicate that the proposed metric has good predictive performance.

4. A reduced-reference image quality assessment method based on the divisive normalization transform (DNT) is proposed. A DNT is built upon a linear image decomposition followed by a divisive normalization stage, with the normalization factor provided by a Gaussian scale mixture (GSM) model. Experiments show that the DNT subbands of natural images are well fitted by a Gaussian distribution, and statistics related to the parameters of this Gaussian model are employed for reduced-reference image quality assessment. The introduced algorithm requires only a low data rate for the RR features.

5. An NR IQA algorithm based on the gray-level co-occurrence matrix (GLCM) is proposed. The GLCMs are calculated locally from the image coefficients. Four statistical parameters generated by the GLCM, namely angular second moment (ASM), contrast (CON), correlation (COR), and homogeneity (HOM), form four new types of NR features that represent the naturalness of an image. Experimental results on the LIVE image database indicate that the proposed algorithm outperforms PSNR.

6. The NSS-based image quality assessment methods are applied to evaluate the quality of remote sensing images. Simulation experiments are set up, and the validity of our methods is confirmed.

The main contributions of this dissertation are as follows.

1. The contourlet transform is multi-scale and directionally anisotropic, which accords with the multi-channel behavior of the HVS, so we employ it to extract visual information from an image. Moreover, the masking effect and the band-pass property of contrast sensitivity are exploited to compute the visual errors between the reference and distorted images, leading to a new FR IQA method based on properties of the HVS.

2. We observe that the marginal distributions of Roberts cross derivatives of natural images conform to a generalized Laplace distribution model. The model parameters are used to summarize the naturalness of the reference image and are adopted as standardized measures of the degree of image distortion.

3. The basic assumption behind the standard method for RR IQA is that the marginal distribution of the wavelet coefficients of a natural image follows a GGD model, and the model parameters are employed to summarize the naturalness of the reference image. However, accurately computing the GGD parameters used as RR features of the reference image requires difficult calculations. The metric introduced here is easy to implement and has relatively low complexity.

4. The GLCM is employed to further exploit the statistical relationships between the distributions of adjacent coefficients. Four statistical parameters generated by the GLCM, namely angular second moment (ASM), contrast (CON), correlation (COR), and homogeneity (HOM), form four new types of NR features, which give the IQA algorithm better correlation with human perception.
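To make contribution 4 concrete, the sketch below computes the four GLCM statistics named above with scikit-image; the distance and angle settings and the use of graycomatrix/graycoprops are illustrative assumptions rather than the dissertation's exact configuration.

```python
# Sketch of the four GLCM statistics named above (ASM, contrast, correlation,
# homogeneity) computed with scikit-image; distances and angles are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8, distances=(1,), angles=(0, np.pi / 2)):
    """Average each GLCM property over the chosen offsets for a uint8 image."""
    glcm = graycomatrix(img_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("ASM", "contrast", "correlation", "homogeneity")}
```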
Keywords/Search Tags: Image quality assessment, Natural Scene Statistics, Multiscale Geometric Analysis, Wavelet transform, Moment method