
Variational Method In Image Restoration

Posted on: 2008-05-22    Degree: Master    Type: Thesis
Country: China    Candidate: Y Cao    Full Text: PDF
GTID: 2178360212996214    Subject: Operational Research and Cybernetics
Abstract/Summary:
It is surprising when we realize just how much we are surrounded by images. Images allow us not only to perform complex tasks on a daily basis, but also to communicate, transmit information, and represent and understand the world around us. However, because of the acquisition devices and the transmission process, the images we obtain are usually degraded and contaminated by noise. In order to obtain clearer images, or to make them suitable for subsequent processing, we have to preprocess them. Image processing is precisely the discipline concerned with improving such unsatisfactory images: it enhances the image information so that images can be analyzed more easily, addresses machine perception problems, and involves related techniques such as the design and construction of the hardware needed to accomplish these tasks. Abstractly, image processing can be considered as an input-output system, where the input data are the grey levels of the pixels and the output is the processed image itself; see R. C. Gonzalez, R. E. Woods [56].

Image processing has a long history, dating back to the 1960s. In its first twenty years, however, image processing, traditionally a computing and engineering field, did not attract the attention of mathematicians. Yet from the point of view of vision and cognitive science, image processing is a basic tool used to reconstruct the relative order, geometry, topology, patterns, and dynamics of the three-dimensional world from two-dimensional images. Therefore, it cannot be merely a historical coincidence that mathematics meets image processing in this era of digital technology. The role of mathematics is determined also by the broad range of applications of image processing in contemporary science and technology. These applications include astronomy and aerospace exploration, medical imaging, molecular imaging, computer graphics, human and machine vision, telecommunication, autopiloting, surveillance video, and biometric security identification (such as fingerprint and face identification). All these highly diversified disciplines have made it necessary to develop common mathematical foundations and frameworks for image analysis and processing. Mathematics at all levels must be introduced to address the crucial criteria demanded by this new era: genericity, well-posedness, accuracy, and computational efficiency; see [48].

From the point of view of the level of abstraction and the methods of study, there are three levels of tasks: image processing, image analysis, and image understanding. Here, image processing is a preprocessing stage, sometimes called low-level image processing. An important problem in image processing is the reconstruction of an original image f(x,y), describing a real scene, from an observed image g(x,y). The transformation (or degradation) connecting f(x,y) to g(x,y) is in general the result of two phenomena. The first is deterministic and is related to the mode of image acquisition (for example, the computation of integral projections in tomography) or to possible defects of the imaging system (blur created by a wrong lens adjustment, by motion, ...). The second is random: the noise inherent in any signal transmission. The simplest model describing both blur and noise is the linear degradation model

g(x,y) = R f(x,y) + η(x,y),  (x,y) ∈ Ω,  (1)

where R is a linear degradation operator and η(x,y) is an additive noise term which we suppose is white and Gaussian. The reconstruction of f(x,y) can thus be identified with an inverse problem: find f(x,y) from (1).
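To make the degradation model (1) concrete, the following sketch (not part of the thesis) simulates it numerically, assuming purely for illustration that R is a Gaussian blur and η is white Gaussian noise; the function names and parameter values are choices made for this example only.

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(f, blur_sigma=2.0, noise_sigma=0.05, seed=0):
    # Linear degradation model g = R f + eta (equation (1)):
    # R is modeled here as a Gaussian blur, eta as additive white Gaussian noise.
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(f, sigma=blur_sigma)        # deterministic part R f
    noise = noise_sigma * rng.standard_normal(f.shape)    # random part eta
    return blurred + noise

# Example: degrade a synthetic piecewise-constant image with one bright square.
f_true = np.zeros((128, 128))
f_true[32:96, 32:96] = 1.0
g_observed = degrade(f_true)

Recovering f_true from g_observed alone is exactly the ill-posed inverse problem discussed below.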
In general, this problem is ill-posed in the sense of Hadamard; see A. N. Tikhonov, V. Y. Arsenin [4]. The information provided by g(x,y) and the model (1) is not sufficient to ensure the existence, uniqueness, and stability of a solution f(x,y). A number of approaches can be taken to estimate f(x,y). These include spline smoothing, filtering using Fourier and wavelet transforms, stochastic approaches based on Bayesian estimation, and so on. If we suppose that the degradation operator R is a convolution, then we can take the Fourier transform of (1) to get

ĝ(u,v) = R̂(u,v) f̂(u,v) + η̂(u,v),  (2)

where the hat denotes the Fourier transform. Thus we have

f̂(u,v) = ( ĝ(u,v) − η̂(u,v) ) / R̂(u,v).  (3)

To recover f(x,y), we apply the inverse Fourier transform to (3). This procedure is generally very ill-posed, since R̂ typically takes very small values at high frequencies and the noise is amplified there. The maximum a posteriori criterion in the stochastic approach leads to a minimization problem in which the energy functional J depends on the image f(x,y) and its gradient.

From the point of view of minimizing an energy functional, people began to study the variational method. By now, variational methods have been extremely successful in a wide variety of restoration problems, and they remain one of the most active areas of research in mathematical image processing and computer vision. Their scope encompasses not only the fundamental problem of image denoising, but also other restoration tasks such as deblurring, blind deconvolution, and inpainting. Variational models exhibit the solutions of these problems as minimizers of appropriately chosen functionals. At first, people considered the constrained minimization problem

inf J(f)  subject to a constraint fixing the noise level, e.g. ∫_Ω (Rf − g)² dxdy = σ²|Ω|.  (4)

Most conventional variational models involve a least squares L² fit. The first attempt along these lines was made by D. L. Phillips in the one-dimensional case and later refined by S. Twomey. The resulting linear system is now easy to solve using modern numerical linear algebra. However, the results are again disappointing: they are likely to be contaminated by the Gibbs phenomenon (ringing) and by smearing near edges. The total variation models were first given by Rudin, S. Osher, E. Fatemi in [33]; in their model the regularizing energy is the total variation of the image, J(f) = ∫_Ω |∇f| dxdy. Although the total variation models form a class of variational models, from the point of view of the Euler-Lagrange equations they are also PDE-based methods; indeed, the associated flows are closely related to mean-curvature equations. The main advantage is that the solutions preserve edges very well, reduce oscillations, and regularize the geometry of the level sets without penalizing discontinuities, but the model possesses some properties which may be undesirable under some circumstances. First, there are computational difficulties. Other defects are staircasing, loss of contrast, loss of fine geometry, and so on. To compensate for these defects, people have modified the total variation models, for example by replacing the squared L² norm in the fidelity term by the L¹ norm, or by introducing higher-order derivatives into the energy functional.

Usually, people consider the unconstrained problem

inf_f J(f) = ∫_Ω (Rf − g)² dxdy + α ∫_Ω φ(|∇f|) dxdy.  (7)

The first term in J(f) measures the fidelity to the data, the second is a smoothing term, and the parameter α is a positive weighting constant. How to choose φ(|∇f|) is an important problem. By decomposing the diffusion induced by the Euler-Lagrange equation along the tangential (T) and normal (N) directions of the level lines, one finds that φ should satisfy conditions ensuring strong smoothing along edges and weak smoothing across them. It is also important to choose the parameter α. Generally, an intermediate value of α is used, resulting in a compromise between the two extreme estimates.
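As an illustrative sketch (not taken from the thesis) of the unconstrained energy in (7) in discrete form, the following Python fragment evaluates it in the denoising case R = I, assuming for the example the choice φ(s) = √(s² + ε²), a smooth approximation of the total variation; all names and parameter values here are illustrative.

import numpy as np

def energy(f, g, alpha=0.1, eps=1e-3):
    # Discrete J(f) = sum (f - g)^2 + alpha * sum phi(|grad f|),
    # with phi(s) = sqrt(s^2 + eps^2), a smoothed total variation.
    fx = np.zeros_like(f)
    fy = np.zeros_like(f)
    fx[:, :-1] = f[:, 1:] - f[:, :-1]       # forward difference in x
    fy[:-1, :] = f[1:, :] - f[:-1, :]       # forward difference in y
    grad_norm = np.sqrt(fx**2 + fy**2)
    fidelity = np.sum((f - g)**2)           # data term
    smoothing = np.sum(np.sqrt(grad_norm**2 + eps**2))   # regularization term
    return fidelity + alpha * smoothing

A small α keeps the minimizer close to the noisy data g, while a large α produces an oversmoothed image; the intermediate values mentioned above trade off these two extremes.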
In truth, it is unfair simply to regard the resulting estimate merely as a compromise: at one extreme (very large α) the minimizer of J(f) is clearly inefficient in that it makes no use of the data at all, while at the other extreme (α near zero) the estimate is often numerically unstable. Various ways of choosing α have been discussed in the case of quadratic regularization criteria, such as minimization of the total predicted mean squared error (TPMSE) and SNR analysis.

Using the direct method of the calculus of variations, we can prove that (7) has a unique solution in V = {f ∈ L²(Ω) : ∇f ∈ L¹(Ω)²}. We have to assume some hypotheses on g and φ:
i) g ∈ L^∞(Ω), 0 ≤ g(x,y) ≤ 1 a.e. (x,y) ∈ Ω;
ii) φ ∈ C² is a convex, nondecreasing function from ℝ⁺ to ℝ⁺, and there exist constants α₁, α₂ > 0 and b₁, b₂ ≥ 0 such that α₁|s| − b₁ ≤ φ(s) ≤ α₂|s| + b₂ for all s;
iii) 0 < φ''(s) < 1 for all s.
To consider the BV solution of (7), we remark that sequences bounded in V are also bounded in BV(Ω); therefore, they are relatively compact for the BV weak-star topology. In this case, it is classical to compute the relaxed energy.

There have been numerous numerical algorithms proposed for minimizing the objective in (7). Most of them fall into three main approaches: direct optimization (see G. Gilboa, J. Darbon, S. Osher, T. F. Chan [24]); solving the associated Euler-Lagrange equation; and using the dual variable explicitly in the solution process to overcome some computational difficulties encountered in the primal problem. We know that solving (7) is equivalent to solving the Euler-Lagrange equation. More precisely, one considers the image as a function of space and time and seeks the steady state of the equation

∂f/∂t = α div( ∇f / |∇f|_ε ) − R*(Rf − g),  f(x,y,0) = g(x,y).

Here, |∇f|_ε = (|∇f|² + ε²)^{1/2} is a regularized version of |∇f|, introduced to reduce degeneracies in flat regions where |∇f| ≈ 0. In order to avoid the restrictive CFL condition of explicit time marching, C. R. Vogel, M. E. Oman [7] introduced the fixed point iterative method. If f is the optimal solution, then it satisfies

R*(Rf − g) − α div( ∇f / |∇f|_ε ) = 0,

which can be written as L(f)f = R*g, where L(u) denotes the linear operator

L(u)v = R*(Rv) − α div( ∇v / |∇u|_ε ).

The fixed point iteration can then be expressed as

L(fⁿ) fⁿ⁺¹ = R*g,  n = 0, 1, 2, ...,

via the solution of a linear system at each step.

Over the years, the variational models have been extended to many other image restoration tasks, such as image segmentation, image inpainting, and image classification, and have been modified in a variety of ways to improve their performance. We refer the readers to the article by T. F. Chan, J. H. Shen, L. Vese [48] and the references therein.
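To illustrate the time-marching scheme described above, here is a minimal Python sketch for the pure denoising case R = I (identity); it is not the thesis' implementation, and the discretization, parameter values, and function names are choices made only for this example.

import numpy as np

def grad(f):
    # Forward differences with replicated (Neumann) boundary conditions.
    fx = np.zeros_like(f)
    fy = np.zeros_like(f)
    fx[:, :-1] = f[:, 1:] - f[:, :-1]
    fy[:-1, :] = f[1:, :] - f[:-1, :]
    return fx, fy

def div(px, py):
    # Backward differences: the discrete adjoint of -grad above.
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:] = px[:, 1:] - px[:, :-1]
    dy[0, :] = py[0, :]
    dy[1:, :] = py[1:, :] - py[:-1, :]
    return dx + dy

def tv_denoise(g, alpha=0.1, eps=0.01, dt=0.01, n_iter=500):
    # Explicit time marching to the steady state of
    #   f_t = alpha * div( grad f / |grad f|_eps ) - (f - g),  f(.,0) = g,
    # i.e. total variation denoising with R = I.
    f = g.copy()
    for _ in range(n_iter):
        fx, fy = grad(f)
        norm = np.sqrt(fx**2 + fy**2 + eps**2)   # regularized |grad f|
        f = f + dt * (alpha * div(fx / norm, fy / norm) - (f - g))
    return f

The small time step dt reflects the CFL-type stability restriction mentioned above, which is exactly what the fixed point (lagged diffusivity) iteration of Vogel and Oman is designed to avoid: each of its outer steps solves a linear system instead of taking many small explicit steps.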
Keywords/Search Tags: Variational