
Overlapping group sparsity: Algorithms and applications

Posted on: 2015-02-06
Degree: Ph.D.
Type: Thesis
University: Polytechnic Institute of New York University
Candidate: Chen, Po-Yu
Full Text: PDF
GTID: 2478390017496098
Subject: Engineering
Abstract/Summary:
Recently, research on sparse signal representation has played an important role in signal processing and statistical mathematics. The term 'sparsity' refers to an efficient representation of a given signal in a certain domain, such that relatively few coefficients have large amplitudes. Several applications, such as denoising, deblurring, and separation problems, can exploit sparsity through the use of the L0 pseudo-norm or the L1 norm in the regularization term of a cost function. However, both the L0 pseudo-norm and the L1 norm ignore the inter-dependency between neighboring coefficients. To overcome this, this thesis considers sparse coefficients possessing a group (cluster) structure. In the first part of this thesis we use convex group sparsity. We also model the groups as fully overlapping, so that the proposed estimator is translation-invariant. We apply this optimization approach, with an overlapping group sparsity penalty, to the denoising problem. We propose an algorithm based on the majorization-minimization (MM) framework, which we call overlapping group shrinkage (OGS). We develop a method for setting the regularization parameter according to the noise level, which can be pre-calculated off-line. We apply the algorithm to the problem of speech enhancement. OGS exhibits superior results in terms of both SNR and perceptual quality, compared to traditional and state-of-the-art methods.

Furthermore, we develop an enhancement of the OGS algorithm by utilizing a non-convex penalty function. We consider non-smooth, non-convex penalty functions in the group sparsity framework, in which one parameter can be adjusted to control the degree of non-convexity. Specifically, we restrict the range of this parameter so as to ensure that the overall cost function is convex even though the penalty term is non-convex. In this way, several benefits of convex optimization formulations (uniqueness and robustness) are maintained. The computational complexity of the algorithm, also based on MM, is almost the same as that of OGS. The improvement afforded by the non-convex penalty function is also demonstrated on a speech enhancement problem.

We also address a decomposition problem, where the observed signal comprises a mixture of two components: a (group) sparse one and a low-pass one. We propose a method, formulated as an optimization problem, to un-mix the components. Our algorithm, based on the MM framework and denoted LPF/OGS, calls for the solution of a banded system of linear equations, which can be solved efficiently. Furthermore, non-convex penalty functions are also proposed to further enhance the group sparse component. In the experimental results we show that LPF/OGS successfully extracts the artifacts, modelled as a group sparse component, from a near-infrared spectroscopy (NIRS) signal.

Last, we describe an extension to total variation denoising wherein it is assumed that the first-order difference of the unknown signal is not only sparse, but also that large values of the first-order difference do not generally occur in isolation. This approach is designed to alleviate the staircase artifact that often arises in total-variation-based solutions. For one-dimensional signals, we propose a convex cost function, and an iterative algorithm is derived using the MM framework. The algorithm is both fast-converging and computationally efficient due to the use of fast solvers for banded systems.
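To make the core OGS iteration concrete, the following is a minimal NumPy sketch of the MM update for a real-valued one-dimensional signal. The function name, group size, iteration count, and the small eps guard are illustrative assumptions, not the thesis's exact implementation (which, for speech enhancement, operates on complex STFT coefficients).

```python
import numpy as np

def ogs_denoise(y, lam, K=3, n_iter=25, eps=1e-12):
    """Overlapping group shrinkage (OGS) denoising via MM (illustrative sketch).

    Minimizes 0.5*||y - x||^2 + lam * sum_s ||x[s:s+K]||_2, where the sum
    runs over all fully overlapping groups of size K, so the resulting
    estimator is translation-invariant.
    """
    y = np.asarray(y, dtype=float)
    x = y.copy()
    h = np.ones(K)                 # moving-sum kernel, one tap per group member
    for _ in range(n_iter):
        # Energy of every length-K group (one entry per group start index).
        e = np.convolve(x**2, h, mode='valid')
        # r[i] = sum, over the K groups containing sample i, of 1/||group||_2;
        # the second moving sum redistributes the reciprocal group norms.
        r = np.convolve(1.0 / (np.sqrt(e) + eps), h, mode='full')
        # Closed-form minimizer of the quadratic MM majorizer.
        x = y / (1.0 + lam * r)
    return x
```

Each iteration amounts to two moving sums and a pointwise division, so the cost per iteration is linear in the signal length.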
For two-dimensional signals, we optimize the cost function using the alternating direction method of multipliers (ADMM) combined with the MM framework. We demonstrate improvement in terms of PSNR when comparing the proposed scheme to the standard total-variation denoising (TVD) scheme.
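As an illustration of the banded systems exploited by the one-dimensional total-variation extension above, here is a hedged SciPy sketch: the group-norm weights are computed on the first-order difference Dx exactly as in the OGS sketch, and each MM step then solves a symmetric tridiagonal system in linear time. Again, the function name and the parameters K, n_iter, and eps are assumptions for illustration, not the thesis's exact implementation.

```python
import numpy as np
from scipy.linalg import solveh_banded

def gstv_denoise(y, lam, K=2, n_iter=30, eps=1e-10):
    """Group-sparse TV denoising (1D) via MM (illustrative sketch).

    Minimizes 0.5*||y - x||^2 + lam * sum_s ||(Dx)[s:s+K]||_2, where D is
    the first-order difference operator and the groups fully overlap.
    Each MM step solves (I + lam * D^T diag(r) D) x = y, a symmetric
    tridiagonal system, so the per-iteration cost is linear in N.
    """
    y = np.asarray(y, dtype=float)
    N = len(y)
    x = y.copy()
    h = np.ones(K)
    for _ in range(n_iter):
        u = np.diff(x)                          # u = Dx, length N-1
        e = np.convolve(u**2, h, mode='valid')  # group energies of u
        r = np.convolve(1.0 / (np.sqrt(e) + eps), h, mode='full')  # length N-1
        # Symmetric banded storage (upper form) for solveh_banded:
        # row 0 holds the superdiagonal, row 1 the main diagonal.
        ab = np.zeros((2, N))
        ab[0, 1:] = -lam * r
        ab[1, :] = 1.0
        ab[1, :-1] += lam * r
        ab[1, 1:] += lam * r
        x = solveh_banded(ab, y)
    return x
```

Because the system matrix is tridiagonal and positive definite, the banded Cholesky solve avoids ever forming the full N-by-N matrix, which is what makes the algorithm fast even for long signals.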
Keywords/Search Tags: Sparsity, MM framework, Algorithm, Sparse, Signal, Term, Overlapping, Cost function