
Parameter Choices For Sparse Regularization With The L1 Norm

Posted on: 2024-05-22  Degree: Doctor  Type: Dissertation
Country: China  Candidate: Q R Liu  Full Text: PDF
GTID: 1528307064974739  Subject: Computational Mathematics
Abstract/Summary:
Motivated by the big-data nature of practical applications, sparse learning methods have received great attention and become a research hotspot in machine learning. Learning a function from a finite number of observed data is a typical ill-posed problem, which can be treated effectively by regularization. Empirical results show that regularization with the l1 norm promotes sparsity of the regularized solution, and the resulting sparse representation of the learned function is essential for easing the computational burden caused by big data. Regularization with the l1 norm is widely used in statistics, machine learning, signal processing and image processing. This dissertation aims to understand, from a theoretical viewpoint, how the choice of the regularization parameter in the l1-norm regularization problem balances the sparsity of the learned solution against its approximation error, and to propose strategies for choosing the regularization parameter.

We consider a regularization problem whose objective function consists of a convex fidelity term and a regularization term given by the l1 norm composed with a linear transform. Firstly, we establish a characterization of the sparsity, under the transform matrix, of the regularized solution. When the objective function is block-separable, or when partial information about the regularized solution is available, this characterization yields a parameter choice strategy under which the regularization problem admits a solution with a prescribed sparsity level. When the objective function is not block-separable, the characterization of the parameter depends on the corresponding solution and therefore cannot be used directly as a parameter choice strategy; in this case, we propose an iterative algorithm that simultaneously determines the regularization parameter and the corresponding solution with a prescribed sparsity level. Secondly, we study choices of the regularization parameter for which the regularization term both alleviates the ill-posedness and promotes sparsity of the resulting regularized solution. Finally, we apply the proposed parameter choice strategies to various instances. Numerical experiments demonstrate that the proposed algorithm is effective and efficient, and that the chosen regularization parameters balance the sparsity of the regularized solution against its approximation to the minimizer of the fidelity term.
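To make the idea of choosing the regularization parameter to meet a prescribed sparsity level concrete, the sketch below solves a simplified instance of the model described above: a least-squares fidelity term with the identity as the linear transform, an ISTA inner solver, and a bisection update of the parameter. All of these choices are illustrative assumptions; this is not the characterization or the iterative algorithm developed in the dissertation, only a minimal example of tuning the parameter until the solution has a target number of nonzeros.

```python
# Minimal sketch (illustrative assumptions, not the dissertation's algorithm):
# choose lam so that the solution of
#     min_x  (1/2)||A x - y||_2^2 + lam * ||x||_1
# has a prescribed number of nonzero entries.
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for l1-regularized least squares."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the fidelity term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def lam_for_sparsity(A, y, target_nnz, lam_lo=0.0, lam_hi=None, tol=1e-6):
    """Bisection on lam: larger lam yields a sparser regularized solution."""
    if lam_hi is None:
        lam_hi = np.max(np.abs(A.T @ y))   # at or above this value the solution is zero
    for _ in range(50):
        lam = 0.5 * (lam_lo + lam_hi)
        x = ista(A, y, lam)
        nnz = np.count_nonzero(np.abs(x) > 1e-8)
        if nnz > target_nnz:
            lam_lo = lam                   # solution too dense: increase lam
        elif nnz < target_nnz:
            lam_hi = lam                   # solution too sparse: decrease lam
        else:
            return lam, x
        if lam_hi - lam_lo < tol:
            break
    return lam, x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, size=10, replace=False)] = rng.standard_normal(10)
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    lam, x_hat = lam_for_sparsity(A, y, target_nnz=10)
    print("chosen lam:", lam, "nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-8))
```

The sketch reflects the general trade-off studied in the dissertation: driving the parameter up promotes sparsity, driving it down improves fidelity, and a parameter choice strategy selects the value at which the solution attains the prescribed sparsity level.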
Keywords/Search Tags:sparse learning, regularization with the l1 norm, parameter choice strategy, characterization, approximation error