Sparse optimization problems arise from a remarkable variety of applications, including machine learning, signal and image processing, data science, medical imaging, compressed sensing, computer vision, statistical regression, and computational biology. Sparse optimization is an active research topic at the intersection of applied mathematics, electrical engineering, operations research, computer science, and information science, and it has attracted much cross-disciplinary attention. In recent years, its mathematical theory and algorithms have developed rapidly. In this dissertation, we focus on numerical methods for the sparse least squares regression problem with a probabilistic simplex constraint, which arises in applications such as computational biology and hyperspectral unmixing, and for the sparse stochastic matrix factorization used in document clustering.

In Chapter 2, we propose a modified orthogonal matching pursuit for solving a special sparse least squares regression problem with a probabilistic simplex constraint: the construction of a sparse probabilistic Boolean network. Our method chooses an index via a new greedy strategy, adds it to the target support, and then updates the possible constituent Boolean networks and the corresponding selection probabilities so that the transition probability matrix of the constructed probabilistic Boolean network is closest, in the least squares sense, to the prescribed transition probability matrix. In this way, we identify only a few major constituent Boolean networks together with their selection probabilities. We give necessary conditions and sufficient conditions that guarantee our method recovers a sparse probabilistic Boolean network. Numerical examples show that the proposed method can effectively construct a sparse probabilistic Boolean network.

In Chapter 3, we study the general sparse least squares regression problem with a probabilistic simplex constraint. To find a sparse solution, we reformulate the least squares regression problem as a nonconvex and nonsmooth ℓ1-regularized minimization problem over the unit sphere and present a geometric proximal gradient method for solving the regularized problem. We provide explicit expressions for the global solutions of the subproblems involved in our method and establish its global convergence. Numerical experiments, including examples on Lasso problems and hyperspectral applications, demonstrate the effectiveness of the proposed method.
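To make the Chapter 3 scheme concrete, the following is a minimal sketch of one such proximal gradient iteration on the unit sphere, assuming the common sphere reformulation x = v ∘ v of the probabilistic simplex constraint; the function names, the step size rule, and the stopping criteria are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def soft_threshold(y, mu):
    """Componentwise soft-thresholding: the prox of mu * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - mu, 0.0)

def sphere_prox_grad(A, b, lam, iters=500, tol=1e-8):
    """Sketch of a proximal gradient method for
        min_{||v||_2 = 1}  ||A (v*v) - b||_2^2 + lam * ||v||_1,
    a sphere reformulation of simplex-constrained least squares via x = v*v
    (so that x >= 0 and sum(x) = ||v||_2^2 = 1 automatically)."""
    n = A.shape[1]
    v = np.full(n, 1.0 / np.sqrt(n))                       # feasible start on the sphere
    step = 1.0 / (4.0 * np.linalg.norm(A, 2) ** 2 + 1.0)   # crude, conservative step size
    for _ in range(iters):
        r = A @ (v * v) - b                                # residual at x = v∘v
        grad = 4.0 * v * (A.T @ r)                         # gradient via the chain rule
        w = soft_threshold(v - step * grad, step * lam)
        if not np.any(w):                                  # all entries thresholded away
            break
        # The sphere-constrained prox subproblem has the explicit global solution
        # "soft-threshold, then renormalize" whenever w is nonzero.
        v_new = w / np.linalg.norm(w)
        if np.linalg.norm(v_new - v) < tol:
            return v_new * v_new
        v = v_new
    return v * v                                           # sparse point on the simplex
```

The explicit renormalized soft-thresholding step reflects the kind of closed-form subproblem solution the abstract refers to; a practical implementation would also need a principled step size and convergence safeguards.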
In Chapter 4, we consider a special nonnegative matrix factorization: the sparse stochastic matrix factorization. In classic regularization methods, it is often difficult to choose a regularization parameter that achieves a good tradeoff between the residual term and the regularization term. Instead, we apply an ℓ0 constraint directly to measure the sparseness in the sparse stochastic matrix factorization. Given the factorization rank and a prescribed sparsity level, we reformulate the sparse stochastic matrix factorization as an unconstrained nonconvex and nonsmooth minimization problem and introduce a row-wise update algorithm. A distinctive feature of our method is that it combines the alternating minimization method with the proximal alternating linearized minimization method to update the two matrix factors. We establish the global convergence of the proposed method and show that the generated sequence converges to a special critical point of the cost function, which is a global minimizer over one matrix factor as a whole and is nearly a global minimizer over each row vector of the other matrix factor. Numerical experiments on both synthetic and real data sets demonstrate that the proposed algorithm is more effective than the proximal alternating linearized minimization method.
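As an illustration of what a row-wise proximal linearized update under an ℓ0-plus-simplex constraint can look like, here is a hedged sketch; the model P ≈ WH with row-stochastic, row-sparse W, the projection routines, and all names are assumptions made for the example, not the dissertation's algorithm or code.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto the probability simplex (sort-based)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, y.size + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(y - css[rho] / (rho + 1), 0.0)

def project_sparse_simplex(y, s):
    """Projection onto {x >= 0, sum(x) = 1, ||x||_0 <= s}: keep the s largest
    entries of y, then project that subvector onto the simplex (a known
    sort-based construction for this sparse-simplex projection)."""
    x = np.zeros_like(y)
    support = np.argsort(y)[-s:]
    x[support] = project_simplex(y[support])
    return x

def palm_row_sweep(W, H, P, s):
    """One PALM-style proximal sweep over the rows of W for
        min ||P - W H||_F^2  s.t. each row of W is stochastic with <= s nonzeros."""
    L = 2.0 * np.linalg.norm(H @ H.T, 2) + 1e-12    # Lipschitz constant of each row gradient
    for i in range(W.shape[0]):
        g = 2.0 * (W[i] @ H - P[i]) @ H.T           # gradient of the i-th row residual
        W[i] = project_sparse_simplex(W[i] - g / L, s)
    return W
```

In a full scheme of this kind, the other factor would be updated as a whole between such sweeps, for instance by an exact simplex-constrained least squares solve, which is consistent with the abstract's statement that the limit point is a global minimizer over one factor and nearly a global minimizer over each row of the other.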