
The Application Of Greedy Algorithms In Sparse Learning

Posted on: 2017-06-17
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C Chen
Full Text: PDF
GTID: 1318330512463394
Subject: Basic mathematics
Abstract/Summary:
Sparse representation is the most representative of the linear representation methods and has been applied in a wide range of fields, such as signal processing, image processing, machine learning, and computer vision. These methods can be categorized from several points of view. According to the norm used in the sparsity constraint, they fall into five groups: 1) l0-norm minimization; 2) lp-norm minimization (0 < p < 1); 3) l1-norm minimization; 4) l2,1-norm minimization; 5) l2-norm minimization. According to the algorithmic approach, they fall into four types: 1) greedy strategy approximation; 2) constrained optimization; 3) proximity-algorithm-based optimization; 4) homotopy-algorithm-based sparse representation.

Sparse representation based on the l0 norm produces genuinely sparse solutions, but the corresponding optimization problem is NP-hard, and greedy strategy approximation was proposed precisely to cope with this difficulty. Compared with other regularization-based algorithms, greedy algorithms are simple, easy to implement, and computationally very attractive. A greedy algorithm makes a locally optimal choice at each iteration in order to approximate the globally optimal solution; the price of this convenience is that the relation between the approximate solution and the global optimum is not known in advance. Although they do not apply to every problem, greedy algorithms can attain the global optimum in many specific settings, so proofs of their validity are essential. Following the design philosophy of greedy algorithms, this dissertation proposes several types of greedy algorithms for the learning problems below and analyzes their statistical performance.

1) Greedy algorithms in fixed-design Gaussian regression

We propose a penalized empirical relaxed greedy algorithm for fixed-design Gaussian regression and analyze its efficiency by establishing oracle inequalities. The algorithm proceeds in two main steps: first, a sequence of estimators is obtained by the empirical relaxed greedy algorithm; second, the optimal estimator is selected by penalizing the iteration number of the algorithm and the cardinality of the dictionary. P. Massart established oracle inequalities for the Lasso and proposed a fairly general model selection lemma. We generalize this lemma slightly and apply it in three cases, namely a finite dictionary, a countably infinite dictionary, and an uncountably infinite dictionary, thereby establishing oracle inequalities for our algorithm. These results show that the risk bounds are governed by the approximation error, the penalty term on the iteration number, and the noise level. A convergence rate is derived when the target function belongs to the convex hull of the dictionary.
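To make the two-stage procedure concrete, here is a minimal sketch of an empirical relaxed greedy iteration followed by penalized selection of the stopping index. The relaxation weight 2/(k+1), the function names, and the form of the penalty are illustrative assumptions, not the dissertation's exact construction (in particular, the dissertation's penalty also involves the cardinality of the dictionary).

```python
import numpy as np

def empirical_relaxed_greedy(y, D, n_steps):
    """Empirical relaxed greedy algorithm: a minimal sketch.

    y : (n,) response vector from a fixed-design regression model
    D : (n, p) matrix whose columns are the dictionary elements
    Returns the sequence of fitted vectors f_1, ..., f_K.
    """
    n, p = D.shape
    f = np.zeros(n)
    estimators = []
    for k in range(1, n_steps + 1):
        lam = 2.0 / (k + 1)  # standard relaxation weight (an assumption here)
        best_f, best_risk = f, np.inf
        for j in range(p):
            g = D[:, j]
            # optimal scalar coefficient for mixing g into the current fit
            beta = g @ (y - (1.0 - lam) * f) / (lam * (g @ g))
            candidate = (1.0 - lam) * f + lam * beta * g
            risk = np.mean((y - candidate) ** 2)
            if risk < best_risk:
                best_f, best_risk = candidate, risk
        f = best_f
        estimators.append(f)
    return estimators

def penalized_selection(y, estimators, pen):
    """Select the iterate minimizing empirical risk + pen(k)."""
    scores = [np.mean((y - f) ** 2) + pen(k)
              for k, f in enumerate(estimators, start=1)]
    return estimators[int(np.argmin(scores))]
```

A penalty such as pen(k) = c * k * log(p) / n would encode the dependence on the iteration number and the dictionary size described above.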
2) Greedy algorithms for sparsity-constrained optimization

We consider the sparsity-constrained optimization problem with a general objective function (not only the square loss) and put forward a sparse conjugate gradient algorithm, a conjugate gradient matching pursuit algorithm, and a variant of the latter. Yuan and Lu (2009) improved the traditional PRP formula and proposed a new conjugate gradient direction; on this basis they proved the global convergence of the resulting conjugate gradient algorithm and obtained good experimental results. Inspired by their work, we establish three types of greedy algorithms based on conjugate-direction learning to solve a broader class of sparsity-constrained optimization problems (a schematic sketch is given after part 3 below). We find that when the objective function satisfies restricted strong convexity and restricted smoothness conditions, the algorithms converge at linear rates.

3) Greedy algorithms for complex-valued sparse signal reconstruction

This part focuses on complex-valued sparse reconstruction problems. Since the complex square loss, viewed as a function of a complex variable, is complex-differentiable only at the origin, learning algorithms built on the ordinary gradient direction cannot be applied in the complex case. D. H. Brandwood (1983) proposed a partial complex vector gradient operator analogous to the real gradient operator, and showed that the partial complex vector gradient direction corresponds to the direction of maximum rate of change of the objective function. Building on his work, we present a sparse complex gradient algorithm, a complex gradient matching pursuit algorithm, and a variant of the latter. Our analysis shows that when the observation matrix satisfies a D-RIP condition, the complex-valued square loss has properties analogous to the strong convexity and strong smoothness conditions, and through these properties we prove linear learning rates for our algorithms.
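For reference, Brandwood's operator is usually written in Wirtinger form; the display below is a standard statement of that convention for a real-valued objective f of z = x + iy (the factor 1/2 varies across references):

```latex
% Conjugate cogradient in Wirtinger form (Brandwood-style convention)
\nabla_{z^{*}} f
  = \frac{\partial f}{\partial z^{*}}
  = \frac{1}{2}\left( \frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y} \right),
\qquad z = x + i y .
```

In particular, for the complex square loss f(z) = ||Az - b||^2 this gives \nabla_{z^*} f = A^H (Az - b), which plays the role in the complex algorithms that the ordinary gradient plays in the real case.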
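Returning to the conjugate gradient matching pursuit of part 2, the sketch below shows how such an algorithm might be organized. It is a schematic under explicit assumptions, not the dissertation's exact method: the 2s-coordinate selection and s-sparse pruning follow the usual GradMP/CoSaMP pattern, the inner loop uses a plain PRP direction with a fixed step size (Yuan and Lu's modified formula would replace it), and all names and parameters are illustrative.

```python
import numpy as np

def cg_matching_pursuit(f_grad, dim, s, n_outer=20, n_inner=10, step=0.1):
    """Schematic conjugate gradient matching pursuit for
    min f(x) subject to ||x||_0 <= s, where f_grad returns the gradient
    of a restricted strongly convex and smooth objective f."""
    x = np.zeros(dim)
    for _ in range(n_outer):
        g = f_grad(x)
        # 1) pick the 2s coordinates where the gradient is largest
        omega = np.argsort(-np.abs(g))[:2 * s]
        # 2) merge them with the current support
        support = np.union1d(omega, np.nonzero(x)[0]).astype(int)
        # 3) approximately minimize over the merged support using
        #    conjugate (PRP-type) directions restricted to that support
        b = x.copy()
        d = np.zeros(dim)
        g_prev = None
        for _ in range(n_inner):
            gr = np.zeros(dim)
            gr[support] = f_grad(b)[support]
            if g_prev is None:
                d = -gr
            else:
                beta = max(0.0, gr @ (gr - g_prev) / (g_prev @ g_prev + 1e-12))
                d = -gr + beta * d          # PRP conjugate direction
            b = b + step * d
            g_prev = gr
        # 4) prune back to the s entries of largest magnitude
        x = np.zeros(dim)
        keep = np.argsort(-np.abs(b))[:s]
        x[keep] = b[keep]
    return x
```

For example, for the least squares objective f(x) = (1/2)||Ax - y||^2, one would pass f_grad = lambda x: A.T @ (A @ x - y).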
Keywords/Search Tags:Sparse learning, learning rate, greedy algorithm, conjugate gradient matching pursuit, partial complex vector gradient operator