The alternating direction method of multipliers (ADM or ADMM) stands out among the many methods for solving convex optimization problems with separable objective functions. Besides global convergence, the ADMM typically enjoys a fast convergence rate, often linear or sublinear. As a decomposition method, the main idea of the ADMM is to exploit the separability of the objective function to decompose the large-scale original problem into small, simplified subproblems, which are then solved by minimizing over x and y alternately. Through this efficiency and simplicity, the ADMM reduces the difficulty of solving the original problem, and it is therefore often applied to large-scale optimization problems. As a fundamental optimization method, the ADMM has wide applications in traditional optimization problems such as convex optimization, variational inequalities, and partial differential equations. In recent years, with the arrival of the big-data era, the ADMM has found even wider application in areas such as machine learning, image processing, support vector machines, and sparse representation.

The main purpose of this article is to study the generalized ADMM, which differs from the classic ADMM in two respects: first, the generalized ADMM adds quadratic penalty terms to the y- and x-subproblems and allows the symmetric matrix P to be indefinite; second, the generalized ADMM does not fix the relaxation parameter α = 1 as the classic ADMM does, since α can also affect the convergence rate significantly. In this paper, we apply the generalized ADMM to solve the elastic net (augmented L1) model, and then analyze the influence of different choices of P and Q on the convergence rate of the algorithm. Numerical experiments also show that the Lagrangian penalty parameter β has a significant effect on the convergence rate: when β is less than 5, the generalized ADMM usually needs more than 500 iterations, whereas when β lies in the interval [50, 100], the algorithm converges after a few dozen iterations. The experiments further indicate that the generalized ADMM is especially efficient at solving large-scale, low-rank optimization models, requiring only a few iterations to reach the optimal solution.
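The alternating x/y minimization described above can be sketched for the elastic net model. The following is a minimal illustration of a standard scaled-dual ADMM with an over-relaxation factor `alpha` (so `alpha = 1` recovers the classic scheme), not the paper's generalized variant with proximal matrices P and Q; the function name, the penalty value `beta = 50`, and the problem sizes are illustrative choices, with `beta` picked from the [50, 100] range the experiments report as fast.

```python
import numpy as np

def elastic_net_admm(A, b, lam1, lam2, beta=50.0, alpha=1.0, n_iter=200):
    """ADMM sketch for the elastic net
        min 0.5*||A x - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
    via the splitting x = z. `beta` is the Lagrangian penalty parameter;
    `alpha` is a relaxation factor (alpha = 1 gives the classic ADMM).
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    # The x-subproblem is quadratic; factor its matrix once outside the loop.
    L = np.linalg.cholesky(AtA + beta * np.eye(n))
    for _ in range(n_iter):
        # x-update: solve (A'A + beta*I) x = A'b + beta*(z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + beta * (z - u)))
        x_hat = alpha * x + (1 - alpha) * z  # over-relaxation step
        v = x_hat + u
        # z-update: soft-thresholding, additionally shrunk by the L2 term
        z = np.sign(v) * np.maximum(beta * np.abs(v) - lam1, 0.0) / (beta + lam2)
        u = u + x_hat - z  # dual ascent step
    return z
```

Each pass performs exactly the alternating minimization over the two blocks plus a dual update; only the cheap soft-thresholding and back-substitutions run inside the loop, which is why a well-chosen beta lets the method converge in few iterations.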