In recent years, many problems in machine learning and engineering have gravitated toward optimization, and more and more machine learning problems can be formulated as optimization problems. This paper studies the theoretical properties and algorithms of sparse optimization. Because the Huber function combines the advantages of least absolute deviation and least squares, it is both smooth and robust; the loss function in our model is therefore the Huber function, which is smooth and convex, while the penalty term is the cardinality (ℓ0) function. We use the minimax concave penalty (MCP), a continuous function, to approximate the ℓ0 norm and obtain a relaxation of the Huber problem; we then investigate the solutions of the relaxation problem and of the original problem, and discuss their relationship and properties.

First, we discuss the global optimal solutions of the original ℓ0 model and of the relaxation model based on the Huber loss in the one-dimensional case, and analyze the relationship between their global optimal solutions. We also define stationary points of the MCP relaxation problem and relate them to its solutions. We establish lower-bound properties of the solutions, analyze the relationship between the global optimal solutions of the MCP relaxation problem and of the original ℓ0-regularized problem, and prove that under certain conditions the two models share the same global optimal solutions and the same optimal value.

The second part develops the algorithm, which is based on the accelerated proximal gradient method. We derive a closed-form expression for the proximal operator of the MCP regularizer, design the algorithmic framework, and prove the convergence of the algorithm.

The last part reports the experiments. In the regression-fitting experiment, Huber regression is compared with least-squares regression, showing that the Huber loss is robust in the presence of outliers. In the sparse-signal-recovery experiment, comparing the relative errors under the Huber loss and the least-squares loss before and after adding noise shows that the Huber function recovers noisy data better than least squares; for randomly generated signals, the signals are accurately recovered with the dimension varying from 100 to 800 and the sparsity varying from 50 to 60.
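For reference, a standard parameterization of the Huber loss and the MCP penalty, together with the ℓ0 model and its MCP relaxation described above, can be written as follows. This is a common form from the literature; the exact threshold δ and MCP parameters λ, γ used in the thesis are assumptions here.

```latex
% Huber loss on a residual r, with threshold \delta > 0 (assumed parameterization):
h_\delta(r) =
\begin{cases}
  \tfrac{1}{2} r^2,                  & |r| \le \delta, \\
  \delta |r| - \tfrac{1}{2}\delta^2, & |r| > \delta.
\end{cases}
% Original sparse model: Huber loss plus an \ell_0 (cardinality) penalty:
\min_{x \in \mathbb{R}^n} \ \sum_{i=1}^{m} h_\delta(a_i^\top x - b_i) + \lambda \|x\|_0 .
% MCP, a continuous approximation of the \ell_0 term, with \lambda > 0, \gamma > 1:
\varphi_{\lambda,\gamma}(t) =
\begin{cases}
  \lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda, \\[4pt]
  \dfrac{\gamma\lambda^2}{2},         & |t| > \gamma\lambda.
\end{cases}
% MCP relaxation of the original model:
\min_{x \in \mathbb{R}^n} \ \sum_{i=1}^{m} h_\delta(a_i^\top x - b_i) + \sum_{j=1}^{n} \varphi_{\lambda,\gamma}(x_j).
```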
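The closed-form proximal operator of the MCP regularizer mentioned in the algorithm part is the well-known firm-thresholding formula. A minimal NumPy sketch, assuming the parameterization above with γ > 1 (the function name `prox_mcp` is ours, not the thesis's):

```python
import numpy as np

def prox_mcp(v, lam, gamma):
    """Closed-form proximal operator of the MCP penalty (firm thresholding).

    Solves argmin_x  phi_{lam,gamma}(x) + 0.5 * (x - v)**2  componentwise,
    assuming gamma > 1 so each scalar subproblem is strongly convex.
    """
    a = np.abs(v)
    out = np.zeros_like(v)                      # |v| <= lam: threshold to zero
    # middle region: shrink, but less aggressively than soft thresholding
    mid = (a > lam) & (a <= gamma * lam)
    out[mid] = np.sign(v[mid]) * gamma * (a[mid] - lam) / (gamma - 1.0)
    # outer region: MCP is constant there, so the prox is the identity
    big = a > gamma * lam
    out[big] = v[big]
    return out
```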
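A sketch of the accelerated proximal gradient framework from the algorithm part, written in FISTA style with a fixed stepsize 1/L, where L = ‖A‖₂² is a Lipschitz constant of the Huber-loss gradient (since h_δ'' ≤ 1); the thesis's exact scheme and stepsize rule may differ. It uses the identity t·φ_{λ,γ} = φ_{tλ, γ/t} to reuse `prox_mcp` for the scaled prox.

```python
import numpy as np

def huber_grad(A, b, x, delta):
    """Gradient of sum_i h_delta((Ax - b)_i); h_delta' is the clip function."""
    r = A @ x - b
    return A.T @ np.clip(r, -delta, delta)

def apg_huber_mcp(A, b, lam, gamma, delta, n_iter=500):
    """FISTA-style accelerated proximal gradient sketch for
    min_x  sum_i h_delta((Ax - b)_i) + sum_j phi_{lam,gamma}(x_j)."""
    L = np.linalg.norm(A, 2) ** 2    # spectral norm squared: Lipschitz constant
    step = 1.0 / L                   # fixed stepsize; needs gamma / step > 1
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        # prox of step * phi_{lam,gamma} equals the prox of phi_{step*lam, gamma/step}
        x_new = prox_mcp(y - step * huber_grad(A, b, y, delta),
                         step * lam, gamma / step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```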
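As an illustration of the sparse-signal-recovery setup, the two sketches above can be combined as follows; the sizes, noise level, and parameters here are arbitrary placeholders, not the thesis's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 100, 10                        # samples, dimension, sparsity (illustrative)
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(n)
b[rng.choice(n, 5, replace=False)] += 5.0     # inject a few gross outliers

x_hat = apg_huber_mcp(A, b, lam=0.01, gamma=4.0, delta=0.5)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```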