The rapid development of modern science and technology has led to an explosive growth of data, making multi-dimensional data (tensor data) ubiquitous. The traditional approach converts tensor data into vectors or matrices, which loses the inherent geometric and statistical properties of the data. Studies based on tensors, on the other hand, show that tensor models can make full use of multilinear structures to better describe the information. As a result, tensors have a wide range of applications in computer vision, pattern recognition, and signal processing. However, due to sensor failure, transmission loss, and other reasons, tensor data often suffer from missing entries and noise pollution. Therefore, this thesis focuses on how to recover clean tensor data from degraded data.

First, we consider the low-rank and sparse tensor completion problem of recovering a low-rank tensor from limited samples and sparsely corrupted observations, especially observations corrupted by impulse noise. A convex relaxation of this problem minimizes a weighted combination of the tubal nuclear norm and an l1-norm sparsity term. However, the l1-norm may yield biased estimators and cannot obtain high-precision solutions. To overcome this disadvantage, we propose a nonconvex low-rank and sparse tensor completion model, which minimizes a weighted combination of the tubal nuclear norm, a nonconvex sparsity term, and a quadratic data-fitting term, where the nonconvex term has a difference-of-convex-functions (DC) structure, such as the Minimax Concave Penalty (MCP) and the Smoothly Clipped Absolute Deviation (SCAD) functions. Further, we present a Gauss-Seidel difference of convex functions algorithm (GS-DCA) that solves the resulting optimization model quickly and effectively by using a linearization technique. By using the Kurdyka-Łojasiewicz (KL) property, we prove that the sequence generated by GS-DCA converges to a critical point of the proposed model. Based on GS-DCA, we propose an extrapolation technique to
improve the recovery performance of GS-DCA. Finally, numerical experiments on color images, hyperspectral images, magnetic resonance imaging (MRI) data, and videos verify that the proposed method and algorithm are superior to other compared methods in terms of computing speed and effectiveness.

Second, we focus on the tensor completion problem for third-order tensors, which recovers a low-rank tensor from partial observations corrupted by impulse noise. In fact, the tubal nuclear norm is the l1-norm of the singular values of all frontal slices of the tensor in the Fourier domain. Hence, the tubal nuclear norm results in a biased estimator, just as the l1-norm does. To overcome this shortcoming, we propose a nonconvex relaxation model with a DC structure, composed of a nonconvex low-rank term and a nonconvex sparse term, with linear constraints and bound constraints. In addition, we prove the equivalence of global solutions between the low-rank and sparse tensor completion problem and the proposed nonconvex model, and we show theoretically that the nonconvex relaxation method is closer to the original problem than the convex relaxation method. Then, a proximal majorization-minimization (PMM) algorithm is developed to solve the proposed nonconvex model by using a linearization technique, and the global convergence of the proposed algorithm is proved by using the KL property. Finally, extensive numerical experiments on color images and multispectral images show the high efficiency of the proposed model. They also show that a model in which both the low-rank term and the sparse term are relaxed nonconvexly yields more accurate solutions than a convex model or a model with only one nonconvex term.

Finally, we continue by studying error bounds for the low-rank and sparse
tensor completion problem, that is, the error bounds between the solution of each subproblem generated by the PMM algorithm and the ground truth. In the existing research on low-rank and sparse tensor completion problems, most works focus mainly on algorithms for solving the model and their convergence, and such algorithms generally converge only to a critical point of the model. However, there is little research on the error bounds between the computed solution and the ground truth, especially when the model contains both low-rank and sparse terms. Therefore, for the low-rank and sparse tensor completion problem, we establish a nonasymptotic recovery error bound. Then, we prove that when the given initial estimator does not deviate too much from the ground truth, the error bound obtained by our model is lower than that of the convex optimization model. The two linear terms of the PMM algorithm can be viewed as a rank-correction term and a sparsity-correction term constructed from the initial estimator. Our recovery error bounds also suggest a criterion for constructing suitable rank-correction and sparsity-correction functions. Finally, numerical experiments show that as the number of outer iterations increases, the upper error bounds continue to decrease.
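To make the DC structure mentioned above concrete, the MCP and SCAD penalties can each be written as a difference of two convex functions, which is what the linearization steps in GS-DCA and PMM exploit. The following is a minimal NumPy sketch of these penalties and of one standard DC split for MCP; the parameter names `lam`, `gamma`, and `a` and the default values are illustrative, not taken from the thesis.

```python
import numpy as np

def mcp(t, lam=1.0, gamma=2.0):
    """Minimax Concave Penalty: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    and the constant gamma*lam^2/2 beyond that threshold."""
    x = np.abs(np.asarray(t, dtype=float))
    return np.where(x <= gamma * lam,
                    lam * x - x**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def mcp_dc_parts(t, lam=1.0, gamma=2.0):
    """One DC decomposition MCP(t) = g(t) - h(t) with both parts convex:
    g(t) = lam*|t| and
    h(t) = t^2/(2*gamma)            for |t| <= gamma*lam,
           lam*|t| - gamma*lam^2/2  otherwise."""
    x = np.abs(np.asarray(t, dtype=float))
    g = lam * x
    h = np.where(x <= gamma * lam,
                 x**2 / (2 * gamma),
                 lam * x - gamma * lam**2 / 2)
    return g, h

def scad(t, lam=1.0, a=3.7):
    """Smoothly Clipped Absolute Deviation penalty: linear near zero,
    quadratic in the middle range, constant for large |t|."""
    x = np.abs(np.asarray(t, dtype=float))
    p_small = lam * x
    p_mid = (2 * a * lam * x - x**2 - lam**2) / (2 * (a - 1))
    p_big = lam**2 * (a + 1) / 2
    return np.where(x <= lam, p_small, np.where(x <= a * lam, p_mid, p_big))
```

Because both penalties are constant for large arguments, large entries are not shrunk, which is the informal reason they avoid the bias of the l1-norm; the convex part `h` is what a DC algorithm linearizes at the current iterate.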