
Study Of Matrix Decomposition Theory And Algorithm In Image Processing

Posted on: 2021-09-26
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X L Zhu
Full Text: PDF
GTID: 1488306050463814
Subject: Applied Mathematics
Abstract/Summary:
In recent years, vast amounts of raw data have been generated at an exponential rate in networks, engineering and scientific applications, and business services. Matrices are the classical form for representing such big data; however, the storage and computation of large-scale matrices are challenging tasks. Fortunately, in practical applications high-dimensional data usually lie on low-dimensional manifolds, so large-scale matrices can often be approximated by low-rank ones. Low-rank matrix approximation appears in countless applications, such as computational mathematics, statistics, genomics, text documents, social networks, and machine learning. Generally, a low-rank approximation model is formulated as an optimization problem, and the matrix factorization strategy often makes that optimization problem nonconvex. Designing efficient algorithms for nonconvex optimization problems is a research focus in optimization. In this dissertation, we propose three effective algorithms for nonconvex optimization models arising in computer vision and image processing. In addition, randomized methods are excellent schemes for improving computational performance on large-scale matrices, since they can reduce the order of computational complexity; we also improve a constraint condition used in building Stochastic Configuration Networks (SCNs).

Firstly, a photon-limited image can be represented as a pixel matrix limited by the relatively small number of collected photons. The image can also be seen as being contaminated by Poisson noise, because the total number of photons follows the Poisson distribution. By exploiting the inherent properties of the observation and applying a denoising method, an image can be significantly restored. A hybrid clustering and low-rank regularization-based model (HCLR) is proposed based on the essential features of patch clustering and the noise. An efficient Newton-type method is designed to optimize this biconvex problem. Experimental results demonstrate
that HCLR achieves competitive denoising performance compared with state-of-the-art Poisson denoising algorithms, especially at high noise levels.

Secondly, matrix completion is an important topic in data science and computer vision. In practice, matrix completion often encounters two intractable issues: the observations are often corrupted by various noises, and the distribution of the observations' locations is unknown. We concentrate on the matrix completion problem with Poisson noise under non-uniform sampling, formulated as a maximum likelihood estimate with a hybrid norm constraint (incorporating both the max-norm and the nuclear norm). Further, a proximal alternating linearized minimization (PALM) algorithm is proposed, and we show that the PALM algorithm satisfies the global convergence property. Finally, experimental results demonstrate that our algorithm is highly competitive in recovering real images.

Thirdly, minimizing an objective function consisting of the sum of two functions, which may be a data fidelity term or a regularization term, is a core problem in mathematical optimization. Such objective functions are usually divided into convex and nonconvex ones. We focus on optimizing a nonconvex objective function consisting of a smooth data fidelity term and a multiplicative regularization term, and propose an alternating proximal minimization algorithm. Based on the Kurdyka-Łojasiewicz (KL) property, we show that each bounded sequence generated by the algorithm converges to a critical point of the objective function. To illustrate our results, a specific example of matrix completion is provided, together with numerical results on synthetic data and real images.

Finally, Stochastic Configuration Networks (SCNs) can be incrementally constructed by using supervisory mechanisms for the selection of random weights and biases. Due to their ease of implementation, fast training, and limited need for human intervention, SCNs have become increasingly popular for large-scale data analytics. We aim to further study the existing constraint condition used in building SCNs. Two new inequality constraints on the random parameter assignment are presented, and theoretical guidance for the key parameter selection in these constraints is given. The newly proposed inequality constraints enlarge the probability that the constraint holds, which implies a quicker learning process. Experimental results with comparisons indicate that the proposed constraints can greatly reduce the search time for constructing the hidden nodes.
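To make the alternating scheme concrete, here is a minimal sketch of proximal alternating linearized minimization (PALM) on a factorized matrix completion model. It uses a simple Frobenius-norm penalty and a least-squares fidelity term as illustrative assumptions; the dissertation's actual max-norm/nuclear-norm, Poisson-likelihood formulation is not reproduced here. Each half-step is a gradient step on one factor with a step size from that block's Lipschitz constant, followed by the (here closed-form) proximal shrinkage.

```python
import numpy as np

def palm_matrix_completion(M, mask, rank=5, lam=0.1, iters=200, seed=0):
    """PALM sketch for min_{U,V} 0.5*||mask*(U V^T - M)||_F^2
                              + (lam/2)*(||U||_F^2 + ||V||_F^2).
    'mask' is a 0/1 matrix marking observed entries.  For each block,
    the fidelity term is quadratic, so a 1/L gradient step (L = spectral
    norm of the other factor's Gram matrix) is used, and the proximal
    map of the squared-norm penalty is a simple scaling."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank)) * 0.1
    V = rng.standard_normal((n, rank)) * 0.1
    for _ in range(iters):
        R = mask * (U @ V.T - M)                  # residual on observed entries
        L_u = np.linalg.norm(V.T @ V, 2) + 1e-8   # Lipschitz constant for U-block
        U = (U - (R @ V) / L_u) / (1 + lam / L_u) # gradient step + prox scaling
        R = mask * (U @ V.T - M)
        L_v = np.linalg.norm(U.T @ U, 2) + 1e-8
        V = (V - (R.T @ U) / L_v) / (1 + lam / L_v)
    return U, V
```

On a synthetic low-rank matrix with 60% of entries observed, the iterates drive the fit error on the observed entries down monotonically, which is the block-wise descent property underlying the convergence analysis summarized above.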
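The incremental SCN construction with a supervisory mechanism can likewise be sketched. The tanh activation, parameter ranges, and the generic supervisory inequality below are illustrative choices only; they show the mechanism that the dissertation's two new inequality constraints modify, not those constraints themselves.

```python
import numpy as np

def scn_fit(X, y, max_nodes=30, candidates=100, r=0.99, tol=1e-3, seed=0):
    """Minimal Stochastic Configuration Network sketch (regression,
    single output).  Hidden nodes are added one at a time: a random
    candidate (w, b) is accepted only if its activation h passes a
    supervisory inequality of the SCN type,
        <e, h>^2 / <h, h> >= (1 - r) * ||e||^2,
    which ensures the residual e strictly decreases.  Output weights
    are refit by least squares after each accepted node."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))        # hidden-layer output matrix
    beta = np.zeros(0)          # output weights
    e = y.copy()                # current residual
    for _ in range(max_nodes):
        best, best_val = None, 0.0
        for _ in range(candidates):
            w = rng.uniform(-3, 3, d)        # illustrative parameter range
            b = rng.uniform(-3, 3)
            h = np.tanh(X @ w + b)
            val = (e @ h) ** 2 / (h @ h)
            if val >= (1 - r) * (e @ e) and val > best_val:
                best, best_val = h, val
        if best is None:        # no candidate met the constraint
            break
        H = np.column_stack([H, best])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        e = y - H @ beta
        if e @ e < tol:
            break
    return H, beta
```

The inner loop is where the constraint matters in practice: a looser inequality (larger acceptance probability) means fewer rejected candidates per node, which is exactly the search-time reduction the proposed constraints target.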
Keywords/Search Tags:Matrix factorization, Low rank approximation, Alternating minimization, Proximal algorithm, Neural network