
Multi-scale Tensor Approximations And Applications

Posted on: 2013-10-16 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: J R Shi | Full Text: PDF
GTID: 1228330395455450 | Subject: Intelligent information processing
Abstract/Summary:
With the rapid development of sensor and storage technology, high-dimensional data with multi-linear structure have become ubiquitous across science and engineering. Multi-scale tensor approximations are powerful mathematical tools for analyzing and processing such tensor data. Over the last ten years, interest in multi-scale tensor approximations has spread rapidly from its origins in psychometrics and chemometrics to fields such as signal and image processing, computer vision, pattern recognition, data mining, and machine learning. For tensor data, multi-scale approximations are typically employed to reduce dimensionality, extract features, and remove noise. Recent years have produced rich results on algorithms and applications, but several problems still require further study, including effective algorithms for multi-scale nonnegative tensor approximations, tensor completion, and tensor-based metric learning. This dissertation studies multi-scale tensor approximation algorithms and their applications. The main contributions are summarized below, with brief illustrative code sketches following the relevant summaries.

1. We establish a model of multi-level nonnegative matrix factorization (MNMF) and construct the corresponding algorithm. Each sample must be represented as a vector before nonnegative matrix factorization (NMF) can be performed, so for nonnegative tensor data NMF ignores the multi-linear structure of each basis. MNMF instead reshapes the basis vectors obtained by NMF into basis tensors, expresses each basis tensor as a set of sub-tensors along some mode, and performs NMF on each sub-tensor set, repeating this process until the bases no longer have multi-linear structure. As a higher-order generalization of NMF, MNMF achieves better performance in data compression, sparse feature representation, and denoising. A top-down algorithm is proposed and then improved by a multiplicative iterative algorithm; the multi-level idea is sketched below.
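The dissertation's exact updates are not reproduced here; the following is a minimal sketch of the multi-level idea, assuming standard Lee-Seung multiplicative-update NMF, with illustrative dimensions (100 samples of an 8x8x4 tensor) and a recursion depth of two.

```python
# Minimal MNMF sketch; dimensions, depth, and helper names are illustrative.
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Factor V ~ W @ H with W, H >= 0 via multiplicative updates."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Level 1: ordinary NMF on vectorized samples (columns of V).
V = np.random.default_rng(1).random((8 * 8 * 4, 100))  # 100 toy samples
W, H = nmf(V, r=10)

# Level 2: reshape each basis vector into a basis tensor, collect its
# sub-tensors along mode 3 as columns of a new matrix, and factorize again;
# the recursion stops once the bases are no longer multi-linear.
for k in range(W.shape[1]):
    base = W[:, k].reshape(8, 8, 4)       # basis tensor
    subs = base.reshape(8 * 8, 4)         # the 4 vectorized mode-3 sub-tensors
    W2, H2 = nmf(subs, r=2)               # NMF on the sub-tensor set
```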
2. NMF and nonnegative tensor factorization are applied to SAR image classification. We first derive a cost function for NMF by analyzing the noise mechanism of SAR images and propose a multiplicative update algorithm for it. The NMF model is then extended to nonnegative Tucker and nonnegative PARAFAC models, and the corresponding algorithms are presented. Experimental results on SAR image classification demonstrate that NMF not only achieves better classification performance than traditional subspace methods but also obtains better local representations; moreover, nonnegative tensor factorization yields better classification performance, sparser local features, and a better compression ratio than NMF.

3. Based on downsampling, we propose a fast algorithm for multi-scale nonnegative tensor approximations (MNTA). Existing MNTA algorithms mainly use alternating methods with multiplicative update rules, which are simple to implement but usually converge very slowly, and high-dimensional tensors pose great computational and storage challenges. A tensor can be regarded as the discretization of a multivariate continuous or piecewise continuous function. Under this assumption, a high-dimensional tensor is first downsampled, MNTA algorithms are performed on the downsampled tensor, and interpolation is then used to lift the multi-scale features of the downsampled tensor to those of the original tensor. A bound on the approximation error of this method is provided, and the choice of downsampling factor and block-based MNTA are discussed further. The proposed downsampling method greatly reduces the dimensionality of high-dimensional tensors, so that less computation and storage are required.

4. Algorithms for nonnegative matrix completion (NMC) and nonnegative tensor completion (NTC) are proposed. For a nonnegative matrix with missing entries, NMF is used to complete the matrix: the NMC problem is transformed into alternately solving two nonnegative least squares (NNLS) problems, and for each NNLS problem the exact step size along the search direction is chosen by a procedure with low computational complexity. The NMC algorithm is then extended to nonnegative tensors, where a nonnegative Tucker approximation completes a tensor with missing entries by transforming the NTC problem into a series of special NMC problems. Experimental results demonstrate that the proposed NMC algorithm outperforms existing algorithms, and that for nonnegative data with multi-linear structure NTC recovers missing entries better than NMC.

5. We construct an algorithm for the tensor completion problem and apply it to face recognition. In many applications, the tensors to be analyzed are low rank or approximately low rank; such tensors have few degrees of freedom and can therefore be recovered from few entries. Based on low-rank Tucker approximations, we design a tensor completion algorithm and prove its convergence. Experimental results show the superiority of the proposed algorithm: it recovers low-rank tensors perfectly and effectively removes Gaussian noise from corrupted low-rank tensors. Experiments on face recognition further show the effectiveness and feasibility of the proposed method.

6. The Mahalanobis distance is extended to the tensor case, and we investigate how to learn a distance metric between high-dimensional tensor samples. A new distance metric, the tensor-based Mahalanobis distance, is proposed and learned by solving a model of tensor-based maximally collapsing metric learning. Compared with traditional metric learning methods, the proposed technique has far fewer parameters, which alleviates the curse of dimensionality and overfitting to some extent. The learned Mahalanobis distance matrices are also employed to perform dimensionality reduction. Illustrative sketches for items 2 through 6 follow.
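For item 2, the dissertation derives its NMF cost from the SAR noise mechanism; that cost is not reproduced here. The following is a generic stand-in: nonnegative PARAFAC of a 3-way array under the usual Euclidean cost with multiplicative updates. All names, sizes, and the rank are illustrative assumptions.

```python
# Generic nonnegative PARAFAC with Euclidean cost and multiplicative updates.
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J x R), (K x R) -> (J*K x R)."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def unfold(X, mode):
    """Mode-n unfolding of a 3-way array."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def nn_parafac(X, r, iters=200, eps=1e-9):
    """Nonnegative PARAFAC factors A[0], A[1], A[2] of a 3-way array X."""
    rng = np.random.default_rng(0)
    A = [rng.random((X.shape[n], r)) + eps for n in range(3)]
    for _ in range(iters):
        for n in range(3):
            B, C = [A[m] for m in range(3) if m != n]
            Z = khatri_rao(B, C)              # matches the unfolding order
            A[n] *= (unfold(X, n) @ Z) / (A[n] @ (Z.T @ Z) + eps)
    return A

A = nn_parafac(np.random.default_rng(1).random((6, 7, 8)), r=3)
```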
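For item 3, a minimal sketch of the downsample-factorize-interpolate pipeline, assuming a factor-2 decimation per mode and linear interpolation of the factor matrices; the dissertation's error bound and its rule for choosing the factor are not reproduced.

```python
# Downsample, factorize the small tensor, then interpolate the factors back.
import numpy as np

def upsample_factor(A_small, n_full):
    """Linearly interpolate the rows of a factor matrix to full mode size."""
    x_small = np.linspace(0.0, 1.0, A_small.shape[0])
    x_full = np.linspace(0.0, 1.0, n_full)
    return np.column_stack([np.interp(x_full, x_small, A_small[:, j])
                            for j in range(A_small.shape[1])])

X = np.random.default_rng(0).random((100, 100, 100))
X_small = X[::2, ::2, ::2]        # decimate each mode; run MNTA on this
# ... e.g. A_small = nn_parafac(X_small, r)[0] from the previous sketch;
# a random stand-in keeps this snippet self-contained:
A_small = np.random.default_rng(1).random((50, 5))
A_full = upsample_factor(A_small, 100)   # lift features to the original size
```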
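For item 4, a minimal sketch of alternating masked least squares with an exact step size along the negative-gradient search direction, followed by projection onto the nonnegative orthant. This is one plausible reading of the scheme, not necessarily the dissertation's exact procedure.

```python
# Alternating exact-step updates for nonnegative matrix completion.
import numpy as np

def nnls_step(V, M, W, H, eps=1e-12):
    """One exact-step gradient update of W for 0.5*||M*(V - W H)||_F^2."""
    R = M * (V - W @ H)                    # residual on observed entries
    D = R @ H.T                            # negative gradient w.r.t. W
    P = M * (D @ H)
    alpha = (R * P).sum() / ((P * P).sum() + eps)   # exact step along D
    return np.maximum(W + alpha * D, 0.0)  # project onto the nonneg. orthant

def nmc(V, M, r, iters=500):
    """Complete a nonnegative matrix observed on mask M (1 = observed)."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r))
    H = rng.random((r, V.shape[1]))
    for _ in range(iters):
        W = nnls_step(V, M, W, H)
        H = nnls_step(V.T, M.T, H.T, W.T).T   # same update, transposed
    return W, H

rng = np.random.default_rng(1)
V = rng.random((50, 5)) @ rng.random((5, 40))      # nonnegative, rank 5
M = (rng.random(V.shape) < 0.6).astype(float)      # observe 60% of entries
W, H = nmc(V * M, M, r=5)                          # W @ H completes V
```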
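For item 5, a simple low-rank Tucker imputation scheme: alternately project the current estimate onto its leading Tucker subspaces via truncated HOSVD and re-impose the observed entries. The dissertation's algorithm and convergence proof may differ; the ranks and iteration count here are illustrative.

```python
# Tucker imputation: low-rank projection alternated with data re-imposition.
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(Xn, mode, shape):
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(Xn.reshape([shape[mode]] + rest), 0, mode)

def mode_mult(X, U, mode):
    """n-mode product X x_mode U."""
    shape = tuple(U.shape[0] if i == mode else s for i, s in enumerate(X.shape))
    return fold(U @ unfold(X, mode), mode, shape)

def hosvd_truncate(X, ranks):
    """Project X onto its leading Tucker subspaces (truncated HOSVD)."""
    Us = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    core = X
    for n, U in enumerate(Us):
        core = mode_mult(core, U.T, n)
    for n, U in enumerate(Us):
        core = mode_mult(core, U, n)
    return core

def tucker_complete(X_obs, M, ranks, iters=100):
    """Keep observed entries, impute the rest from the low-rank projection."""
    X = X_obs.copy()
    for _ in range(iters):
        X = M * X_obs + (1 - M) * hosvd_truncate(X, ranks)
    return X

rng = np.random.default_rng(0)
T = rng.random((3, 3, 3))
for n in range(3):
    T = mode_mult(T, rng.random((15, 3)), n)       # rank-(3,3,3) ground truth
M = (rng.random(T.shape) < 0.6).astype(float)
X_hat = tucker_complete(T * M, M, ranks=(3, 3, 3))
```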
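For item 6, a sketch of a Kronecker-structured tensor Mahalanobis distance in which one small matrix per mode replaces a dense metric over vectorized samples; the per-mode form and names are assumptions, and the maximally collapsing learning step is omitted.

```python
# Tensor Mahalanobis distance with one small matrix per mode (assumed form).
import numpy as np

def multi_mode_transform(X, Ls):
    """Apply one matrix per mode: X x_1 L1 x_2 L2 x_3 L3."""
    Y = X
    for n, L in enumerate(Ls):
        Y = np.moveaxis(np.tensordot(L, Y, axes=(1, n)), 0, n)
    return Y

def tensor_mahalanobis_sq(X, Y, Ls):
    """Squared distance ||(X - Y) x_1 L1 x_2 L2 x_3 L3||_F^2."""
    D = multi_mode_transform(X - Y, Ls)
    return float((D * D).sum())

# For 10x10x10 samples the implicit metric on vec(X) is the Kronecker
# product of the small matrices Ln.T @ Ln: 3 * 10 * 10 learned parameters
# instead of a dense 1000 x 1000 Mahalanobis matrix.
rng = np.random.default_rng(0)
X, Y = rng.random((10, 10, 10)), rng.random((10, 10, 10))
Ls = [rng.random((10, 10)) for _ in range(3)]
d2 = tensor_mahalanobis_sq(X, Y, Ls)
```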
Keywords/Search Tags:Tensor Approximation, Multi-scale, Nonnegative Tensor Approximation, Nonnegative Matrix Factorization, Multi-level Nonnegative Matrix Factorization, Downsampling, Nonnegative Matrix Completion, Nonnegative Tensor Completion, Tensor Completion