
Research On Sparse Reconstruction For Kronecker Compressive Sensing

Posted on: 2019-06-27    Degree: Doctor    Type: Dissertation
Country: China    Candidate: R Q Zhao    Full Text: PDF
GTID: 1368330590972865    Subject: Control Science and Engineering
Abstract/Summary:
Compressive sensing (CS) aims to compressively sample and reconstruct signals based on sparse representation, so as to reduce the cost of signal transmission and storage, and it has broad potential in signal processing. Traditional CS mainly addresses the one-dimensional (1D) case, where a single global sensing matrix is used to compress and reconstruct a 1D signal. In many applications, however, the original signals are multi-dimensional, i.e., tensors, such as images and videos. To reduce the sampling burden, multi-dimensional CS employs a separate sensing matrix for each dimension of the original signal. This gives the corresponding global sensing matrix a Kronecker structure, which is why the approach is commonly referred to as Kronecker CS (KCS). Given the sensing matrices and the measurements, the reconstruction accuracy of KCS is determined mainly by two aspects: the sparse representation and the reconstruction algorithm. Under the KCS framework, a proper dictionary must be selected for each dimension so that the compressibility of the data is exploited in all dimensions; the sampling process and the sparse representation can then be expressed as a model based on Tucker decomposition (TD), written out below. For better sparse representation, TD-based dictionary learning from multi-dimensional samples is desirable. On the other hand, reconstruction algorithms for KCS face the dual challenge of reconstruction accuracy and computational complexity. This dissertation focuses on dictionary learning and sparse reconstruction under the KCS framework, aiming to improve reconstruction accuracy and speed. The contributions are summarized as follows:

We propose a multi-dimensional dictionary learning algorithm that uses multi-dimensional samples directly, named the Tucker-decomposition-based method of optimal directions (TdMOD). TdMOD uses multi-dimensional samples to train all the dictionaries jointly, rather than training each dictionary from the data of its own dimension, so the dictionaries can jointly represent tensor structures. Experiments on real multi-dimensional data demonstrate that the dictionaries trained by TdMOD provide more accurate sparse representations of multi-dimensional signals with less training time than dictionaries trained by traditional methods.

We propose a TD-based online dictionary learning (TODL) algorithm. Existing tensor dictionary learning algorithms are limited by the requirement that all training samples be available at once, whereas in practice the samples may arrive dynamically. To address this problem, we develop a TD-based strategy that provides a warm start for the dictionary update: only the newly arrived sample is needed to retrain the dictionaries, while the information of previously input samples is preserved in information-storing variables.

We propose the double-stage tensor matching pursuit (DsTMP) algorithm, composed of a sketching stage and a pinpointing stage. The sketching stage finds a near-optimal greedy solution for the pinpointing stage based on cross validation, and the precise search is carried out in the pinpointing stage. Both theoretical analysis and simulation experiments show that, compared with conventional tensor greedy algorithms, the proposed algorithm significantly reduces the required number of iterations and improves reconstruction speed without loss of reconstruction accuracy.
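For concreteness, the KCS sampling model and the TD-based sparse representation that the contributions above and below build on can be written compactly as follows (standard notation, not fixed in this abstract: \Phi_n is the sensing matrix of mode n, D_n the dictionary of mode n, and \mathcal{S} the sparse core tensor):

\[
\mathcal{Y} = \mathcal{X} \times_1 \Phi_1 \times_2 \Phi_2 \cdots \times_N \Phi_N
\quad\Longleftrightarrow\quad
\operatorname{vec}(\mathcal{Y}) = (\Phi_N \otimes \cdots \otimes \Phi_1)\,\operatorname{vec}(\mathcal{X}),
\]
\[
\mathcal{X} \approx \mathcal{S} \times_1 D_1 \times_2 D_2 \cdots \times_N D_N,
\]

so the global sensing matrix is the Kronecker product of the per-dimension sensing matrices (hence "Kronecker" CS), and the sparse coefficients to be recovered sit in the Tucker core \mathcal{S}.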
We propose a novel Bayesian reconstruction method based on the Laplace prior that exploits multi-dimensional block sparsity rather than vector-based sparsity. Laplace prior distributions are placed on the sparse coefficients of each dimension, and their coupling is consistent with the multi-dimensional block-sparsity model. Based on this model, we develop a tensor-based Bayesian reconstruction algorithm that decouples the hyperparameters of each dimension with low complexity. The proposed method provides more accurate reconstruction than existing Bayesian methods at a satisfactory speed.

Finally, we combine the proposed online dictionary learning method and reconstruction algorithms into a KCS approach with adaptive sparse representation, named adaptive KCS (AKCS). In AKCS, the target signals are divided into compressively sampled signals and holo-sampled (fully sampled) signals, and the holo-sampled signals are used to update the dictionaries. AKCS achieves dynamic dictionary updates without prior information about the target signals, and the quality of the sparse representation improves as the number of previously input signals increases.
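As an illustrative aside, not drawn from the thesis itself, the Kronecker structure referred to throughout can be checked numerically: sampling each mode of a tensor with its own small sensing matrix is equivalent to applying the Kronecker product of those matrices to the vectorized signal. A minimal NumPy sketch under arbitrary example dimensions (all variable names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    I, J, K = 4, 5, 6            # dimensions of the original 3-D signal
    m1, m2, m3 = 2, 3, 3         # per-dimension numbers of measurements

    X = rng.standard_normal((I, J, K))      # original multi-dimensional signal (tensor)
    Phi1 = rng.standard_normal((m1, I))     # sensing matrix for mode 1
    Phi2 = rng.standard_normal((m2, J))     # sensing matrix for mode 2
    Phi3 = rng.standard_normal((m3, K))     # sensing matrix for mode 3

    # Multi-dimensional sampling: Y = X x_1 Phi1 x_2 Phi2 x_3 Phi3 (mode-n products)
    Y = np.einsum('ia,jb,kc,abc->ijk', Phi1, Phi2, Phi3, X)

    # Equivalent global sensing matrix: the Kronecker product of the per-mode matrices
    Phi_global = np.kron(Phi3, np.kron(Phi2, Phi1))
    y_vec = Phi_global @ X.flatten(order='F')   # column-major vectorization

    print(np.allclose(Y.flatten(order='F'), y_vec))   # True

This equivalence is what lets KCS store and apply several small per-dimension sensing matrices instead of one large global matrix, which is the source of the reduced sampling and storage burden discussed above.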
Keywords/Search Tags: compressive sensing, sparse representation, dictionary learning, Tucker decomposition, Kronecker product, sparse reconstruction