
Research on Compressed Sensing Algorithms for CT Reconstruction

Posted on: 2017-05-26
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C Zhang
Full Text: PDF
GTID: 1108330482491312
Subject: Optical Engineering
Abstract/Summary:
In order to protect patients and the environment from excessive X-ray radiation, computed tomography (CT) scanning systems should be designed to reduce the X-ray dose used in medical diagnosis. Lowering the sampling rate by shortening the scanning time is a direct and efficient way to do so. However, the existing analytic algorithms are derived from a continuous imaging model and therefore require densely sampled projections that satisfy the Shannon/Nyquist sampling theorem. Applied to the under-sampled reconstruction problem, they produce images ruined by aliasing artifacts that cannot be used for medical diagnosis.

The recently proposed compressed sensing (CS) theory, which extracts the sparsity of the image to be reconstructed and uses it as prior knowledge in iterative algorithms for the under-sampled reconstruction problem, markedly improves the quality of the reconstructed image compared with the traditional filtered back-projection (FBP) algorithm. This dissertation therefore studies fast and stable CS-based reconstruction algorithms that account for the characteristics of the CT scanning system. The goal is to reconstruct CT images efficiently and to move CS-based CT reconstruction from theoretical research toward practical application.

To date, most CS algorithms use a total variation (TV) or dictionary learning (DL) regularization term as the sparsity constraint. TV-based algorithms are relatively mature, whereas DL-based reconstruction algorithms still face open problems: how to determine the regularization parameter, how to preserve low-contrast detail in the reconstructed image, and how to improve the algorithms to permit further dose reduction. This dissertation addresses these problems with the following models and methods:

(1) A selection model for the regularization parameter is established to address the problem of determining this parameter, whose proper value changes with the scanning protocol, the noise level, and other factors. The model first identifies an intermediate quantity that can be computed from the scan data and reflects its characteristics; a model function relating the regularization parameter to this intermediate quantity is then obtained by function fitting. The model removes the time-consuming repeated test reconstructions otherwise required to tune the parameter, improves reconstruction efficiency, and lays the groundwork for the subsequent research on dictionary learning algorithms. A sketch of this calibration idea is given below.
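CS-based reconstruction is commonly posed as min_x ||A x - b||_2^2 + lambda * R(x), where A is the system matrix, b the measured projections, and R a sparsity-promoting regularizer; the parameter selected by the model in (1) is lambda. The abstract does not spell out the intermediate quantity or the fitted model function, so the following is only a minimal sketch under assumptions: the intermediate quantity is taken to be a robust noise estimate of the projections, the model function is a power law fitted in log space, and all names are hypothetical.

```python
import numpy as np

def intermediate_quantity(projections):
    """Hypothetical stand-in for the thesis's intermediate quantity:
    a robust noise-level estimate computed directly from the scan data."""
    diff = np.diff(projections, axis=-1)          # high-frequency residual
    return np.median(np.abs(diff)) / 0.6745       # MAD-based sigma proxy

def fit_model(quantities, best_lambdas):
    """Fit a power-law model lambda = a * q**b by least squares in log space."""
    b, log_a = np.polyfit(np.log(quantities), np.log(best_lambdas), 1)
    return np.exp(log_a), b

def predict_lambda(a, b, projections):
    """Predict the regularization parameter for a new scan, replacing
    repeated trial reconstructions with a single model evaluation."""
    return a * intermediate_quantity(projections) ** b

# Offline calibration: in practice best_lambdas would be hand-tuned by
# repeated test reconstructions; here they follow a synthetic curve.
rng = np.random.default_rng(0)
scans = [rng.normal(size=(180, 256)) * s for s in (0.5, 1.0, 2.0, 4.0)]
qs = np.array([intermediate_quantity(p) for p in scans])
a, b = fit_model(qs, 0.1 * qs ** 1.5)
print(predict_lambda(a, b, rng.normal(size=(180, 256)) * 1.5))
```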
(2) To preserve more low-contrast structures and detail in the reconstructed image, a weighted dictionary learning algorithm is proposed. Algorithms with a DL regularization term extract overlapping image patches of equal size from the image; these patches can be sparsely represented over an overcomplete dictionary trained for the purpose, and the sparse representations serve as the regularization constraint, steering the result into a reasonable feasible region. The weighted dictionary learning algorithm introduces a weight function that depends on the amount of detail in each patch, so that detail is better preserved in the reconstructed image (a sketch of this patch-weighting idea follows item (3) below). Experiments show that, compared with the original dictionary learning algorithm, the proposed algorithm yields a smaller normalized mean absolute deviation (NMAD), better image quality, and higher resolution of image details, which is more beneficial for clinical diagnosis.

(3) To cope with scan data acquired at even lower sampling rates, a dictionary learning algorithm based on an L1-norm regularization term is proposed. The algorithm replaces the L2-norm regularization term with an L1-norm one, whose stronger sparsity accommodates the lower sampling rate; a toy comparison of the two is sketched at the end of this summary. Experiments show that the proposed algorithm is more accurate than the compared algorithms, especially when the sampling rate is reduced further: the L1-norm term improves the spatial resolution of the image and reduces the deviation between the reconstructed image and the ground truth.
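The abstract does not give the weight function used in (2), so the sketch below only illustrates the idea under assumptions: local gradient energy serves as a hypothetical measure of the amount of detail in a patch, and the resulting weights scale each patch's contribution to the DL regularization term.

```python
import numpy as np

def extract_patches(image, size=8, stride=4):
    """Extract overlapping square patches as rows of a matrix."""
    H, W = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(0, H - size + 1, stride)
                     for j in range(0, W - size + 1, stride)])

def detail_weights(patches, size=8):
    """Hypothetical weight function: more local gradient energy gives a
    larger weight, so detailed patches dominate the regularization term."""
    imgs = patches.reshape(-1, size, size)
    energy = (np.diff(imgs, axis=2) ** 2).sum(axis=(1, 2)) \
           + (np.diff(imgs, axis=1) ** 2).sum(axis=(1, 2))
    return 1.0 + energy / (energy.mean() + 1e-12)

def weighted_dl_term(patches, D, codes, weights, mu):
    """Weighted DL regularizer: sum_j w_j * (||p_j - D a_j||^2 + mu*||a_j||_1)."""
    residual = ((patches - codes @ D.T) ** 2).sum(axis=1)
    return np.sum(weights * (residual + mu * np.abs(codes).sum(axis=1)))

rng = np.random.default_rng(2)
P = extract_patches(rng.normal(size=(64, 64)))
D = np.linalg.qr(rng.normal(size=(64, 64)))[0]   # toy orthonormal dictionary
print(weighted_dl_term(P, D, P @ D, detail_weights(P), mu=0.1))
```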
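For (3), the claim that an L1-norm term enforces more sparsity than an L2-norm one can be seen in a toy sparse-coding comparison: the L1 problem is solved by the standard ISTA soft-thresholding iteration, which produces exact zeros, while the L2 (ridge) problem has a closed-form solution that only shrinks coefficients. The dictionary and patch here are synthetic; this is not the thesis's algorithm, only the standard building block it relies on.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: sets small entries exactly to zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_l1(p, D, lam, n_iter=200):
    """ISTA for min_a 0.5*||p - D a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - step * (D.T @ (D @ a - p)), step * lam)
    return a

def sparse_code_l2(p, D, lam):
    """Closed-form ridge solution of min_a ||p - D a||^2 + lam*||a||_2^2."""
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ p)

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
p = D[:, :3] @ np.array([1.0, -0.5, 0.25])       # a truly 3-sparse patch
print(np.count_nonzero(sparse_code_l1(p, D, 0.05)),   # typically very few
      np.count_nonzero(sparse_code_l2(p, D, 0.05)))   # dense (128 nonzeros)
```

Soft thresholding is what lets the L1 constraint zero out coefficients outright, which is the extra sparsity the abstract credits with tolerating further reductions of the sampling rate.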
Keywords/Search Tags: Computed Tomography, Compressed Sensing, Image Reconstruction, Regularization Parameter, Sparsity Constraint, Optimization Algorithm