
Several Studies On Brain Image Analysis

Posted on: 2016-08-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: M Y Huang
Full Text: PDF
GTID: 1108330482456601
Subject: Biomedical engineering
Abstract/Summary:
Roentgen discovered X-rays in 1895 and won the first Nobel Prize in Physics in 1901. Hounsfield and Cormack invented computed tomography (CT) and shared the Nobel Prize in Physiology or Medicine in 1979, and Lauterbur and Mansfield invented magnetic resonance imaging (MRI) and shared the same prize in 2003. Over the past century, the development of medical imaging technology has produced ever more medical images, driving the growth of medical image processing and analysis.

Medical image segmentation is a complex and important step in medical image processing and analysis. Its aim is to isolate objects from their background and then extract distinguishing features from those objects, providing reliable information that assists doctors in making diagnoses. Many problems, such as intensity nonuniformity and individual anatomical differences, must be solved during segmentation because of the complexity of medical images and the characteristics of medical imaging technology. A segmentation method designed for natural images is therefore difficult to apply directly to medical images, and no single segmentation method works across all medical image segmentation tasks. In this study, we proposed two learning-based segmentation methods for two specific tasks:

(1) We proposed a Locally Linear Representation-based Classification (LLRC) method for brain extraction. Brain extraction is an important procedure in brain image analysis. Manual delineation of the brain is time consuming and suffers from inter-operator variation, so semi-automated and automated brain extraction methods are preferred.
Although numerous brain extraction methods have been presented, improving them remains challenging because brain MRI images exhibit complex characteristics, such as anatomical variability and intensity differences across sequences and scanners. Most existing brain extraction methods must be tuned for a particular type of study or population, so a reliable and robust method that works across a variety of brain morphologies and acquisition sequences is highly desirable in neuroimaging studies. To address this problem, we proposed the LLRC method for brain extraction. A novel classification framework is derived by introducing locally linear representation into the classical classification model. Under this framework, a common label fusion approach can be treated as a special case and thoroughly interpreted. Locality is important when calculating the fusion weights for LLRC; this consideration also led us to conclude that Local Anchor Embedding (LAE) is more suitable for solving the locally linear coefficients than other linear representation approaches. Moreover, LLRC provides a way to learn optimal classification scores for the training samples in the dictionary, yielding more accurate classification. The International Consortium for Brain Mapping and Alzheimer's Disease Neuroimaging Initiative (ADNI) databases were used to build a training dataset of 70 scans, and four publicly available datasets (IBSR1, IBSR2, LPBA40, and ADNI3T; 241 scans in total) were used for evaluation. Experimental results demonstrate that the proposed method outperforms four common brain extraction methods (BET, BSE, GCUT, and ROBEX).

(2) We proposed a Local Independent Projection-based Classification (LIPC) method for brain tumor segmentation. Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning.
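The locally linear weighting and label fusion at the core of LLRC can be sketched as follows. This is a minimal illustration under our own assumptions, not the dissertation's implementation: the LLE-style closed-form solve followed by a projection onto the probability simplex stands in for LAE's projected-gradient solver, and all function names are ours.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1} (Duchi et al. style)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def locally_linear_weights(x, anchors):
    """Approximate min ||x - w @ anchors||^2 s.t. w >= 0, sum(w) = 1.

    Closed-form LLE weights, normalized and projected onto the simplex;
    LAE itself uses an iterative projected-gradient solver instead.
    """
    Z = anchors - x                                  # center neighbors on x
    C = Z @ Z.T                                      # local Gram matrix (k x k)
    C = C + np.eye(len(anchors)) * 1e-6 * np.trace(C)  # regularize
    w = np.linalg.solve(C, np.ones(len(anchors)))
    w = w / w.sum()                                  # enforce sum-to-one
    return simplex_projection(w)                     # enforce nonnegativity

def fuse_labels(weights, anchor_labels, n_classes):
    """Label fusion: class score = total weight of anchors with that label."""
    scores = np.bincount(anchor_labels, weights=weights, minlength=n_classes)
    return scores.argmax(), scores
```

In a brain-extraction setting, each voxel's local patch would play the role of `x`, the training patches of its neighborhood the role of `anchors`, and the fused label would be brain/non-brain.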
Apart from being time consuming, manual brain tumor delineation is difficult and operator dependent, so a semi-automatic or automatic brain tumor segmentation approach is needed to provide acceptable performance. Although numerous brain tumor segmentation methods have been presented, improving them is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we proposed the LIPC method for brain tumor segmentation. We assumed that samples from different classes lie on different nonlinear submanifolds. Based on this assumption, test samples are independently projected onto the sample space of each class, and the reconstruction errors are used as the classification measure. In LIPC, the Local Anchor Embedding (LAE) method is used to calculate the fusion weights of the training samples, and a softmax model is used to characterize the relationship between the data distribution and the reconstruction error norm. LIPC was evaluated on both simulated data and real brain tumor images. The experimental results show that accounting for the distribution of the different classes further improves brain tumor segmentation accuracy. For the real data, 80 brain tumor MRI images with ground truth were used as training data and 40 images without ground truth as testing data; the segmentation results on the testing data were evaluated with an online evaluation tool and are comparable to other state-of-the-art methods.

In the medical field, digital images are produced every day and are used by radiologists to make diagnoses. However, searching a large image dataset for images with the same anatomic regions or similar-appearing lesions according to their visual contents is difficult.
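The LIPC idea described above — project a test sample independently onto each class's local sample space and score classes by reconstruction error — might be sketched as below. The clipped least-squares weights are a crude stand-in for the LAE weights the method actually uses, and the plain softmax over negative errors simplifies the dissertation's learned softmax model; the names and the bandwidth `sigma` are our assumptions.

```python
import numpy as np

def lipc_classify(x, class_dicts, k=3, sigma=1.0):
    """Local Independent Projection-based Classification (simplified sketch).

    For each class, reconstruct x from its k nearest training samples of
    that class, then turn the reconstruction errors into class scores via
    a softmax over negative errors.
    """
    errors = []
    for D in class_dicts:                       # D: (n_c, d) samples of one class
        d2 = ((D - x) ** 2).sum(axis=1)         # squared distances to x
        nn = D[np.argsort(d2)[:k]]              # k nearest samples in this class
        w, *_ = np.linalg.lstsq(nn.T, x, rcond=None)
        w = np.clip(w, 0.0, None)               # crude nonnegativity constraint
        if w.sum() > 0:
            w = w / w.sum()                     # sum-to-one, as in LAE weights
        errors.append(np.linalg.norm(x - w @ nn))
    errors = np.asarray(errors)
    scores = np.exp(-errors / sigma)
    scores = scores / scores.sum()              # softmax over negative errors
    return scores.argmax(), scores
```

For tumor segmentation each voxel's feature vector would be classified this way against per-class dictionaries (tumor, edema, healthy tissue, etc.).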
Content-based image retrieval (CBIR) is a possible and promising solution for indexing images with minimal human intervention. In general, feature extraction and the distance metric in the feature space are the two crucial factors in CBIR. Because of the variety and complexity of medical images, low-level visual features such as color (intensity), texture, and shape are not discriminative enough to describe high-level semantic concepts, so additional distinctive features are highly desirable. Moreover, if visual features are used directly to compute image relevance, the performance of a CBIR system may degrade because low-level features cannot always capture the semantic concepts in the images. A learned distance metric is therefore used to map visual features into a new space, reducing the semantic gap between visual features and semantic concepts. The contributions of the current study can be summarized as follows:

(1) A partition learning algorithm is proposed, based on the idea that the best partition leads to the largest difference among the bag-of-visual-words (BoVW) histograms of the sub-regions. This allows a region with variable appearance to be partitioned such that the image content within each sub-region is consistent; the combined histograms of the sub-regions thus carry more discriminative information. We present a novel objective function for the partition learning method, provide an optimization approach, and evaluate the method on CBIR of brain tumor T1-weighted CE-MR images. Experimental results demonstrate that the retrieval scores of the partition learning method are higher than those of the spatial pyramid method.

(2) A distance metric learning approach, called Rank Error-based Metric Learning (REML), is introduced to reduce the semantic gap between high-level semantic concepts and low-level visual features in the proposed CBIR system.
A novel objective function that integrates the rank error is proposed, and a stochastic gradient descent-based optimization strategy is presented to find its optimal solution. REML projects image features into a low-dimensional feature space in which the learned distance is expected to reflect differences between semantic concepts. Experimental results show that the proposed REML method outperforms other common distance metric methods, such as the Euclidean distance, CFML, LFDA, and MPP.

With the advent of modern imaging and genotyping techniques, many large biomedical studies have collected imaging, genetic, and associated (e.g., clinical) data from increasingly large cohorts in order to delineate the complex genetic and environmental contributors to many neuropsychiatric and neurodegenerative diseases. Understanding these factors is an important step toward urgently needed approaches to the prevention, diagnosis, and treatment of such complex diseases. Several major big-data challenges arise when testing genome-wide associations (more than 12 million known variants) with signals at millions of brain locations across thousands of subjects. To solve these problems, a Fast Voxelwise Genome-Wide Association Study (FVGWAS) framework is proposed to carry out voxelwise genome-wide association studies efficiently.

(1) A heteroscedastic linear model is used; it does not assume homogeneous variance across subjects and allows a large class of distributions for the imaging data. These features are desirable because between-subject and between-voxel variability in imaging measures can be substantial, and the distribution of imaging data often deviates from the Gaussian.

(2) An efficient global sure independence screening (GSIS) procedure based on global Wald-test statistics is developed.
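The GSIS screening step can be sketched as follows: compute a marginal Wald statistic for each (SNP, voxel) pair, sum over voxels into a global statistic per SNP, and keep only the top-ranked SNPs for refined analysis. The simple per-SNP marginal regression below is our illustration of the idea, not the dissertation's heteroscedastic estimator.

```python
import numpy as np

def global_wald_screen(Y, G, n_keep):
    """Global sure independence screening (illustrative sketch).

    Y: (n_subjects, n_voxels) imaging measures.
    G: (n_subjects, n_snps) genotype codes.
    For each SNP, a marginal per-voxel Wald statistic beta^2 / var(beta)
    is summed over voxels into a global statistic, and the indices of the
    n_keep top-ranked SNPs are returned.
    """
    Yc = Y - Y.mean(axis=0)
    Gc = G - G.mean(axis=0)
    gss = (Gc ** 2).sum(axis=0)                      # per-SNP sum of squares
    beta = Gc.T @ Yc / gss[:, None]                  # (n_snps, n_voxels) slopes
    # residual variance per (SNP, voxel): var(Y) - beta^2 * var(G)
    resid_var = (Yc ** 2).mean(axis=0) - beta ** 2 * gss[:, None] / len(Y)
    wald = beta ** 2 * gss[:, None] / np.maximum(resid_var, 1e-12)
    global_stat = wald.sum(axis=1)                   # sum over voxels
    return np.argsort(global_stat)[::-1][:n_keep]
```

Screening with a vectorized statistic like this is what shrinks the candidate set from all Nc SNPs to a small No before any expensive per-candidate inference is run.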
Under the GSIS procedure, the size of the search space is dramatically reduced from Nc·Nv to No·Nv, where No << Nc.

(3) Wild bootstrap methods are used to test the hypotheses of interest on the imaging and genetic data. The wild bootstrap does not involve repeated analyses of simulated datasets and is therefore computationally cheap; moreover, it requires neither the complete exchangeability assumed by standard permutation methods nor the strong assumptions of random field theory.

Simulation studies show that FVGWAS efficiently searches for sparse signals in an extremely large search space while controlling the family-wise error rate. Finally, we successfully applied FVGWAS to a large-scale imaging genetic analysis of ADNI data with 708 subjects, 193,275 voxels in RAVEN maps, and 501,584 SNPs; the total processing time was 203,645 seconds on a single CPU. FVGWAS may become a valuable statistical toolbox for large-scale imaging genetic analysis as the field advances toward ultra-high-resolution imaging and whole-genome sequencing.
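The wild bootstrap idea — resample by flipping the signs of residuals rather than simulating or permuting full datasets — can be sketched for a single voxel/SNP pair as follows. The correlation statistic and the mean-only null model are our simplifications of the framework's actual test statistics.

```python
import numpy as np

def wild_bootstrap_pvalue(y, x, n_boot=999, seed=0):
    """Wild-bootstrap test of H0: no association between x and y (sketch).

    Residuals under the null are multiplied by fresh Rademacher signs
    (+1/-1 per subject in each resample), which keeps each subject's
    error magnitude intact and so tolerates heteroscedastic errors.
    """
    rng = np.random.default_rng(seed)
    xc = x - x.mean()

    def stat(yy):                                # |correlation| test statistic
        yyc = yy - yy.mean()
        return abs(xc @ yyc) / np.sqrt((xc ** 2).sum() * (yyc ** 2).sum())

    t_obs = stat(y)
    resid = y - y.mean()                         # residuals under the null model
    exceed = 0
    for _ in range(n_boot):
        signs = rng.choice([-1.0, 1.0], size=len(y))
        if stat(y.mean() + signs * resid) >= t_obs:
            exceed += 1
    return (1 + exceed) / (1 + n_boot)           # add-one p-value estimate
```

Because each resample only reweights the stored residuals, the per-test cost is a handful of vector operations, which is what makes the approach tractable at the No·Nv scale left after screening.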
Keywords/Search Tags: Image segmentation, Classification, Image retrieval, Genome-wide association study