
Analysis On Left Ventricle From Paired Apical Echocardiographic Image Sequence By Using Deep Learning

Posted on: 2021-09-11    Degree: Doctor    Type: Dissertation
Country: China    Candidate: R J Ge    Full Text: PDF
GTID: 1484306557993309    Subject: Computer Science and Technology
Abstract/Summary:
Two-dimensional echocardiography (echo) is the most frequently used medical imaging modality for evaluating the heart, owing to its real-time capability, non-invasiveness, flexibility and low cost. Quantification and segmentation of the left ventricle (LV) in paired apical echo (apical two-chamber and four-chamber views) are important analyses for cardiac assessment. Paired apical echo sequences capture cardiac activity from multiple views and in multiple dimensions. On the one hand, they allow quantitative estimation of multi-dimensional LV indices over the cardiac cycle, including long-axis dimension (LAD), short-axis dimension (SAD), area and volume. On the other hand, segmentation of the LV delineates its shape in each view, so that abnormalities of the anatomical structure can be observed from multiple views. Both tasks currently require clinical experts to manually zoom into the image to determine the LV region for contouring, and to further locate biological landmarks such as the apex and the mitral valve plane for effective measurement. However, because of the inherent shortcomings of echo imaging, namely low signal-to-noise ratio and unclear edges, segmentation and quantification in clinical practice are subjective, poorly reproducible and labor-intensive. Automatic quantification and segmentation of the LV from paired apical echo image sequences is therefore an urgent problem, and its solution is of great significance for accurate and efficient clinical evaluation of LV function and morphology.

This dissertation studies methods for direct quantification and segmentation of the LV in the apical two-chamber and four-chamber views. The main research contents include developing deep neural networks for automatic and accurate direct quantification of the LV, for multi-dimensional quantification of paired apical LV sequences, and for simultaneous segmentation and quantification of paired apical LV sequences. The main work and contributions are as follows:

(1) A Global-Local LV Net is proposed for direct quantification of the left ventricle from echo. The Global-Local LV Net (GL-LVNet) achieves end-to-end direct quantification of LAD, SAD, area and volume of the LV from an apical four-chamber echo image. GL-LVNet consists of three parts: the global echo module (GEM) locates the LV over the whole image; the LV-Sample layer (LV-SL) automatically crops the LV region while reconstructing and transferring it at multiple resolutions; the local LV module (LLVM) directly regresses the four types of indices within the LV region by ensembling multi-scale spatial and structural information. Through this cascade, the three interrelated tasks constrain and promote one another, passing feedback from global positioning to LV interpretation. GL-LVNet effectively eliminates the complex interference of other structures in the echo image, automatically focuses on the target LV region, and realizes direct quantification of LAD, SAD, area and volume.
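The abstract does not give implementation details for this cascade; the following is only a minimal sketch of the locate-crop-regress idea, assuming PyTorch, hypothetical layer sizes, and a simple box-style localizer rather than the actual GEM/LV-SL/LLVM design. The key point it illustrates is that the crop can be made differentiable (via an affine sampling grid), so the index-regression error feeds back into localization, as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalRegressor(nn.Module):
    """Sketch: global localization -> differentiable crop -> index regression."""
    def __init__(self, out_indices=4, crop_size=64):
        super().__init__()
        self.crop_size = crop_size
        # Global branch: predicts a normalized LV box (cx, cy, w, h) in [-1, 1].
        self.global_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4), nn.Tanh())
        # Local branch: regresses the LV indices from the cropped patch.
        self.local_net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_indices))

    def forward(self, x):                       # x: (B, 1, H, W) echo frame
        box = self.global_net(x)                # (B, 4): cx, cy, w, h
        cx, cy, w, h = box.unbind(dim=1)
        # Differentiable crop: an affine grid centered on the predicted box,
        # so gradients from the index regression also update the localizer.
        theta = torch.zeros(x.size(0), 2, 3, device=x.device)
        theta[:, 0, 0] = w.abs() + 1e-3         # horizontal scale
        theta[:, 1, 1] = h.abs() + 1e-3         # vertical scale
        theta[:, 0, 2] = cx                     # horizontal shift
        theta[:, 1, 2] = cy                     # vertical shift
        grid = F.affine_grid(theta, (x.size(0), 1, self.crop_size, self.crop_size),
                             align_corners=False)
        patch = F.grid_sample(x, grid, align_corners=False)
        return self.local_net(patch), box       # indices (B, 4) and predicted box
```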
(2) A paired-views LV network is proposed for direct quantification of multi-dimensional LV indices in paired apical echocardiographic sequences. The Paired-views LV network (PV-LVNet) achieves end-to-end, synchronous direct quantification of seven different indices across multiple views (apical two-chamber, four-chamber, and joint apical views) and multiple dimensions (one-, two- and three-dimensional). It is built on the newly designed Residual Circulation Network (Res-circle Net) for analyzing patient characteristics and dynamic changes in echo image sequences. Res-circle Net embeds both the holistic characteristics of the subject and the inter-frame changes of the sequence, combining a subject-level base shared among frames with interrelated per-frame residuals, which promotes accurate and consistent localization and quantification of the LV throughout the sequence. PV-LVNet integrates three interdependent parts, the LV location module (LVLM), the LV-Crop layer (LV-CL) and the LV indices module (LVIM), for localization, cropping and index regression of the LV in paired apical echo sequences. In LVLM, the Anisotropic Euclidean Distance (AED) is designed as the localization training loss. Considering that the LV in apical echo images is approximately bullet-shaped, AED applies differently scaled metrics along different directions to ensure robust and efficient localization for subsequent index estimation. LV-CL automatically crops the LV region in a differentiable way, reducing the interference of other structures in the paired views, helping the subsequent modules focus on the target area, and letting quantification feedback propagate unimpeded. By designing an inter-frame gradient regularization that guides the inter-frame changes of the predicted indices, LVIM learns not only the index values but also their fluctuation, which further strengthens the estimation of sequence indices.

(3) A K-shaped Unified Network is proposed to integrate multi-task learning of LV segmentation and quantification from paired apical echo sequences. The K-shaped Unified Network (K-Net) simultaneously achieves, end-to-end, segmentation of the LV in multiple views (apical two-chamber and four-chamber views) and quantification of the LV in multiple dimensions (LAD and SAD, area, volume). K-Net consists of four elements. The K-shaped network structure, through the designed Attention Junction, interactively introduces information from segmentation to build a spatial attention map that guides the quantification task toward the relevant LV areas, and transfers quantification feedback to impose a global constraint on segmentation, thereby effectively integrating and promoting the learning of two heterogeneous tasks (pixel-wise classification for segmentation and image-wise regression for direct quantification). Bi-Res LSTMs distributed layer by layer in K-Net hierarchically extract spatio-temporal information from the echo sequence, using bidirectional recurrence and short-cut connections to model spatio-temporal relations among all frames. The newly designed Information Valve enables effective cross-flow of information between views by stimulating complementary information and suppressing redundant information. During training, the novel Evolution Loss comprehensively guides learning on sequential data, with a static constraint on frame values and a dynamic constraint on inter-frame changes for both segmentation and quantification.
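Neither the AED loss nor the static/dynamic sequence constraints are given as formulas in this abstract; the following is a minimal sketch of both ideas under stated assumptions (PyTorch, a 2-D location target, hypothetical axis weights and weighting factor lam), not the dissertation's exact formulation.

```python
import torch

def anisotropic_distance_loss(pred_xy, true_xy, axis_weights=(1.0, 0.5)):
    # Weighted Euclidean distance between predicted and true LV locations:
    # errors along the two image axes are scaled differently (hypothetical
    # weights), in the spirit of the bullet-shaped LV described above.
    w = torch.tensor(axis_weights, device=pred_xy.device)
    diff = (pred_xy - true_xy) * w                       # (B, 2) weighted error
    return torch.sqrt((diff ** 2).sum(dim=-1) + 1e-8).mean()

def static_dynamic_sequence_loss(pred_seq, true_seq, lam=0.5):
    # Static term fits per-frame index values; dynamic term fits the
    # inter-frame changes so the predicted indices fluctuate over the
    # cardiac cycle like the ground truth does.
    static = (pred_seq - true_seq).abs().mean()          # frame-value constraint
    d_pred = pred_seq[:, 1:] - pred_seq[:, :-1]          # predicted inter-frame change
    d_true = true_seq[:, 1:] - true_seq[:, :-1]          # true inter-frame change
    dynamic = (d_pred - d_true).abs().mean()
    return static + lam * dynamic
```

Here pred_seq and true_seq are assumed to have shape (batch, frames, indices); the same static-plus-dynamic pattern would apply to segmentation outputs if the per-frame term were replaced by a segmentation loss.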
Keywords/Search Tags:Left ventricle, echocardiographic image sequence, paired apical views, segmentation, direct quantification