
Cross-Modular Digital Breast Tomosynthesis Feature Generation And Application Research In Breast Cancer Diagnosis

Posted on: 2024-04-05
Degree: Master
Type: Thesis
Country: China
Candidate: J Wang
Full Text: PDF
GTID: 2544307103969379
Subject: Electronic information

Abstract/Summary:
Breast cancer is a serious threat to human life and health, and early screening can effectively reduce breast cancer mortality. Mammography (MG) is convenient, non-invasive, and low-cost, and is currently one of the most commonly used imaging modalities for the early diagnosis of breast cancer in China. However, MG performs poorly when assessing lesions in dense glandular tissue. Digital breast tomosynthesis (DBT) provides structural information about the breast from different image slices and therefore richer lesion information; studies have shown that doctors reading DBT images achieve a lower misdiagnosis rate than with MG images. However, limited by the acquisition conditions for DBT imaging in China, DBT cannot yet be widely adopted for early screening. To improve the early diagnostic value of MG images for breast cancer, this study exploits the higher information content of DBT images and designs a generative adversarial network for cross-modal DBT image feature generation. The work consists of the following parts:

(1) Benign/malignant diagnosis of MG and DBT images with convolutional neural networks. Diagnostic performance on benign versus malignant cases reflects the classification value of the features a convolutional neural network extracts. MG and DBT images are each fed into a deep residual network (ResNet) for feature extraction, and the features are then passed to a classifier network for benign/malignant classification. Making full use of the slice information in DBT images, the optimal AUC values with the ResNet backbone were 0.845 for MG and 0.890 for DBT; DBT features thus classify better, consistent with radiologists' diagnostic experience. To obtain better image features for the subsequent work, this thesis proposes a multi-scale attention model. Built on ResNet, it takes the outputs of different stages as feature maps and feeds supplementary attention information back into the network through encoders, attention pooling, and related modules. With MG and DBT images as input, the model's classification performance improved markedly, reaching AUC values of 0.861 and 0.916, respectively, showing that the multi-scale attention model extracts better image features.

(2) Cross-modal DBT image feature generation. To generate DBT features with better classification performance from MG features, this study designs a cross-modal DBT feature generation model consisting of an encoder, a decoder, a generator, and a discriminator. The encoder extracts features common to MG and DBT, reduces the feature dimensionality, and suppresses MG image noise; the decoder reconstructs the encoder's output; and the generator and discriminator bring the MG and DBT feature distributions together through adversarial learning. The Fréchet Inception Distance (FID) is used to evaluate generation quality. In selecting the adversarial model, WGAN-GP (Wasserstein GAN with gradient penalty) and LSGAN (least squares GAN) were compared; both achieved good generation quality, with LSGAN standing out at an FID of 10.769. The feature maps of generated and real features were then compared and their means tested, showing no significant difference between the two (P = 0.651 > 0.05).

(3) Breast cancer diagnosis with generated features, and interpretability analysis. To demonstrate that the generative model improves the classification value of MG images, MG images are used as input with an ordinary ResNet as the feature extractor (AUC = 0.845), and the encoder and generator parameters of the generative model are frozen for feature generation. Feeding the generated features into the classifier network raises the final AUC to 0.878, a significant improvement (P = 0.002). In the interpretability analysis, comparing the class activation maps of the convolutional network and the generative model shows that the generative model attends more accurately to the lesion area, which explains its better classification performance. Finally, experiments with several different MG feature extractors and component ablation experiments show that the generative model generalizes well and that each component improves the quality of the generated DBT features.

In summary, this study proposes a cross-modal DBT feature generation and diagnosis method. At application time, only MG images, with their lower information content, are input; image features are extracted, and DBT image features with higher information content are generated by the cross-modal model and used for benign/malignant classification, outperforming conventional benign/malignant diagnosis based on MG images alone. The study is of real significance for early computer-aided diagnosis using MG images.
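The AUC values reported in part (1) follow the standard rank interpretation: the probability that a randomly chosen malignant case receives a higher classifier score than a randomly chosen benign case. A minimal sketch with hypothetical classifier scores (the thesis does not publish its per-case outputs):

```python
# Pairwise AUC: fraction of (malignant, benign) pairs ranked correctly,
# counting ties as half a win. Scores below are illustrative only.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier outputs for 3 malignant and 3 benign cases.
malignant = [0.9, 0.8, 0.4]
benign = [0.3, 0.5, 0.2]
print(auc(malignant, benign))  # → 0.8888888888888888 (= 8/9)
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why the improvement from 0.845 to 0.890 is meaningful.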
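The abstract names "attention pooling" among the multi-scale attention components without specifying its form. One plausible sketch, assuming softmax-weighted pooling over the spatial positions of a stage's feature map; the function, shapes, and scoring vector here are illustrative assumptions, not the thesis architecture:

```python
import numpy as np

def attention_pool(feats, w):
    """feats: (N, D) feature vectors at N spatial positions; w: (D,) scoring
    vector. Returns one D-dimensional pooled feature."""
    scores = feats @ w                       # (N,) attention logits
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ feats                   # (D,) convex combination

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))  # 16 spatial positions, 8 channels
w = rng.normal(size=8)
pooled = attention_pool(feats, w)
print(pooled.shape)  # → (8,)
```

Because the softmax weights are non-negative and sum to 1, the pooled vector is a convex combination of the position features, so each channel stays within the range observed at that stage.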
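The LSGAN objective referenced in part (2) trains the discriminator with a least-squares loss toward label 1 on real DBT features and 0 on generated ones, while the generator pushes the discriminator's output on generated features toward 1. A minimal sketch with hypothetical discriminator outputs:

```python
# Least-squares GAN losses over lists of scalar discriminator outputs.
# The inputs are hypothetical; in the thesis they would be D's responses
# to real DBT features and to features generated from MG features.

def lsgan_d_loss(d_real, d_fake):
    real_term = sum((d - 1.0) ** 2 for d in d_real) / len(d_real)
    fake_term = sum(d ** 2 for d in d_fake) / len(d_fake)
    return 0.5 * (real_term + fake_term)

def lsgan_g_loss(d_fake):
    return 0.5 * sum((d - 1.0) ** 2 for d in d_fake) / len(d_fake)

# A perfect discriminator (1 on real, 0 on fake) has zero loss; a fully
# fooled discriminator drives the generator loss to zero.
print(lsgan_d_loss([1.0, 1.0], [0.0, 0.0]))  # → 0.0
print(lsgan_g_loss([1.0, 1.0]))              # → 0.0
```

Unlike the sigmoid cross-entropy of the original GAN, the quadratic penalty keeps gradients informative even for samples the discriminator classifies confidently, which is one reason LSGAN training tends to be more stable.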
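The FID metric used in part (2) fits a Gaussian to each feature set and compares their means and covariances: FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^{1/2}). A sketch of the computation on synthetic features (not the thesis data; a NumPy dependency is assumed):

```python
import numpy as np

def fid(x, y):
    """x, y: (n_samples, dim) feature matrices. Lower is better; 0 means the
    two Gaussian fits coincide."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    c1 = np.cov(x, rowvar=False)
    c2 = np.cov(y, rowvar=False)
    # Tr((C1 C2)^(1/2)) via eigenvalues of the product; eigenvalues of a
    # product of covariance matrices are non-negative up to round-off.
    eig = np.linalg.eigvals(c1 @ c2)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1) + np.trace(c2) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))
same = real.copy()
shifted = real + 3.0  # same covariance, mean shifted by 3 per dimension
print(abs(fid(real, same)) < 1e-6)  # → True (identical sets score ~0)
print(fid(real, shifted) > 30.0)    # → True (mean shift alone adds 3^2 * 4)
```

This distributional view is why a low FID (10.769 for LSGAN here) together with a non-significant mean difference supports the claim that generated features resemble real DBT features.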
Keywords/Search Tags: Mammography, Digital Breast Tomosynthesis, attention mechanism, feature generation, benign and malignant diagnosis