
The Research And Application Of Artificial Intelligence In The Detection And Classification Of Early Gastric Cancer Under Gastrointestinal Endoscopy

Posted on: 2022-03-08
Degree: Doctor
Type: Dissertation
Country: China
Candidate: D H Tang
Full Text: PDF
GTID: 1484306725971719
Subject: Clinical Medicine
Abstract/Summary:
Aims: This study aims to develop artificial intelligence-assisted models based on deep convolutional neural networks (DCNN) to detect and classify early gastric cancer (EGC) under endoscopy, and to evaluate the clinical auxiliary value of these models. First, we developed two DCNN models to detect EGC under white light imaging (WLI) and narrow-band imaging with magnifying endoscopy (NBI-ME), to facilitate thorough screening for EGC lesions. Second, we developed three DCNN models to classify EGC lesions by invasion depth and by differentiation status, and to delineate the margins of EGC lesions, so that EGC lesions can be diagnosed accurately. Third, we verified the diagnostic performance of these models and evaluated their clinical auxiliary value, thereby providing initial supportive evidence for their subsequent clinical translation.

Materials and methods: (1) A total of 45,240 endoscopic images from 1,364 patients at our center were divided into a training dataset (35,823 images from 1,085 patients) and a validation dataset (9,417 images from 279 patients). Another 1,514 images from three other hospitals were used for external validation, and 26 videos from 26 patients at our center were included for video validation. A DCNN model with the YOLOv3 architecture was developed on the training dataset. We evaluated its diagnostic ability on the validation datasets and the video dataset, compared its diagnostic performance with that of endoscopists, and then evaluated the performance of endoscopists with and without reference to the system. Accuracy, sensitivity, specificity, positive predictive value, negative predictive value, area under the curve (AUC), and Cohen's kappa coefficient were measured to assess detection performance. (2) A total of 21,785 NBI-ME images of 1,240 EGC patients and 20 videos of 20 EGC patients from five centers were divided into a training dataset (13,151 images of 810 patients), an internal validation dataset (7,057 images of 283 patients), an external validation dataset (1,577 images of 147 patients), and a video dataset (20 videos of 20 patients). A DCNN model with the YOLOv3 architecture was developed on the training dataset. We evaluated its diagnostic ability on the validation datasets and the video dataset, compared its diagnostic performance with that of endoscopists, and analyzed the performance of endoscopists with and without the assistance of the model. The same metrics as above were measured to assess detection performance. (3) A total of 3,407 WLI images of 666 gastric cancer (GC) patients (training dataset) from two centers were used to develop a DCNN model with the ResNet50 architecture to determine the invasion depth of GC. Another 228 WLI images of 62 patients (testing dataset) were used to test the model's diagnostic performance. The testing dataset and a video dataset (54 videos of 54 patients) were used to compare the diagnostic performance and consistency of endoscopists with and without reference to the DCNN model. Grit scores of the endoscopists were collected, and the correlation between Grit scores and diagnostic accuracy was analyzed with linear regression. The main outcomes were accuracy, sensitivity, specificity, positive predictive value, negative predictive value, AUC, Cohen's kappa coefficient, and the correlation coefficient. (4) A retrospectively collected set of 3,090 NBI images of 222 EGC patients from two centers was divided into a training dataset (2,075 images of 87 patients), an internal validation dataset (351 images of 58 patients), and an external validation dataset (664 images of 77 patients) to develop the DCNN1 model, with the ResNet50 architecture, to identify the differentiation status of EGC. A total of 768 images of 83 patients were used to develop the DCNN2 model, with the UNet++ architecture, to delineate the margins of EGC lesions, and 160 images of 49 patients were used for internal validation. We evaluated the diagnostic performance and consistency of endoscopists with and without the assistance of DCNN1 in identifying the differentiation status of EGC on the internal validation dataset. For the DCNN1 model, the main outcomes were accuracy, sensitivity, specificity, positive predictive value, negative predictive value, AUC, and Cohen's kappa coefficient. For the DCNN2 model, the main outcomes were Intersection over Union (IoU), Dice, precision, and recall.

Results: (1) The DCNN system showed good performance in EGC detection on the validation datasets, with accuracy of 85.1%–91.2%, sensitivity of 85.9%–95.5%, specificity of 81.7%–90.3%, and AUC of 0.887–0.940. The system showed better diagnostic performance (accuracy, 95.3%) than senior (accuracy, 87.3%) and junior (accuracy, 73.6%) endoscopists, and improved the performance of the endoscopists. The system was able to process oesophagogastroduodenoscopy (OGD) video streams to detect EGC lesions in real time, with a sensitivity of 88.5%. (2) The DCNN model showed good performance in EGC detection on the validation datasets, with AUC of 0.888–0.951, and detected all EGC lesions in the video dataset. It showed better diagnostic performance (AUC, 0.959) than senior (AUC, 0.842–0.880) and junior (AUC, 0.777–0.812) endoscopists, and improved the performance and consistency of the endoscopists. The model was able to process OGD video streams to detect EGC lesions in real time, with a sensitivity of 88.5%. (3) The DCNN model discriminated intramucosal GC from advanced GC with an AUC of 0.942, a sensitivity of 90.5%, and a specificity of 85.3%. With the use of the DCNN model, the diagnostic performance of novice endoscopists was almost comparable to that of expert endoscopists (accuracy: 84.6% vs. 85.5%; sensitivity: 85.7% vs. 87.4%; specificity: 83.3% vs. 83.0%). The mean pairwise kappa value among endoscopists increased significantly with the use of the DCNN model, and the diagnostic duration decreased considerably with its assistance. The correlation between perseverance of effort and the diagnostic accuracy of endoscopists was diminished when the DCNN model was used. (4) The DCNN1 model showed good performance in identifying the differentiation status of EGC on the validation datasets, with an AUC of 0.932, a sensitivity of 90.9%, and a specificity of 91.5%. The DCNN2 model showed good performance in delineating margins, with a Dice of 0.818, a precision of 0.69, and a recall of 1.00 at an IoU threshold of 0.5. DCNN1 showed better performance than senior (sensitivity, 94.7%; specificity, 63.3%) and junior (sensitivity, 68.5%; specificity, 51.0%) endoscopists, and improved the performance of senior (sensitivity, 97.7%; specificity, 84.0%) and junior (sensitivity, 90.1%; specificity, 71.6%) endoscopists. The consistency between endoscopists and pathological results improved significantly with the assistance of the DCNN1 model.

Conclusions: The AI-assisted models based on DCNN showed good diagnostic performance in the detection and classification of EGC and improved the diagnostic accuracy and consistency of endoscopists, demonstrating potential clinical value in improving the detection rate of EGC and promoting homogeneity of diagnosis.
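For reference, the diagnostic metrics used throughout (accuracy, sensitivity, specificity, positive and negative predictive value, and Cohen's kappa) can all be derived from a 2×2 confusion matrix. The sketch below is illustrative only and is not code from the dissertation:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix
    (tp/fp/tn/fn counted against the pathological gold standard)."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # recall for the positive (EGC) class
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    # Cohen's kappa: agreement beyond chance between model and gold standard
    p_o = accuracy
    p_e = (((tp + fp) / total) * ((tp + fn) / total)
           + ((tn + fn) / total) * ((tn + fp) / total))
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, ppv, npv, kappa
```

The same formulas apply whether the positive class is "EGC present" (detection) or, e.g., "intramucosal GC" (invasion-depth classification).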
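The AUC values reported above also admit a rank-based reading: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann–Whitney U formulation). A minimal illustrative sketch, not taken from the dissertation:

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) score pairs the model
    ranks correctly; ties count as half a correct ranking."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```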
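The margin-delineation outcomes for the DCNN2 model (IoU, Dice, precision, recall) are overlap measures between the predicted and ground-truth lesion masks. A minimal sketch on flat binary masks, illustrative only and not the dissertation's evaluation code:

```python
def mask_overlap_metrics(pred, truth):
    """Dice, IoU, pixel precision and pixel recall for two binary masks,
    given as flat sequences of 0/1 of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, iou, precision, recall
```

The reported "precision of 0.69 and recall of 1.00 when IoU at 0.5" presumably counts a predicted margin as correct when its lesion-level IoU reaches 0.5, which is a common convention in segmentation and detection evaluation.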
Keywords/Search Tags:Artificial intelligence(AI), Endoscopy, Deep learning, Early gastric cancer, Detection, Classification