With the increasing incidence and mortality of colorectal cancer (CRC), accurate prognostic analysis of patients is a key issue in current colorectal cancer research. Radiomics is a means of extracting the key information contained in the region of interest (ROI) of radiological images and subjecting it to comprehensive, systematic analysis. A common approach uses texture features to assess the grayscale intensities and locations of pixels in an image, measuring intralesional heterogeneity. Deep learning discovers the distributed representations contained in data by transforming hierarchical features into more abstract high-level features or attributes. Compared with texture features defined by human rules, the features obtained by deep learning can better describe complex internal information and uncover deeper patterns. How to combine deep learning techniques with radiomics to improve predictive accuracy, and thus better support prognostic analysis of colorectal cancer patients, is therefore the central question of this work.

In this paper, we conducted a retrospective study of preoperative CT and clinicopathological data from colorectal cancer patients at the Fourth People's Hospital of Wuxi, and designed a non-invasive, accurate framework for prognostic survival analysis of colorectal cancer patients based on radiomics methods and deep learning techniques. The research contents are as follows:

(1) Data processing, enrollment, and statistical analysis: including data screening, data exclusion, and data labeling. Patients were screened according to pathology, histological grading, the completeness and accuracy of clinical information records, the completeness of CT data, and the presence of other abdominal diseases. In the end, 810 patients were included in the study. The clinical information of the study samples, including age, gender, pathological grade, tissue type, etc., was analyzed statistically, and the effect of each clinical variable on survival prognosis was assessed by
Kaplan-Meier survival analysis and the log-rank test.

(2) A deep radiomics model was constructed to extract deep self-learned high-throughput feature (SHF) signatures from 3D CT images. The model employs an encoder-decoder framework with an attention mechanism, and uses a multi-task training scheme that introduces reconstruction and perceptual losses to improve the quality and clarity of the reconstructed images, thereby achieving automatic learning of deep high-throughput feature signatures from CT images. The extracted SHF signatures were then entered, together with the hand-crafted texture features extracted by traditional methods, into a Cox proportional hazards model for survival-analysis feature validation. The results show that SHF outperforms the hand-crafted texture features in both overall discriminative ability and model accuracy.

(3) A deep neural network multi-task logistic regression model was constructed for survival prediction, to further validate the stability of the deep self-learned high-throughput features. The results on the test set (SHF vs. texture features: C-index 0.861 vs. 0.630; IBS 0.024 vs. 0.065) further confirm the superiority of SHF over texture features. To validate the clinical benefit of the deep self-learned high-throughput features, four decision curve analysis (DCA) models were constructed. The results show that the deep self-learned high-throughput features yield greater net benefit than the hand-crafted texture features extracted by traditional methods, and are thus more suitable for clinical use.
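Step (1) assesses each clinical variable with Kaplan-Meier curves and the log-rank test. In practice a library such as lifelines would be used; the following is a minimal dependency-free sketch of both procedures, with hypothetical survival data (times in months, event indicator 1 = death, 0 = censored) that is illustrative only and not from the study.

```python
def kaplan_meier(times, events):
    """Product-limit estimate; returns a list of (event time, S(t))."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    s, curve, i = 1.0, [], 0
    while i < len(pairs):
        t, deaths, removed = pairs[i][0], 0, 0
        # group all subjects sharing this time (events and censorings)
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve

def logrank(times1, events1, times2, events2):
    """Two-group log-rank chi-square statistic (1 degree of freedom)."""
    data = sorted([(t, e, 0) for t, e in zip(times1, events1)] +
                  [(t, e, 1) for t, e in zip(times2, events2)])
    n1, n2 = len(times1), len(times2)
    obs1 = exp1 = var = 0.0
    i = 0
    while i < len(data):
        t, d1, d2, c1, c2 = data[i][0], 0, 0, 0, 0
        while i < len(data) and data[i][0] == t:
            _, e, g = data[i]
            if g == 0:
                d1 += e; c1 += 1
            else:
                d2 += e; c2 += 1
            i += 1
        d, n_total = d1 + d2, n1 + n2
        if d and n_total > 1:
            obs1 += d1
            exp1 += d * n1 / n_total   # deaths expected in group 1
            var += d * (n1 / n_total) * (n2 / n_total) * (n_total - d) / (n_total - 1)
        n1 -= c1
        n2 -= c2
    return (obs1 - exp1) ** 2 / var if var > 0 else 0.0
```

A large chi-square (compared against the chi-square distribution with 1 df) indicates that the two survival curves differ, which is how the abstract's clinical variables would be flagged as prognostic.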
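Step (2) validates features by entering them into a Cox proportional hazards model, which is fit by maximizing the partial likelihood. As a sketch of what that objective looks like, here is the partial log-likelihood for a single covariate, assuming distinct event times (Breslow/Efron tie handling, multivariate covariates, and the optimizer itself are omitted; the data are hypothetical):

```python
from math import exp, log

def cox_partial_loglik(times, events, x, beta):
    """Cox partial log-likelihood for one covariate x and coefficient beta.
    For each event, compare the subject's risk score against the risk set
    (everyone still under observation at that time)."""
    ll = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            risk_set = sum(exp(beta * x[j])
                           for j in range(len(times)) if times[j] >= times[i])
            ll += beta * x[i] - log(risk_set)
    return ll
```

Fitting means searching for the beta that maximizes this quantity; at beta = 0 every subject is treated as equal risk, so each event simply contributes minus the log of its risk-set size.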
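The evaluation in step (3) rests on Harrell's C-index (concordance between predicted risk and observed event order) and on decision-curve net benefit. A minimal pure-Python sketch of both metrics follows; the predictions and labels are hypothetical, and IBS (integrated Brier score) is omitted since it additionally requires censoring-weighted predictions over time.

```python
def c_index(times, events, risks):
    """Harrell's concordance index: among comparable pairs, the fraction
    where the subject with the earlier event has the higher predicted risk.
    A pair is comparable only if the earlier subject actually had the event."""
    conc = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    conc += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comparable

def net_benefit(probs, labels, pt):
    """DCA net benefit of a model at threshold probability pt:
    NB = TP/n - (FP/n) * pt / (1 - pt)."""
    n = len(labels)
    tp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 0)
    return tp / n - fp / n * pt / (1.0 - pt)

def net_benefit_treat_all(labels, pt):
    """Reference 'treat everyone' strategy used on a decision curve."""
    prevalence = sum(labels) / len(labels)
    return prevalence - (1 - prevalence) * pt / (1.0 - pt)
```

A decision curve plots net benefit across a range of thresholds pt; a model is clinically useful where its curve lies above both the treat-all and treat-none (NB = 0) references, which is the sense in which the abstract reports SHF yielding greater gains than texture features.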