Esophageal cancer is one of the deadliest cancers and the eighth most common worldwide. China is a high-incidence region: roughly half of the world's esophageal cancer cases occur in China. For patients with early-stage (T1 and T2) esophageal cancer, surgical resection yields the longest disease-free survival; patients with more advanced disease (T3 and T4) may benefit from neoadjuvant chemoradiation before surgery. Accurate preoperative T staging is therefore key to treatment selection and survival. Magnetic resonance imaging (MRI) offers high soft-tissue contrast and can be used to stage esophageal cancer, but reading MR images takes considerable time and the result depends heavily on the radiologist's experience. In this study, we therefore investigated artificial intelligence for automatic T staging of esophageal cancer from MR images.

First, we used radiomics to differentiate early-stage from late-stage esophageal cancer. We extracted shape, gray-level, and texture features from MR images acquired with a radial volumetric interpolated breath-hold examination (r-VIBE) sequence. The boundary of the tumor foci delineated by the radiologist was then expanded, and the same features were extracted from the expanded ROIs. The shape and texture features from the original ROI and the texture features from the expanded ROI were used to build classification models, and the features selected by these models were combined for the final radiomics model. The final model used three features and achieved a test AUC of 0.765.

Next, deep learning was applied to the same classification task using a typical deep neural network, ResNet. Because using 3D images as input placed a heavy burden on the hardware, we tried different pseudo-3D input schemes and compared their influence on model performance. We also transferred the idea of data augmentation to the prediction phase. The best model achieved a validation AUC of 0.777 and a test AUC of 0.703.

We then combined radiomics and deep learning by using their respective outputs as input to a new classification model. The combined model achieved a validation AUC of 0.765 and a test AUC of 0.783, better than either of the two models alone.

Finally, to make the diagnostic process fully automatic, another deep neural network, Mask R-CNN, was used to detect and segment esophageal cancer. The trained model reached an AP of 0.69 and a 3D Dice coefficient of 0.63. These results show that deep learning has potential for the detection and segmentation of esophageal cancer, and the segmentation model can be combined with the classification model to implement an automatic, computer-aided staging pipeline for esophageal cancer.
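The idea of carrying data augmentation into the prediction phase (test-time augmentation) can be sketched as follows. The `predict_fn` interface, the dummy model, and the particular flips are illustrative assumptions, not the study's exact configuration:

```python
import numpy as np

def tta_predict(predict_fn, volume: np.ndarray) -> float:
    """Test-time augmentation: average predictions over flipped copies.

    `predict_fn` is assumed to map a slice stack to a probability in [0, 1];
    the flips stand in for whatever augmentations were used during training.
    """
    variants = [
        volume,
        np.flip(volume, axis=-1),          # left-right flip
        np.flip(volume, axis=-2),          # up-down flip
        np.flip(volume, axis=(-2, -1)),    # both axes
    ]
    return float(np.mean([predict_fn(v) for v in variants]))

# Dummy classifier standing in for the trained ResNet: mean intensity as "probability"
dummy_model = lambda v: float(v.mean())
vol = np.random.default_rng(0).random((3, 32, 32))  # pseudo-3D input: 3 adjacent slices
p = tta_predict(dummy_model, vol)
```

Averaging over augmented copies tends to smooth out prediction variance from view-dependent artifacts, at the cost of a few extra forward passes per case.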
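The combination step, feeding the radiomics and deep-learning outputs into a new classifier, could look roughly like the sketch below. The synthetic scores and the logistic-regression combiner are assumptions for illustration; the study's actual upstream models are not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200
y = rng.integers(0, 2, n)  # 0 = early stage (T1/T2), 1 = late stage (T3/T4)

# Synthetic stand-ins for the two upstream outputs (assumption: each is a
# probability-like score weakly separated by the true stage)
radiomics_score = np.clip(0.5 * y + rng.normal(0.25, 0.2, n), 0, 1)
dl_score = np.clip(0.5 * y + rng.normal(0.25, 0.2, n), 0, 1)
X = np.column_stack([radiomics_score, dl_score])

# Simple split, then a logistic-regression combiner over the two scores
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
combiner = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, combiner.predict_proba(X_te)[:, 1])
```

Stacking the two scores lets the combiner weight each source by how informative it is, which is one plausible reason the combined model's test AUC exceeded either model alone.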
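The 3D Dice coefficient used to evaluate the segmentation can be computed over binary volume masks as follows (a minimal sketch; the toy masks are illustrative):

```python
import numpy as np

def dice_3d(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Volumetric (3D) Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy example: two 8-voxel cubes overlapping in 4 voxels
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1
b[2:4, 1:3, 1:3] = 1
print(round(dice_3d(a, b), 2))  # → 0.5, i.e. 2*4 / (8 + 8)
```

Computing Dice over the whole volume (rather than averaging per-slice scores) matches the 3D Dice of 0.63 reported above, since it weights each voxel equally regardless of how the tumor is distributed across slices.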