In cancer patients the incidence of spinal metastases is high, and suitable treatment can prolong the patient's survival. To choose a treatment, doctors separately score six conditions, such as the spinal position of the tumor, the degree of collapse of the spine, and the condition of the spinal line, and determine the appropriate treatment according to the sum of the scores. The six scores are judged subjectively, so different doctors may reach different diagnoses for the same patient, and the scoring process is time-consuming and laborious; computer-aided diagnostic information is therefore needed. The diagnosis of spinal metastases is based on CT images, from which computer vision algorithms can extract lesion information for auxiliary diagnosis. Because the metastasis score depends on the location of the spinal tumor and on the spine segmentation results, two important tasks are selected for study: (1) spine segmentation and detection, and (2) classification of spinal metastases based on CT images. This paper proposes a weakly supervised self-learning network model and a deep dual-view network model to accomplish these tasks.

In the spine segmentation and detection task, only annotations of the four corner points of each vertebra are available. This paper therefore designs a weakly supervised self-learning model based on Mask R-CNN: during training, the model uses its own predicted segmentation, rather than the weak annotations, to constrain the segmentation network, and the segmentation and detection results are obtained through this self-learning procedure. Based on the positional distribution of the spine and the area difference between adjacent vertebrae, a spinal force-line constraint and an adjacent-frame constraint are further designed to improve segmentation and detection accuracy. On a collected set of 400 spine images, the AP of spine detection reaches
95.13%. Secondly, this paper explores the classification of spinal metastases based on CT images. Since doctors combine the sagittal and coronal information with the cross-sectional view when diagnosing the disease, a deep dual-view network model is proposed. The model consists of two branches: an X-Y convolution branch extracts features from each cross-sectional image separately, and a Z convolution branch obtains features of the adjacent images along the Z direction; the outputs of the two branches are fused to form the final prediction. Because lesions are three-dimensional objects, they appear in several consecutive images, so a smoothing loss function is designed to constrain the classification results of adjacent images to be consistent. This paper collects a dataset of spinal metastases from 316 patients; the AUC of the experimental results reaches 91.34%, which demonstrates the effectiveness of the proposed method.
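The abstract does not give implementation details for the weakly supervised self-learning step, but the idea of turning four corner-point annotations into an initial pseudo-mask and then replacing it with the model's own confident predictions can be sketched as follows. All function names, thresholds, and the rasterization method are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quad_to_mask(corners, shape):
    """Rasterize a quadrilateral given by 4 corner points into a binary mask.

    corners: sequence of 4 (x, y) points in order around the quadrilateral.
    shape:   (height, width) of the output mask.
    Uses the even-odd (crossing-number) rule, evaluated at pixel centers.
    """
    corners = np.asarray(corners, dtype=float)
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys = ys + 0.5  # sample at pixel centers
    xs = xs + 0.5
    inside = np.zeros((h, w), dtype=bool)
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        # Does this edge cross the horizontal ray going right from the pixel?
        crosses = (y1 <= ys) != (y2 <= ys)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_cross = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (xs < x_cross)
    return inside

def self_learning_update(pseudo_mask, predicted_prob, lo=0.3, hi=0.7):
    """One self-learning step: keep the current pseudo-label where the model
    is uncertain, and overwrite it where the prediction is confident.
    The thresholds lo/hi are hypothetical."""
    updated = pseudo_mask.copy()
    updated[predicted_prob >= hi] = True
    updated[predicted_prob <= lo] = False
    return updated
```

In this reading, `quad_to_mask` supplies the initial training target from the weak corner-point annotation, and each training round replaces that target with `self_learning_update` applied to the network's predicted probability map.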
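The smoothing loss for the dual-view classifier is described only at a high level; one plausible form, penalizing disagreement between the predicted probabilities of adjacent slices along the Z direction, is sketched below. The squared-difference form, the weighting coefficient `lam`, and the function names are assumptions for illustration.

```python
import numpy as np

def smooth_loss(slice_probs):
    """Smoothing term that penalizes disagreement between the predicted
    class probabilities of adjacent CT slices along the Z direction.

    slice_probs: array of shape (num_slices, num_classes).
    Returns the mean squared difference between consecutive slices.
    """
    diffs = slice_probs[1:] - slice_probs[:-1]
    return float(np.mean(diffs ** 2))

def total_loss(slice_probs, targets, lam=0.1):
    """Illustrative combined objective: per-slice cross-entropy plus the
    smoothing penalty, weighted by a hypothetical coefficient lam."""
    eps = 1e-12
    ce = -np.mean(np.log(slice_probs[np.arange(len(targets)), targets] + eps))
    return ce + lam * smooth_loss(slice_probs)
```

Under this formulation, identical predictions on consecutive slices contribute zero smoothing loss, so the term only activates when adjacent slices disagree, matching the stated goal of keeping classification results of adjacent images consistent.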