Of the tens of millions of patients diagnosed with cancer each year, about 70% will require radiation therapy, and 40% will be completely cured of cancer through it. However, radiation not only kills cancer cells in the tumor area but may also damage surrounding normal tissue. Therefore, the patient's lesion area is usually imaged before radiotherapy, and the tumor target volumes and organs-at-risk are delineated. Manual delineation of radiotherapy targets, however, requires doctors to devote a great deal of working time. Designing an automatic segmentation algorithm for clinical radiotherapy target delineation would greatly reduce doctors' workload and accelerate the formulation and implementation of treatment plans. To address the insufficient accuracy and high training cost of current automatic segmentation algorithms for tumor radiotherapy targets, this thesis optimizes the representational capacity and computational efficiency of deep learning networks according to their structural characteristics. The main work and achievements of this thesis are as follows:

(1) Considering that a convolutional neural network's ability to learn image features in the organ-at-risk segmentation task is limited by the receptive field of its convolution kernels, this thesis designs a multi-scale feature fusion mechanism to compensate for the shortcomings of convolutional networks in learning global feature information. Because Transformer-based networks are stronger at extracting long-range feature dependencies, a Transformer feature extractor is added to the convolution-based encoder. In this way, the network can exploit both the global and the local feature information of the image to achieve high-precision automatic segmentation of the organs-at-risk in the radiotherapy target area. In experimental evaluation, the proposed method achieved an average Dice score of 77.72 on segmentation of the organs-at-risk of nasopharyngeal carcinoma and 91.84 on segmentation of the organs-at-risk of lung cancer, both better than current mainstream segmentation algorithms.

(2) To address the fuzzy boundaries of tumor regions in the tumor segmentation task, this thesis uses multi-modal medical images as the network input and designs a network-level feature fusion structure, so that the network can exploit richer multi-modal feature information to determine the tumor region and its boundary. A multi-branch feature extraction structure is designed to extract feature information from the different modalities of medical images. To counter the geometric growth in computational load and parameter count caused by the multi-branch structure, this thesis uses dense connections to optimize and streamline the original Transformer network, so that it retains strong image feature extraction ability while consuming few resources. Experimental comparison showed that the proposed method achieved a Dice coefficient of 73.78 and an HD95 of 7.81 on the head-and-neck nasopharyngeal carcinoma tumor segmentation task, better than the 2020 champion scheme on this dataset. Likewise, the segmentation accuracy achieved on the prostate tumor segmentation task was also better than that of the compared segmentation algorithms.

The results of this thesis further improve the prediction accuracy and computational efficiency of deep learning networks in radiotherapy target segmentation tasks, and are expected to be applied in the diagnostic system of the Radiotherapy Department of the First Affiliated Hospital of the University of Science and Technology of China in the near future. This will greatly shorten the time doctors spend delineating radiotherapy targets, speed up the formulation of radiotherapy plans, and benefit radiotherapy patients at large.
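The hybrid encoder of contribution (1) combines local features from convolution with global features from self-attention. The following is a minimal NumPy sketch of that general idea only; the array shapes, the single-head attention with identity projections, and the additive fusion rule are all illustrative assumptions, not the thesis's actual architecture:

```python
import numpy as np

def conv3x3_same(x, w):
    """3x3 convolution with zero padding (captures local features).
    x: (H, W, C), w: (3, 3, C, C_out) -> (H, W, C_out)."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]           # (3, 3, C)
            out[i, j] = np.tensordot(patch, w, axes=3)
    return out

def self_attention(tokens):
    """Single-head self-attention over all spatial tokens (captures
    global dependencies). tokens: (N, C) -> (N, C).
    Q = K = V = tokens here, purely for brevity."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ tokens

def hybrid_encoder_block(x, w):
    """Fuse the local (conv) and global (attention) feature maps by
    element-wise addition -- one toy 'CNN + Transformer' encoder stage."""
    H, W, C = x.shape
    local = conv3x3_same(x, w)
    global_ = self_attention(x.reshape(H * W, C)).reshape(H, W, C)
    return local + global_

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))                    # toy feature map
w = rng.standard_normal((3, 3, 4, 4)) * 0.1
y = hybrid_encoder_block(x, w)
print(y.shape)                                        # (8, 8, 4)
```

In a real network each branch would of course carry learned projections, normalization, and multiple heads; the sketch only shows why the fused output can see both a 3x3 neighborhood and every other spatial position at once.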
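Contribution (2) rests on two mechanisms: per-modality feature-extraction branches, and dense connections that let later layers reuse earlier features instead of recomputing them, which is what keeps the parameter count down. The toy sketch below illustrates only that connectivity pattern; the two-modality setup, layer widths, and plain linear+ReLU layers are hypothetical stand-ins for the thesis's streamlined Transformer blocks:

```python
import numpy as np

def branch(x, w):
    """One per-modality feature-extraction branch (a single linear+ReLU
    layer standing in for a full encoder branch)."""
    return np.maximum(x @ w, 0.0)

def dense_block(feats, weights):
    """DenseNet-style connectivity: each layer's input is the
    concatenation of the block input and every earlier layer's output,
    so features are reused rather than re-learned."""
    outs = [feats]
    for w in weights:
        h = np.concatenate(outs, axis=-1) @ w         # sees all prior outputs
        outs.append(np.maximum(h, 0.0))
    return np.concatenate(outs, axis=-1)

rng = np.random.default_rng(1)
# Two toy modalities (e.g. CT and MR intensities), 16 voxels x 8 features.
ct, mr = rng.standard_normal((16, 8)), rng.standard_normal((16, 8))
w_ct, w_mr = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
fused = np.concatenate([branch(ct, w_ct), branch(mr, w_mr)], axis=-1)  # (16, 16)
# Each dense layer adds only 4 features; input width grows 16 -> 20.
weights = [rng.standard_normal((16, 4)), rng.standard_normal((20, 4))]
out = dense_block(fused, weights)
print(out.shape)                                      # (16, 24)
```

Note how each dense layer contributes only a small number of new channels while still reading the full concatenated history; this is the growth-control property the thesis exploits to offset the cost of the multi-branch structure.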