Stereotactic body radiation therapy (SBRT) is an advanced technology widely recognized for the treatment of cancer. Image-guided positioning and localization of the patient before treatment, so that the tumor is precisely aligned with the beam, is a prerequisite for applying this technology. For lung tumors, the tumor position obtained by manual or automatic rigid image registration before radiotherapy has limited accuracy, which leads to large errors in the image-guided target position, requires repeated verification and adjustment, and slows the treatment process. In this paper, based on a CT image segmentation network and a cone-beam CT image registration model, we propose a method for discriminating lung tumor position errors that improves the accuracy of image-guided target localization, reduces the number of corrections, and improves treatment efficiency. The main research contents and results of this paper are as follows:

(1) The characteristics of stereotactic body radiation therapy for lung tumors are outlined, and the main medical equipment and the specific radiotherapy workflow are described. The current requirements for tumor position accuracy during SBRT and the main factors causing tumor positioning errors during patient setup are analyzed. According to the specific requirements of radiotherapy, the overall scheme of the tumor position error discrimination method is formulated.

(2) To achieve precise image-guided localization of lung lesions, a segmentation network, WU-Net, with a double contraction path structure is proposed for accurate segmentation of lung nodules. The architecture is a modified form of U-Net that widens the encoder sub-network by superimposing a quadrilateral connection structure, allowing the network to extract more effective image features. In the decoder sub-network, a unified fusion mechanism for feature maps at three resolutions improves the model's handling of feature maps, and shallow image features are fused through multi-scale skip connections, which narrows the semantic gap between encoder and decoder feature maps and enables automatic, accurate delineation of lung nodule boundaries. The accuracy and feasibility of the segmentation model are verified on the publicly available LIDC-IDRI lung nodule dataset.
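The exact WU-Net design is not reproduced here; purely as a rough illustration of the double-contraction-path idea (a U-Net-style encoder widened with a second, parallel contraction path whose features feed the skip connections), a minimal PyTorch sketch follows. All class names, channel widths, and depths are hypothetical placeholders, not the configuration used in this work.

```python
# Illustrative sketch only: a toy U-Net variant with two parallel encoder
# (contraction) paths whose features are concatenated at each scale and passed
# to the decoder through skip connections. Names and sizes are hypothetical.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm and ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class WideEncoderUNet(nn.Module):
    """Toy segmenter: two independent contraction paths widen the encoder; their
    concatenated feature maps serve as skip connections into the decoder."""

    def __init__(self, in_ch=1, num_classes=1, base=16):
        super().__init__()
        self.enc_a = nn.ModuleList([conv_block(in_ch, base), conv_block(base, base * 2)])
        self.enc_b = nn.ModuleList([conv_block(in_ch, base), conv_block(base, base * 2)])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 4, base * 4)
        # Decoder: upsample, fuse with the concatenated encoder features, refine.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 4, 2, stride=2)
        self.dec2 = conv_block(base * 4 + base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base * 2, 2, stride=2)
        self.dec1 = conv_block(base * 2 + base * 2, base)
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        # Level-1 and level-2 features from both paths, concatenated per scale.
        a1, b1 = self.enc_a[0](x), self.enc_b[0](x)
        s1 = torch.cat([a1, b1], dim=1)
        a2, b2 = self.enc_a[1](self.pool(a1)), self.enc_b[1](self.pool(b1))
        s2 = torch.cat([a2, b2], dim=1)
        z = self.bottleneck(self.pool(s2))
        d2 = self.dec2(torch.cat([self.up2(z), s2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))
        return self.head(d1)  # per-pixel nodule logits


if __name__ == "__main__":
    model = WideEncoderUNet()
    logits = model(torch.randn(1, 1, 128, 128))  # e.g. a 128x128 CT patch
    print(logits.shape)  # torch.Size([1, 1, 128, 128])
```

In this simplified form, widening the encoder simply means running two contraction paths in parallel and concatenating their outputs before each skip connection; the paper's fusion of three resolution levels in the decoder is not modeled here.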
(3) For SBRT of lung tumors, where cone-beam CT and planning CT images must be registered to localize the target region, an unsupervised registration method based on adversarial learning is proposed. The registration network, which serves as the generative network, is designed with a dual-input structure to reduce the effect of gray-scale differences between cone-beam CT and CT images, and its down-sampled feature maps are fused with feature maps at the same level to improve registration performance. In addition, a cone-beam CT and CT image similarity metric network acts as the discriminative network in the generative adversarial framework, evaluating the match between the cone-beam CT image and the registered CT image from feature maps at multiple convolution depths. Registration experiments on lung tumors are performed on the 4D-Lung dataset, and the results show that the model achieves good registration performance. The image registration method proposed in this paper can determine the precise location of the tumor in cone-beam CT, providing a way to improve the accuracy of image-guided patient setup and technical support for stereotactic body radiation therapy.
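To make the adversarial registration scheme summarized in (3) more concrete, the following minimal PyTorch sketch shows one possible arrangement: a dual-input generator that predicts a dense displacement field, a resampling step that warps the planning CT toward the cone-beam CT, and a discriminator that exposes features at several convolution depths as the similarity signal. It is a simplified 2-D illustration under assumed names, losses omitted, and is not the network or training procedure used in this work.

```python
# Illustrative sketch only: dual-input registration generator, warping step, and
# a multi-depth similarity discriminator. Hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegistrationGenerator(nn.Module):
    """Takes the CBCT (fixed) and CT (moving) images and predicts a dense 2-D
    displacement field used to warp the CT."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2, 3, padding=1),  # 2 channels: (dx, dy) per pixel
        )

    def forward(self, cbct, ct):
        return self.net(torch.cat([cbct, ct], dim=1))  # (B, 2, H, W)


def warp(moving, flow):
    # Resample the moving image on an identity grid plus the predicted offsets.
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1).to(moving)
    offset = flow.permute(0, 2, 3, 1)  # (B, H, W, 2) in normalized coordinates
    return F.grid_sample(moving, grid + offset, align_corners=True)


class SimilarityDiscriminator(nn.Module):
    """Scores how well a (CBCT, warped CT) pair matches, exposing intermediate
    feature maps from several depths as the multi-depth similarity signal."""

    def __init__(self, ch=32):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(ch * 2, 1, 4, stride=2, padding=1)),
        ])

    def forward(self, cbct, warped_ct):
        feats, x = [], torch.cat([cbct, warped_ct], dim=1)
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return x, feats  # realism score map plus multi-depth feature maps


if __name__ == "__main__":
    cbct, ct = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
    gen, disc = RegistrationGenerator(), SimilarityDiscriminator()
    flow = gen(cbct, ct)
    aligned_ct = warp(ct, flow)
    score, _ = disc(cbct, aligned_ct)
    print(flow.shape, aligned_ct.shape, score.shape)
```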