Liver cancer is one of the most common malignant tumors in the world and poses a serious threat to human life and health. Accurate segmentation of liver tumors from Computed Tomography (CT) images is very important for subsequent clinical diagnosis. With the rapid development of deep learning, medical image segmentation has made great progress. However, tumors in liver CT images appear at random locations with varied shapes and sizes, and the contrast between the liver and surrounding normal tissues and organs, as well as between tumors and normal liver tissue, is low, resulting in blurred tumor boundaries. Restricted by these factors, many existing segmentation methods suffer from under-segmentation, over-segmentation, and missed detection of small-scale tumors, and accurate segmentation of liver tumors remains a challenging task. This paper addresses these problems as follows:

(1) To address boundary blurring and the missed detection of small-scale tumors in liver tumor segmentation, BBTUNet (Boundary Bridge Transformer UNet), a liver tumor segmentation network based on the Transformer mechanism, is proposed. First, the skip-connection structure is redesigned around the Transformer mechanism, which effectively compensates for the UNet architecture's weakness in capturing long-range contextual dependencies. Second, separable dilated convolution is introduced to build a BFFN (Bridge Feedforward Neural Network) module, which fuses features from multi-scale receptive fields and refines liver tumor boundaries. The five evaluation metrics Dice, IoU, Acc, Sen, and Spe reach 82.1%, 74.8%, 96.4%, 78.7%, and 96.1%, respectively, improvements of 10.9%, 6.8%, 4.6%, 8.4%, and 5% over the original UNet. BBTUNet effectively improves liver tumor segmentation performance and alleviates both over-segmentation and under-segmentation.

(2) To address the random locations and multiple scales of liver tumors, a liver tumor segmentation network combining a context prior and cross-attention (Context-prior Transformer Cross-attention Net) is proposed as further research based on the Transformer context bridge. First, a context prior layer is added in the encoder stage to effectively aggregate the encoder's multi-scale features; supervised by an affinity loss, it captures rich intra-class and inter-class contextual information and yields a context prior map, which is embedded in the skip-connection structure to guide the transmission of features from regions of interest. Then, an efficient Transformer mechanism is used to build cross-attention and rebuild the skip-connection structure; by cross-attending to the deep and shallow features of the encoder and decoder, multi-level semantic features are obtained, narrowing the semantic gap between encoder and decoder and improving segmentation performance. Liver tumor segmentation experiments are conducted on the 3DIRCADb dataset. The results show that Dice, IoU, Acc, Sen, and Spe reach 83.2%, 75.3%, 97.1%, 79%, and 96.7%, respectively, improvements of 12%, 7.3%, 5.3%, 8.7%, and 5.6% over the original UNet, which effectively improves liver tumor segmentation.
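As a point of reference for the five evaluation metrics reported above, the following is a minimal NumPy sketch of how Dice, IoU, Acc (accuracy), Sen (sensitivity), and Spe (specificity) are conventionally computed from binary segmentation masks. This is an illustration of the standard definitions, not the thesis's evaluation code; the function and variable names are assumptions.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Standard binary segmentation metrics (1 = tumor, 0 = background).

    Illustrative only: names and smoothing constant are assumptions,
    not taken from the thesis's evaluation pipeline.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()      # true positives
    tn = np.logical_and(~pred, ~target).sum()    # true negatives
    fp = np.logical_and(pred, ~target).sum()     # false positives
    fn = np.logical_and(~pred, target).sum()     # false negatives
    eps = 1e-8  # guards against division by zero on empty masks
    return {
        "Dice": 2 * tp / (2 * tp + fp + fn + eps),
        "IoU":  tp / (tp + fp + fn + eps),
        "Acc":  (tp + tn) / (tp + tn + fp + fn + eps),
        "Sen":  tp / (tp + fn + eps),  # recall on tumor voxels
        "Spe":  tn / (tn + fp + eps),
    }

# Toy 4x4 example: prediction covers the target plus one extra voxel.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
m = segmentation_metrics(pred, target)
```

In this toy case tp=3, fp=1, fn=0, tn=12, so IoU = 3/4 and Sen = 1.0; Dice weights the overlap more generously than IoU (here 6/7), which is why the two numbers reported above always differ in the same direction.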