Semantic segmentation is a central problem in image understanding. In recent years, research has focused mainly on the design of convolutional network architectures based on deep learning. Existing segmentation networks suffer from several problems, such as an insufficient effective receptive field, weak aggregation of context information, and difficulty in handling segmentation targets at multiple scales. To address these problems, this paper improves existing segmentation networks in terms of attention and multi-scale feature fusion, from the perspective of local-region information in the image. The main research work is as follows:

(1) An image semantic segmentation algorithm based on a region compression matrix and integrated bi-path attention is proposed, which focuses on aggregating pixel information within regions at different scales and on the information association between those regions. The algorithm integrates and exploits regional context information under different scope constraints. Experiments show that attention tuning based on regional feature information effectively improves segmentation performance.

(2) The regional attention module is further improved in three respects: regional compression coding, relational modelling of non-local regional context, and balancing multi-scale and detail features. More effective regional attention tuning and feature fusion are achieved by combining self-attention with multi-scale dilated (atrous) convolution. Experiments show that the improved network yields significantly better segmentation performance.

The main contribution and innovation of this paper is to improve existing segmentation networks, starting from the information association between local regions, on the basis of a region compression matrix and integrated attention. At the same time, to better capture the non-local dependence of regional context information, self-attention computation is introduced in both the compression coding and the dilated convolution branches, which greatly improves the segmentation performance of the network.
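The three building blocks named above (region compression, self-attention over region descriptors, and multi-scale dilated convolution) can be sketched in minimal NumPy form. This is only an illustrative sketch under simplifying assumptions (untrained weights, a single attention head, a single-channel map for the convolution); the function names `region_compress`, `region_self_attention`, `dilated_conv2d`, and `multiscale_context` are hypothetical and do not correspond to the paper's actual implementation:

```python
import numpy as np

def region_compress(feat, grid):
    """Average-pool a (C, H, W) feature map into a (grid*grid, C) matrix
    of region descriptors, one row per spatial region (hypothetical sketch)."""
    C, H, W = feat.shape
    hs, ws = H // grid, W // grid
    crop = feat[:, :hs * grid, :ws * grid]
    regions = crop.reshape(C, grid, hs, grid, ws).mean(axis=(2, 4))
    return regions.reshape(C, grid * grid).T

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_self_attention(regions):
    """Scaled dot-product self-attention over region descriptors,
    modelling non-local dependence between regions (queries = keys = values)."""
    scale = np.sqrt(regions.shape[1])
    attn = softmax(regions @ regions.T / scale, axis=-1)
    return attn @ regions

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 3x3 dilated (atrous) convolution on a 2-D map."""
    H, W = x.shape
    xp = np.pad(x, dilation)  # zero padding equal to the dilation rate
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * xp[i * dilation:i * dilation + H,
                                     j * dilation:j * dilation + W]
    return out

def multiscale_context(x, kernel, dilations=(1, 2, 4)):
    """Fuse context captured at several dilation rates (ASPP-style averaging)."""
    return np.mean([dilated_conv2d(x, kernel, d) for d in dilations], axis=0)
```

In a real network the attention projections and convolution kernels are learned, and the outputs of the region-attention path and the dilated-convolution path are fused with the backbone features rather than simply averaged.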