As an important visual medium of artistic creation, painting uses elements such as color, texture, and composition to convey the background and artistic style of a work. The artistic styles of different painting genres possess unique characteristics that distinguish them from one another. Image style transfer seeks to establish a statistical or mathematical model of this abstract notion of artistic style, thereby enabling computers to automatically convert any image into an image of a specific artistic style. However, a significant problem with existing image style transfer datasets is the presence of semantic differences between the two image domains. For instance, in the vangogh2photo dataset, not only is the content of the Van Gogh art-style domain inconsistent with that of the real-scene photo domain, but their pixel structures are also inconsistent. Existing methods have not addressed this issue, leading to semantic inconsistency between the stylized images and the target-domain images, which hinders the development of image style transfer to a certain extent. To improve the performance of image style transfer models, this paper focuses on the semantic differences between image domains and proposes image style transfer methods that explicitly account for the semantic differentiation of samples within a dataset. The main contributions of this work are as follows:

(1) To address the semantic and feature differences between image domains in current image style transfer datasets, we propose a novel image style transfer method that models information both between and within image domains. The method introduces two key techniques: semantic residual connections, which strengthen the modeling of semantic information between image domains, and attention mechanisms, which model the global information of images, ultimately improving the quality of the generated images. Experimental results on the vangogh2photo and selfie2anime datasets demonstrate that the proposed method outperforms baseline models in both qualitative and quantitative evaluations.

(2) In the method above, however, semantic features are used only to strengthen cycle consistency and cannot guarantee semantic consistency between the stylized images and the target-domain images; in addition, the applicability of existing attention mechanisms to image style transfer tasks is limited. We therefore propose a second image style transfer method based on modeling intra-domain semantic consistency. First, this method combines the advantages of the cycle consistency loss and the semantic residual connection and defines a semantic consistency loss that enforces semantic consistency between the stylized images and the target-domain images (a minimal sketch is given below). Second, we design a dual-stream dilated convolutional attention mechanism tailored to image style transfer; by simultaneously modeling the global information of image features and the weight information between channels, it further strengthens the model's ability to capture intra-domain style features (see the second sketch below). Experimental results on the vangogh2photo and selfie2anime datasets demonstrate that this method achieves significant improvements in both qualitative and quantitative evaluations compared with the first method.
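For concreteness, the following is a minimal sketch of how a semantic consistency loss of this kind could be implemented in PyTorch. The frozen VGG-16 encoder, the choice of layer, and the L1 distance are illustrative assumptions, not the exact definition used in this paper:

    import torch.nn as nn
    from torchvision.models import vgg16

    class SemanticConsistencyLoss(nn.Module):
        """Sketch: compare high-level (semantic) features of two images
        extracted by a frozen VGG-16 encoder; the layer choice and the
        L1 distance are assumptions made for illustration."""

        def __init__(self, layer_idx=23):  # features up to relu4_3; an assumption
            super().__init__()
            encoder = vgg16(weights="IMAGENET1K_V1").features[:layer_idx]
            for p in encoder.parameters():
                p.requires_grad_(False)  # the semantic extractor is not trained
            self.encoder = encoder.eval()
            self.dist = nn.L1Loss()

        def forward(self, stylized, reference):
            # stylized, reference: (N, 3, H, W) tensors normalized for VGG
            return self.dist(self.encoder(stylized), self.encoder(reference))

In a CycleGAN-style training loop, such a term would be added to the usual adversarial and cycle consistency losses with its own weighting coefficient.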
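Similarly, a dual-stream attention block of the kind described in contribution (2) could be sketched as follows: one stream uses a dilated convolution to produce a spatial attention map with an enlarged receptive field (global information), while the other produces squeeze-and-excitation-style channel weights (inter-channel information). All layer sizes, the dilation rate, and the residual fusion are assumptions for illustration:

    import torch.nn as nn

    class DualStreamDilatedAttention(nn.Module):
        """Sketch of a dual-stream attention block: a dilated-convolution
        stream captures global spatial context, and a channel stream
        re-weights feature channels; the two maps are fused with a
        residual connection."""

        def __init__(self, channels, dilation=2, reduction=8):
            super().__init__()
            # Spatial stream: dilated conv -> single-channel attention map.
            self.spatial = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
                nn.Conv2d(channels, 1, 1),
                nn.Sigmoid(),
            )
            # Channel stream: global pooling -> per-channel weights.
            self.channel = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # Apply both attention maps; keep a residual path so the
            # block can fall back to the identity mapping.
            return x + x * self.spatial(x) * self.channel(x)

The residual formulation lets the block refine features without discarding the original activations, which is a common design choice in attention modules for generators.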