{"title":"Contrastive attention and fine-grained feature fusion for artistic style transfer","authors":"Honggang Zhao , Beinan Zhang , Yi-Jun Yang","doi":"10.1016/j.jvcir.2025.104451","DOIUrl":null,"url":null,"abstract":"<div><div>In contemporary image processing, creative image alteration plays a crucial role. Recent studies on style transfer have utilized attention mechanisms to capture the aesthetic and artistic expressions of style images. This method converts style images into tokens by initially assessing attention levels and subsequently employing a decoder to transfer the artistic style of the image. However, this approach often discards many fine-grained style elements due to the low semantic similarity between the original and style images. This may result in discordant or conspicuous artifacts. We propose MccSTN, an innovative framework for style representation and transfer, designed to adapt to contemporary arbitrary image style transfers as a solution to this problem. Specifically, we introduce the Mccformer feature fusion module, which integrates fine-grained features from content images with aesthetic characteristics from style images. Mccformer is utilized to generate feature maps. The target image is then produced by inputting the feature map into the decoder. We consider the relationship between individual styles and the overall style distribution to streamline the model and enhance training efficiency. We present a multi-scale augmented contrast module that leverages a substantial number of image pairs to learn style representations. Code will be posted on <span><span>https://github.com/haizhu12/MccSTN</span><svg><path></path></svg></span></div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"110 ","pages":"Article 104451"},"PeriodicalIF":2.6000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visual Communication and Image Representation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1047320325000653","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Creative image manipulation plays a crucial role in contemporary image processing. Recent studies on style transfer have used attention mechanisms to capture the aesthetic and artistic expression of style images: the style image is converted into tokens, attention is computed over them, and a decoder then transfers the artistic style to the output. However, this approach often discards fine-grained style elements because of the low semantic similarity between the content and style images, which can produce discordant or conspicuous artifacts. To address this problem, we propose MccSTN, a novel framework for style representation and transfer designed for contemporary arbitrary style transfer. Specifically, we introduce Mccformer, a feature-fusion module that integrates fine-grained features from the content image with the aesthetic characteristics of the style image to produce fused feature maps; the target image is then generated by feeding these feature maps into the decoder. To streamline the model and improve training efficiency, we consider the relationship between individual styles and the overall style distribution, and we present a multi-scale augmented contrastive module that learns style representations from a large number of image pairs. Code will be posted at https://github.com/haizhu12/MccSTN
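To make the fusion-then-decode pipeline described above more concrete, the following is a minimal PyTorch sketch of cross-attention feature fusion between content and style tokens followed by a convolutional decoder. It is an illustration of the general mechanism only: all names and hyperparameters (FusionBlock, SimpleDecoder, dim, num_heads, the layer layout) are assumptions for this sketch and are not taken from the paper's actual Mccformer architecture or the multi-scale contrastive module.

```python
# Hypothetical sketch of attention-based content/style feature fusion and decoding.
# Module names, dimensions, and layer choices are illustrative assumptions,
# not the published MccSTN implementation.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    """Fuses fine-grained content features (queries) with style features
    (keys/values) via multi-head cross-attention."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, content_tokens, style_tokens):
        # Content tokens attend to style tokens, injecting aesthetic statistics
        # while keeping the fine-grained content structure in the residual path.
        fused, _ = self.attn(
            query=self.norm1(content_tokens), key=style_tokens, value=style_tokens
        )
        x = content_tokens + fused
        return x + self.mlp(self.norm2(x))


class SimpleDecoder(nn.Module):
    """Upsampling decoder mapping fused token maps back to an RGB image."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, 256, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, tokens, h, w):
        # tokens: (B, H*W, C) -> (B, C, H, W) feature map for convolutional decoding
        b, n, c = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.net(fmap)


if __name__ == "__main__":
    b, h, w, dim = 2, 16, 16, 512
    content = torch.randn(b, h * w, dim)  # tokenized content features
    style = torch.randn(b, h * w, dim)    # tokenized style features
    out = SimpleDecoder(dim)(FusionBlock(dim)(content, style), h, w)
    print(out.shape)  # torch.Size([2, 3, 64, 64])
```

In this kind of design, using content tokens as queries and style tokens as keys/values lets the output retain the spatial layout of the content image while borrowing style statistics; the contrastive learning of style representations mentioned in the abstract is not shown here.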
Journal description:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.