Self-supervised Texture Filtering

IF 7.8 · Region 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING
Hao Jiang, Rongjia Zheng, Yongwei Nie, Chunxia Xiao, Wei-Shi Zheng, Qing Zhang
{"title":"自监督纹理滤波","authors":"Hao Jiang, Rongjia Zheng, Yongwei Nie, Chunxia Xiao, Wei-Shi Zheng, Qing Zhang","doi":"10.1145/3744899","DOIUrl":null,"url":null,"abstract":"Decomposing an image <jats:italic toggle=\"yes\">I</jats:italic> into the combination of structure <jats:italic toggle=\"yes\">S</jats:italic> and texture <jats:italic toggle=\"yes\">T</jats:italic> components is an important problem in computational photography and image analysis. Traditional solutions are basically non-learning based, because it is difficult to construct datasets containing ground-truth decompositions or find effective structure/texture supervisions. In this paper, we present a self-supervised framework for smoothing out textures while maintaining the image structures. At the core of our method is a texture-inversion observation — if structure <jats:italic toggle=\"yes\">S</jats:italic> and texture <jats:italic toggle=\"yes\">T</jats:italic> are well disentangled, then <jats:italic toggle=\"yes\">S</jats:italic> − <jats:italic toggle=\"yes\">T</jats:italic> will produce a texture-inverted image that is symmetric to the input image <jats:italic toggle=\"yes\">I</jats:italic> = <jats:italic toggle=\"yes\">S</jats:italic> + <jats:italic toggle=\"yes\">T</jats:italic> and the two will be visually highly similar, while for other conditions that structure and texture are not effectively separated, the generated texture-inverted images will be less similar to the input. Based on the observation, we propose to learn texture filtering from unlabeled data by encouraging the texture inverted image generated from the filtering output to be visually more similar to the input via contrastive learning. Experiments show that our method can robustly produce high-quality texture smoothing results, and also enables various applications.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"93 1","pages":""},"PeriodicalIF":7.8000,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Self-supervised Texture Filtering\",\"authors\":\"Hao Jiang, Rongjia Zheng, Yongwei Nie, Chunxia Xiao, Wei-Shi Zheng, Qing Zhang\",\"doi\":\"10.1145/3744899\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Decomposing an image <jats:italic toggle=\\\"yes\\\">I</jats:italic> into the combination of structure <jats:italic toggle=\\\"yes\\\">S</jats:italic> and texture <jats:italic toggle=\\\"yes\\\">T</jats:italic> components is an important problem in computational photography and image analysis. Traditional solutions are basically non-learning based, because it is difficult to construct datasets containing ground-truth decompositions or find effective structure/texture supervisions. In this paper, we present a self-supervised framework for smoothing out textures while maintaining the image structures. 
At the core of our method is a texture-inversion observation — if structure <jats:italic toggle=\\\"yes\\\">S</jats:italic> and texture <jats:italic toggle=\\\"yes\\\">T</jats:italic> are well disentangled, then <jats:italic toggle=\\\"yes\\\">S</jats:italic> − <jats:italic toggle=\\\"yes\\\">T</jats:italic> will produce a texture-inverted image that is symmetric to the input image <jats:italic toggle=\\\"yes\\\">I</jats:italic> = <jats:italic toggle=\\\"yes\\\">S</jats:italic> + <jats:italic toggle=\\\"yes\\\">T</jats:italic> and the two will be visually highly similar, while for other conditions that structure and texture are not effectively separated, the generated texture-inverted images will be less similar to the input. Based on the observation, we propose to learn texture filtering from unlabeled data by encouraging the texture inverted image generated from the filtering output to be visually more similar to the input via contrastive learning. Experiments show that our method can robustly produce high-quality texture smoothing results, and also enables various applications.\",\"PeriodicalId\":50913,\"journal\":{\"name\":\"ACM Transactions on Graphics\",\"volume\":\"93 1\",\"pages\":\"\"},\"PeriodicalIF\":7.8000,\"publicationDate\":\"2025-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Graphics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3744899\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Graphics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3744899","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 0

Abstract

Decomposing an image I into the combination of structure S and texture T components is an important problem in computational photography and image analysis. Traditional solutions are basically non-learning based, because it is difficult to construct datasets containing ground-truth decompositions or to find effective structure/texture supervision. In this paper, we present a self-supervised framework for smoothing out textures while maintaining the image structures. At the core of our method is a texture-inversion observation: if structure S and texture T are well disentangled, then S − T will produce a texture-inverted image that is symmetric to the input image I = S + T, and the two will be visually highly similar; when structure and texture are not effectively separated, the generated texture-inverted image will be less similar to the input. Based on this observation, we propose to learn texture filtering from unlabeled data by encouraging, via contrastive learning, the texture-inverted image generated from the filtering output to be visually more similar to the input. Experiments show that our method robustly produces high-quality texture smoothing results and also enables various applications.
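
To make the texture-inversion objective concrete: with T = I − S, the texture-inverted image S − T equals 2S − I, so it mirrors the input about the structure layer, and the better the structure/texture separation, the closer 2S − I stays to I. The sketch below is one plausible way to turn this into a contrastive training signal; it is an illustrative PyTorch sketch based only on the abstract, not the authors' implementation. The filtering network `net`, the pixel-space cosine similarity, and the blur-based negative are assumptions introduced for this example (the paper may measure similarity in a learned feature space and construct negatives differently).

```python
# Minimal sketch of a texture-inversion contrastive objective; NOT the authors'
# released code. Assumptions: a filtering network predicts the structure layer S
# from input I, the texture layer is T = I - S, and similarity is measured on
# flattened pixels.
import torch
import torch.nn.functional as F


def texture_inverted(structure: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    # S - T with T = I - S, i.e. the texture-inverted image 2S - I.
    return 2.0 * structure - image


def inversion_contrastive_loss(structure, image, negatives, tau=0.07):
    """InfoNCE-style loss: the texture-inverted image built from the predicted
    structure should be similar to the input (positive) and dissimilar to
    inverted images built from poorly separated decompositions (negatives)."""
    anchor = texture_inverted(structure, image).flatten(1)         # (B, D)
    sims = [F.cosine_similarity(anchor, image.flatten(1), dim=1)]  # positive pair
    for neg in negatives:                                          # each (B, C, H, W)
        sims.append(F.cosine_similarity(anchor, neg.flatten(1), dim=1))
    logits = torch.stack(sims, dim=1) / tau                        # (B, 1 + K)
    labels = torch.zeros(image.size(0), dtype=torch.long, device=image.device)
    return F.cross_entropy(logits, labels)                         # positive is class 0


# Hypothetical usage: `net` is any image-to-image smoothing network, and the
# negative comes from a crude structure estimate (e.g. a Gaussian blur of I).
# S = net(I)
# S_crude = torchvision.transforms.functional.gaussian_blur(I, kernel_size=21)
# loss = inversion_contrastive_loss(S, I, negatives=[texture_inverted(S_crude, I)])
```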
Source Journal
ACM Transactions on Graphics (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 14.30
Self-citation rate: 25.80%
Articles published: 193
Review time: 12 months
Journal description: ACM Transactions on Graphics (TOG) is a peer-reviewed scientific journal that aims to disseminate the latest findings of note in the field of computer graphics. It has been published since 1982 by the Association for Computing Machinery. Starting in 2003, all papers accepted for presentation at the annual SIGGRAPH conference are printed in a special summer issue of the journal.