S2Mix: Style and Semantic Mix for cross-domain 3D model retrieval
Xinwei Fu, Dan Song, Yue Yang, Yuyi Zhang, Bo Wang
Journal of Visual Communication and Image Representation, Volume 107, Article 104390
DOI: 10.1016/j.jvcir.2025.104390
Published: 2025-01-31
Citation count: 0
Abstract
With the development of deep neural networks and image processing technology, cross-domain 3D model retrieval algorithms based on 2D images have attracted much attention; they exploit visual information from labeled 2D images to assist in processing unlabeled 3D models. Existing unsupervised cross-domain 3D model retrieval algorithms use domain adaptation to narrow the modality gap between 2D images and 3D models. However, these methods overlook the domain-specific style information that distinguishes 2D images from 3D models, which is crucial for reducing the domain distribution discrepancy. To address this issue, this paper proposes a Style and Semantic Mix (S²Mix) network for cross-domain 3D model retrieval, which fuses style information and semantically consistent features across domains. Specifically, we design a style mix module that operates on shallow feature maps, which are closer to the input data, learning 2D image and 3D model features with an intermediate mixed style to narrow the domain distribution discrepancy. In addition, to improve the semantic prediction accuracy on unlabeled samples, a semantic mix module operates on deep features, fusing features from reliable unlabeled 3D model and 2D image samples that share semantic consistency. Our experiments demonstrate the effectiveness of the proposed S²Mix on two commonly used cross-domain 3D model retrieval datasets, MI3DOR-1 and MI3DOR-2.
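The two mixing operations described above can be illustrated with a minimal NumPy sketch. The style mix follows the general idea of interpolating channel-wise feature statistics between domains (as in MixStyle-type methods), and the semantic mix is sketched as a mixup-style fusion of deep features and labels. The function names, the fixed mixing weight `lam`, and all tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def style_mix(x_src, x_tgt, lam=0.5, eps=1e-6):
    """Blend the style (channel-wise mean/std) of shallow feature maps
    from two domains, keeping the content of x_src.

    x_src, x_tgt: arrays of shape (N, C, H, W). lam in [0, 1] controls
    how much of the source style is retained (lam=1 returns x_src).
    """
    # Per-sample, per-channel statistics over spatial dims = "style"
    mu_s = x_src.mean(axis=(2, 3), keepdims=True)
    sig_s = x_src.std(axis=(2, 3), keepdims=True) + eps
    mu_t = x_tgt.mean(axis=(2, 3), keepdims=True)
    sig_t = x_tgt.std(axis=(2, 3), keepdims=True) + eps
    # Interpolate style statistics to form an intermediate domain
    mu_mix = lam * mu_s + (1 - lam) * mu_t
    sig_mix = lam * sig_s + (1 - lam) * sig_t
    # Re-normalize source content with the mixed style
    return sig_mix * (x_src - mu_s) / sig_s + mu_mix


def semantic_mix(f_a, f_b, y_a, y_b, lam=0.5):
    """Mixup-style fusion of deep features and (soft) labels from two
    reliable samples that are predicted to share the same semantics."""
    f_mix = lam * f_a + (1 - lam) * f_b
    y_mix = lam * y_a + (1 - lam) * y_b
    return f_mix, y_mix
```

With `lam=1.0`, `style_mix` reduces to the identity on `x_src`, which makes the interpolation behavior easy to sanity-check; intermediate values of `lam` produce features whose low-level statistics lie between the two domains while the content of the source features is preserved.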
Journal overview:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.