Open-Set Mixed Domain Adaptation via Visual-Linguistic Focal Evolving

Bangzhen Liu; Yangyang Xu; Cheng Xu; Xuemiao Xu; Shengfeng He

IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 9, pp. 8495-8507. Published 2025-03-14. DOI: 10.1109/TCSVT.2025.3551234. URL: https://ieeexplore.ieee.org/document/10926517/
Citations: 0
Abstract
We introduce a new task, Open-Set Mixed Domain Adaptation (OSMDA), which considers the potential mixture of multiple distributions in the target domain, thereby better simulating real-world scenarios. To tackle the semantic ambiguity arising from multiple domains, our key idea is that a linguistic representation can serve as a universal descriptor for samples of the same category across domains. We therefore propose a more practical framework for cross-domain recognition via visual-linguistic guidance. Meanwhile, the presence of multiple domains also poses a new challenge in classifying both known and unknown categories. To address this issue, we further introduce a visual-linguistic focal evolving approach that gradually enhances the classification ability of a known/unknown binary classifier from two aspects. Specifically, we start by identifying highly confident focal samples to expand the pool of known samples, incorporating those from different domains. We then amplify the feature discrepancy between known and unknown samples through dynamic entropy evolving via an adaptive entropy min/max game, enabling us to gradually and accurately identify possible unknown samples. Extensive experiments demonstrate our method's superiority over state-of-the-art approaches in both open-set and open-set mixed domain adaptation.
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.