Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation

Junyu Gao, Xinhong Ma, Changsheng Xu
{"title":"Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation","authors":"Junyu Gao;Xinhong Ma;Changsheng Xu","doi":"10.1109/TIP.2024.3459626","DOIUrl":null,"url":null,"abstract":"Despite the great progress of unsupervised domain adaptation (UDA) with the deep neural networks, current UDA models are opaque and cannot provide promising explanations, limiting their applications in the scenarios that require safe and controllable model decisions. At present, a surge of work focuses on designing deep interpretable methods with adequate data annotations and only a few methods consider the distributional shift problem. Most existing interpretable UDA methods are post-hoc ones, which cannot facilitate the model learning process for performance enhancement. In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL), which could simultaneously interpret and improve the processes of knowledge transfer and decision-making in UDA. To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process. With the learned transferable prototypes, a self-predictive consistent pseudo-label strategy that fuses confidence, predictions, and prototype information, is designed for selecting suitable target samples for pseudo annotations and gradually narrowing down the domain gap. Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-arts. 
Code is available at \n<uri>https://drive.google.com/file/d/1b1EHFghiF1ExD-Cn1HYg75VutfkXWp60/view?usp=sharing</uri>\n.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5284-5297"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10684084/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Despite the great progress of unsupervised domain adaptation (UDA) with deep neural networks, current UDA models are opaque and cannot provide meaningful explanations, limiting their application in scenarios that require safe and controllable model decisions. At present, a surge of work focuses on designing deep interpretable methods with adequate data annotations, and only a few methods consider the distribution-shift problem. Most existing interpretable UDA methods are post-hoc ones, which cannot guide the model learning process toward better performance. In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL), which can simultaneously interpret and improve the processes of knowledge transfer and decision-making in UDA. To achieve this goal, we design a hierarchical prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process. With the learned transferable prototypes, a self-predictive consistent pseudo-label strategy, which fuses confidence, predictions, and prototype information, is designed to select suitable target samples for pseudo annotation and gradually narrow the domain gap. Comprehensive experiments show that the proposed method not only provides effective and intuitive explanations but also outperforms previous state-of-the-art methods. Code is available at https://drive.google.com/file/d/1b1EHFghiF1ExD-Cn1HYg75VutfkXWp60/view?usp=sharing.
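The abstract describes two mechanisms: classification by similarity to learned class prototypes (which yields an interpretable evidence trace), and a pseudo-label strategy that keeps a target sample only when the classifier's prediction is confident and consistent with the nearest-prototype label. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that general idea; the function names, the cosine-similarity choice, and the threshold `tau` are assumptions, not the authors' implementation.

```python
import numpy as np

def prototype_predict(features, prototypes):
    """Classify each sample by its most similar (cosine) class prototype.

    features:   (N, D) array of sample embeddings
    prototypes: (C, D) array, one prototype per class
    Returns (labels, sims); the per-class similarities in `sims` serve
    as an interpretable trace of which concept drove each decision.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T  # (N, C) cosine similarities
    return sims.argmax(axis=1), sims

def select_pseudo_labels(cls_probs, features, prototypes, tau=0.8):
    """Keep a target sample only when the classifier is confident AND
    its prediction agrees with the nearest-prototype label.

    cls_probs: (N, C) softmax outputs of the classifier head
    Returns (keep_mask, pseudo_labels).
    """
    proto_labels, _ = prototype_predict(features, prototypes)
    cls_labels = cls_probs.argmax(axis=1)
    confidence = cls_probs.max(axis=1)
    keep = (confidence >= tau) & (cls_labels == proto_labels)
    return keep, cls_labels
```

In this toy form, the confidence threshold and the prediction/prototype agreement check together play the role of the "self-predictive consistent" filter: only samples where both signals agree are admitted as pseudo-labeled training data, which is how such schemes typically narrow the domain gap incrementally.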