Self-improved multi-view interactive knowledge transfer

IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Saiji Fu, Haonan Wen, Xiaoxiao Wang, Yingjie Tian
{"title":"Self-improved multi-view interactive knowledge transfer","authors":"Saiji Fu ,&nbsp;Haonan Wen ,&nbsp;Xiaoxiao Wang ,&nbsp;Yingjie Tian","doi":"10.1016/j.inffus.2024.102718","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-view learning (MVL) is a promising data fusion technique based on the principles of consensus and complementarity. Despite significant advancements in this field, several challenges persist. First, scalability remains an issue, as many existing approaches are limited to two-view scenarios, making them difficult to extend to more complex multi-view settings. Second, implementing consensus principles in current techniques often requires adding extra terms to the model’s objective function or constraints, leading to increased complexity. Additionally, when applying complementarity principles, most studies focus on pairwise interactions between views, overlooking the benefits of deeper and broader multi-view interactions. To address these challenges, this paper proposes the multi-view interactive knowledge transfer (MVIKT) model, which enhances scalability by effectively managing interactions across multiple views, thereby overcoming the limitations of traditional two-view models. More importantly, MVIKT introduces a novel interactive knowledge transfer strategy that simplifies the application of the consensus principle by eliminating the need for additional terms. By treating margin distances as transferable knowledge and facilitating multiple rounds of interaction, MVIKT uncovers deeper complementary information, thereby improving the overall effectiveness of MVL. Theoretical analysis further supports the MVIKT model, demonstrating that transferring knowledge through margin distance is capable of lowering the upper bound of the generalization error. Extensive experiments across diverse datasets validate MVIKT’s superiority, showing statistically significant improvements over benchmark methods.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"114 ","pages":"Article 102718"},"PeriodicalIF":14.7000,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253524004962","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multi-view learning (MVL) is a promising data fusion technique based on the principles of consensus and complementarity. Despite significant advancements in this field, several challenges persist. First, scalability remains an issue, as many existing approaches are limited to two-view scenarios, making them difficult to extend to more complex multi-view settings. Second, implementing consensus principles in current techniques often requires adding extra terms to the model’s objective function or constraints, leading to increased complexity. Additionally, when applying complementarity principles, most studies focus on pairwise interactions between views, overlooking the benefits of deeper and broader multi-view interactions. To address these challenges, this paper proposes the multi-view interactive knowledge transfer (MVIKT) model, which enhances scalability by effectively managing interactions across multiple views, thereby overcoming the limitations of traditional two-view models. More importantly, MVIKT introduces a novel interactive knowledge transfer strategy that simplifies the application of the consensus principle by eliminating the need for additional terms. By treating margin distances as transferable knowledge and facilitating multiple rounds of interaction, MVIKT uncovers deeper complementary information, thereby improving the overall effectiveness of MVL. Theoretical analysis further supports the MVIKT model, demonstrating that transferring knowledge through margin distance is capable of lowering the upper bound of the generalization error. Extensive experiments across diverse datasets validate MVIKT’s superiority, showing statistically significant improvements over benchmark methods.
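
The abstract describes margin distances being treated as transferable knowledge that views exchange over multiple rounds of interaction. The following Python sketch illustrates one plausible reading of that idea, assuming linear SVM base learners, binary labels, and a simple feature-augmentation transfer scheme; the function name, parameters, and update scheme are illustrative assumptions, not the authors' MVIKT formulation.

```python
# Hypothetical sketch: each view trains its own margin-based classifier, and the
# signed margin distances (decision values) it produces are shared with the other
# views as extra "knowledge" features over several interaction rounds.
import numpy as np
from sklearn.svm import LinearSVC

def interactive_knowledge_transfer(views, y, n_rounds=3, C=1.0):
    """views: list of (n_samples, n_features_v) arrays, one per view; y: binary labels."""
    n_views = len(views)
    # Margin-distance "knowledge" each view exposes to its peers (zero before round 1).
    knowledge = [np.zeros(len(y)) for _ in range(n_views)]
    models = [None] * n_views

    for _ in range(n_rounds):
        new_knowledge = []
        for v in range(n_views):
            # Augment this view's features with the margins produced by all other views.
            peers = [knowledge[u] for u in range(n_views) if u != v]
            X_aug = np.column_stack([views[v]] + peers)
            clf = LinearSVC(C=C, max_iter=10000).fit(X_aug, y)
            models[v] = clf
            # Signed distance to the separating hyperplane = transferable knowledge.
            new_knowledge.append(clf.decision_function(X_aug))
        # Synchronous update: all views refresh their shared knowledge after each round.
        knowledge = new_knowledge
    return models, knowledge
```

Each entry of `views` would hold the same samples described by a different feature set. Note that prediction for any view would again require the peer margins, which is why the fitted models and the final knowledge are returned together in this sketch.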
Source Journal
Information Fusion (Engineering & Technology / Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.