Incomplete multi-view partial multi-label classification via deep semantic structure preservation

IF 5.0 | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | CAS Zone 2 (Computer Science)
Chaoran Li, Xiyin Wu, Pai Peng, Zhuhong Zhang, Xiaohuan Lu
{"title":"Incomplete multi-view partial multi-label classification via deep semantic structure preservation","authors":"Chaoran Li, Xiyin Wu, Pai Peng, Zhuhong Zhang, Xiaohuan Lu","doi":"10.1007/s40747-024-01562-5","DOIUrl":null,"url":null,"abstract":"<p>Recent advances in multi-view multi-label learning are often hampered by the prevalent challenges of incomplete views and missing labels, common in real-world data due to uncertainties in data collection and manual annotation. These challenges restrict the capacity of the model to fully utilize the diverse semantic information of each sample, posing significant barriers to effective learning. Despite substantial scholarly efforts, many existing methods inadequately capture the depth of semantic information, focusing primarily on shallow feature extractions that fail to maintain semantic consistency. To address these shortcomings, we propose a novel Deep semantic structure-preserving (SSP) model that effectively tackles both incomplete views and missing labels. SSP innovatively incorporates a graph constraint learning (GCL) scheme to ensure the preservation of semantic structure throughout the feature extraction process across different views. Additionally, the SSP integrates a pseudo-labeling self-paced learning (PSL) strategy to address the often-overlooked issue of missing labels, enhancing the classification accuracy while preserving the distribution structure of data. The SSP model creates a unified framework that synergistically employs GCL and PSL to maintain the integrity of semantic structural information during both feature extraction and classification phases. Extensive evaluations across five real datasets demonstrate that the SSP method outperforms existing approaches, including lrMMC, MVL-IV, MvEL, iMSF, iMvWL, NAIML, and DD-IMvMLC-net. It effectively mitigates the impacts of data incompleteness and enhances semantic representation fidelity.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"32 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01562-5","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recent advances in multi-view multi-label learning are often hampered by the prevalent challenges of incomplete views and missing labels, common in real-world data due to uncertainties in data collection and manual annotation. These challenges restrict the capacity of the model to fully utilize the diverse semantic information of each sample, posing significant barriers to effective learning. Despite substantial scholarly efforts, many existing methods inadequately capture the depth of semantic information, focusing primarily on shallow feature extractions that fail to maintain semantic consistency. To address these shortcomings, we propose a novel Deep semantic structure-preserving (SSP) model that effectively tackles both incomplete views and missing labels. SSP innovatively incorporates a graph constraint learning (GCL) scheme to ensure the preservation of semantic structure throughout the feature extraction process across different views. Additionally, the SSP integrates a pseudo-labeling self-paced learning (PSL) strategy to address the often-overlooked issue of missing labels, enhancing the classification accuracy while preserving the distribution structure of data. The SSP model creates a unified framework that synergistically employs GCL and PSL to maintain the integrity of semantic structural information during both feature extraction and classification phases. Extensive evaluations across five real datasets demonstrate that the SSP method outperforms existing approaches, including lrMMC, MVL-IV, MvEL, iMSF, iMvWL, NAIML, and DD-IMvMLC-net. It effectively mitigates the impacts of data incompleteness and enhances semantic representation fidelity.

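The abstract names two mechanisms, a graph constraint learning (GCL) scheme that preserves semantic structure during feature extraction and a pseudo-labeling self-paced learning (PSL) strategy for missing labels, but does not give their formulations. The sketch below is only an illustration of what such components commonly look like, assuming a standard graph-Laplacian smoothness term and a hard self-paced weighting of pseudo-label losses; the function names, the toy affinity matrix, and the `age` threshold are assumptions made for the example and are not taken from the paper.

```python
# Illustrative sketch only: the SSP paper's actual losses are not given in the
# abstract. Shown here are (a) a generic graph-Laplacian regularizer of the
# kind a graph-constraint scheme might use to keep learned embeddings faithful
# to the semantic structure of the original features, and (b) a simple
# self-paced weighting of pseudo-labelled entries.
import torch
import torch.nn.functional as F


def graph_structure_loss(z: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Penalize embeddings z (n x d) whose pairwise distances disagree with a
    precomputed affinity matrix w (n x n) built from the original features."""
    dist = torch.cdist(z, z, p=2) ** 2          # pairwise squared distances
    # Laplacian smoothness term: sum_ij w_ij * ||z_i - z_j||^2 (normalized).
    return (w * dist).sum() / (z.shape[0] ** 2)


def self_paced_pseudo_label_loss(logits: torch.Tensor,
                                 pseudo_labels: torch.Tensor,
                                 age: float) -> torch.Tensor:
    """Weight each pseudo-labelled entry by a hard self-paced rule: entries
    whose loss exceeds the threshold `age` are ignored, so confident, easy
    labels dominate early training and harder ones are admitted later."""
    per_entry = F.binary_cross_entropy_with_logits(
        logits, pseudo_labels, reduction="none")
    weights = (per_entry < age).float()          # hard self-paced weights
    return (weights * per_entry).sum() / weights.sum().clamp(min=1.0)


if __name__ == "__main__":
    n, d, c = 32, 16, 5                          # samples, embed dim, labels
    z = torch.randn(n, d, requires_grad=True)    # stand-in view embeddings
    w = torch.rand(n, n)
    w = (w + w.T) / 2                            # symmetric toy affinity graph
    logits = torch.randn(n, c, requires_grad=True)
    pseudo = (torch.rand(n, c) > 0.5).float()    # toy pseudo-label matrix
    loss = graph_structure_loss(z, w) + self_paced_pseudo_label_loss(
        logits, pseudo, age=1.0)
    loss.backward()                              # both terms are differentiable
    print(float(loss))
```

In a multi-view setting, a term like `graph_structure_loss` would typically be applied per view, with the affinity graph built from that view's originally observed features, so that structure preservation is enforced even when some views are incomplete.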

Source journal: Complex & Intelligent Systems (Computer Science, Artificial Intelligence)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.