Model Rectification With Simultaneous Incremental Feature and Partial Label Set.

Impact Factor: 18.6
Xijia Tang, Chao Xu, Chenping Hou
{"title":"Model Rectification With Simultaneous Incremental Feature and Partial Label Set.","authors":"Xijia Tang, Chao Xu, Chenping Hou","doi":"10.1109/TPAMI.2025.3600033","DOIUrl":null,"url":null,"abstract":"<p><p>Traditional classification problems assume that features and labels are fixed. However, this assumption is easily violated in open environments. For example, the exponential growth of web pages leads to an expanding feature space with the accumulation of keywords. At the same time, rapid refresh makes it difficult to obtain accurate labels for web pages, often resulting in rough annotations containing potentially correct labels, i.e., partial label set. In such cases, the coupling between the incremental feature space and the partial label set introduces more complex real-world challenges, which deserve attention but have not been fully explored. In this paper, we address this issue by introducing a novel incremental learning approach with Simultaneous Incremental Feature and Partial Label (SIFPL). SIFPL models the data evolution in dynamic and open environments in a two-stage way, consisting of a previous stage and an adapting stage, to deal with the associated challenges. Specifically, to ensure the reusability of the model during adaptation, we impose classifier consistency constraints to enhance the stability of the current model. This constraint leverages historical information from the previous stage to improve the generalization ability of the current model, providing a reliable foundation for further refining the model with new features. Regarding label disambiguation, we filter out incorrect candidate labels based on the principle of minimizing classifier loss, ensuring that the new features and labels effectively support the model's adaptation to the incremental feature space, thereby further refining its performance. 
Furthermore, we also provide a solid theoretical analysis of the model's generalization bounds, which can validate the efficiency of model inheritance. Experiments on benchmark and real-world datasets validate that the proposed method achieves better accuracy performance than the baseline methods in most cases.</p>","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"PP ","pages":""},"PeriodicalIF":18.6000,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TPAMI.2025.3600033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Traditional classification problems assume that features and labels are fixed. However, this assumption is easily violated in open environments. For example, the exponential growth of web pages leads to an expanding feature space as keywords accumulate. At the same time, rapid refresh makes it difficult to obtain accurate labels for web pages, often resulting in rough annotations that contain the potentially correct label, i.e., a partial label set. In such cases, the coupling between the incremental feature space and the partial label set introduces more complex real-world challenges, which deserve attention but have not been fully explored. In this paper, we address this issue by introducing a novel incremental learning approach with Simultaneous Incremental Feature and Partial Label (SIFPL). SIFPL models the data evolution in dynamic and open environments in two stages, a previous stage and an adapting stage, to deal with the associated challenges. Specifically, to ensure the reusability of the model during adaptation, we impose classifier consistency constraints to enhance the stability of the current model. This constraint leverages historical information from the previous stage to improve the generalization ability of the current model, providing a reliable foundation for further refining the model with new features. Regarding label disambiguation, we filter out incorrect candidate labels based on the principle of minimizing classifier loss, ensuring that the new features and labels effectively support the model's adaptation to the incremental feature space, thereby further refining its performance. Furthermore, we provide a theoretical analysis of the model's generalization bounds, which validates the effectiveness of model inheritance. Experiments on benchmark and real-world datasets confirm that the proposed method achieves better accuracy than the baseline methods in most cases.
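The label-disambiguation principle described in the abstract — keeping, for each sample, the candidate label that minimizes the classifier loss — can be illustrated with a minimal sketch. This is not the authors' SIFPL implementation; the function name and the choice of a softmax cross-entropy loss are assumptions made purely for illustration.

```python
# Hypothetical sketch of minimum-loss label disambiguation for
# partial-label data: each sample carries a set of candidate labels,
# and we select the candidate the current classifier finds cheapest.
import numpy as np

def disambiguate(scores, candidate_masks):
    """Pick, for each sample, the candidate label with the smallest
    classifier loss (negative log-probability here).

    scores:          (n, c) raw classifier scores.
    candidate_masks: (n, c) boolean, True where a label is a candidate.
    Returns an (n,) array of disambiguated label indices.
    """
    # Stable softmax over classes for a cross-entropy-style loss.
    z = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    loss = -np.log(probs + 1e-12)    # per-class loss if that class were true
    loss[~candidate_masks] = np.inf  # non-candidates can never be selected
    return loss.argmin(axis=1)       # minimum-loss candidate per sample

if __name__ == "__main__":
    scores = np.array([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 1.4]])
    masks = np.array([[True, True, False],   # true label lies in {0, 1}
                      [False, True, True]])  # true label lies in {1, 2}
    print(disambiguate(scores, masks))       # [0 1]
```

In an incremental-feature setting, a scheme like this would be rerun after the classifier is extended to the new feature space, so that disambiguated labels and new features refine each other iteratively.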
