Domain generalization for image classification based on simplified self ensemble learning.

IF 2.6 · CAS Tier 3, multidisciplinary journal · Q1 MULTIDISCIPLINARY SCIENCES
PLoS ONE · Pub Date: 2025-04-04 · eCollection Date: 2025-01-01 · DOI: 10.1371/journal.pone.0320300
Zhenkai Qin, Xinlu Guo, Jun Li, Yue Chen
Citations: 0

Abstract



Domain generalization seeks to acquire knowledge from limited source data and apply it to an unknown target domain. Current approaches primarily tackle this challenge by attempting to eliminate the differences between domains. However, as cross-domain data evolves, the discrepancies between domains grow increasingly intricate and difficult to manage, rendering effective knowledge transfer across multiple domains a persistent challenge. While existing methods concentrate on minimizing domain discrepancies, they frequently encounter difficulties in maintaining effectiveness when confronted with high data complexity. In this paper, we present an approach that transcends merely eliminating domain discrepancies by enhancing the model's adaptability to improve its performance in unseen domains. Specifically, we frame the problem as an optimization process with the objective of minimizing a weighted loss function that balances cross-domain discrepancies and sample complexity. Our proposed self-ensemble learning framework, which utilizes a single feature extractor, simplifies this process by alternately training multiple classifiers with shared feature extractors. The introduction of focal loss and complex sample loss weight further fine-tunes the model's sensitivity to hard-to-learn instances, enhancing generalization to difficult samples. Finally, a dynamic loss adaptive weighted voting strategy ensures more accurate predictions across diverse domains. Experimental results on three public benchmark datasets (OfficeHome, PACS, and VLCS) demonstrate that our proposed algorithm achieves an improvement of up to 3.38% over existing methods in terms of generalization performance, particularly in complex and diverse real-world scenarios, such as autonomous driving and medical image analysis. These results highlight the practical utility of our approach in environments where cross-domain generalization is crucial for system reliability and safety.
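A minimal plain-Python sketch of two ingredients the abstract names: focal loss, which down-weights easy samples so training concentrates on hard-to-learn instances, and a loss-adaptive weighted vote over classifier heads that share one feature extractor. The function names and the inverse-loss weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import math


def focal_loss(p_true: float, gamma: float = 2.0) -> float:
    """Focal loss for a single sample, given the probability the
    model assigns to the true class. The (1 - p)^gamma factor
    shrinks the loss of easy samples (p_true near 1), so hard
    samples dominate the gradient. gamma is a tunable focus knob."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)


def weighted_vote(head_probs: list[list[float]],
                  head_losses: list[float]) -> int:
    """Fuse per-head class probabilities into one prediction.
    Each classifier head is weighted inversely to its recent loss
    (an assumed stand-in for the paper's dynamic loss-adaptive
    weighting): lower loss -> larger vote. Returns the argmax class."""
    weights = [1.0 / (loss + 1e-8) for loss in head_losses]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    n_classes = len(head_probs[0])
    fused = [
        sum(w * probs[c] for w, probs in zip(weights, head_probs))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=lambda c: fused[c])


# Example: two heads disagree; the low-loss head (0.2) outvotes
# the high-loss head (2.0), so class 0 wins.
prediction = weighted_vote([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]], [0.2, 2.0])
```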

Source journal: PLoS ONE (Biology)
CiteScore: 6.20
Self-citation rate: 5.40%
Articles published: 14242
Review turnaround: 3.7 months
Journal introduction: PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage