{"title":"基于简化自集成学习的图像分类领域泛化。","authors":"Zhenkai Qin, Xinlu Guo, Jun Li, Yue Chen","doi":"10.1371/journal.pone.0320300","DOIUrl":null,"url":null,"abstract":"<p><p>Domain generalization seeks to acquire knowledge from limited source data and apply it to an unknown target domain. Current approaches primarily tackle this challenge by attempting to eliminate the differences between domains. However, as cross-domain data evolves, the discrepancies between domains grow increasingly intricate and difficult to manage, rendering effective knowledge transfer across multiple domains a persistent challenge. While existing methods concentrate on minimizing domain discrepancies, they frequently encounter difficulties in maintaining effectiveness when confronted with high data complexity. In this paper, we present an approach that transcends merely eliminating domain discrepancies by enhancing the model's adaptability to improve its performance in unseen domains. Specifically, we frame the problem as an optimization process with the objective of minimizing a weighted loss function that balances cross-domain discrepancies and sample complexity. Our proposed self-ensemble learning framework, which utilizes a single feature extractor, simplifies this process by alternately training multiple classifiers with shared feature extractors. The introduction of focal loss and complex sample loss weight further fine-tunes the model's sensitivity to hard-to-learn instances, enhancing generalization to difficult samples. Finally, a dynamic loss adaptive weighted voting strategy ensures more accurate predictions across diverse domains. Experimental results on three public benchmark datasets (OfficeHome, PACS, and VLCS) demonstrate that our proposed algorithm achieves an improvement of up to 3 . 38% over existing methods in terms of generalization performance, particularly in complex and diverse real-world scenarios, such as autonomous driving and medical image analysis. 
These results highlight the practical utility of our approach in environments where cross-domain generalization is crucial for system reliability and safety.</p>","PeriodicalId":20189,"journal":{"name":"PLoS ONE","volume":"20 4","pages":"e0320300"},"PeriodicalIF":2.6000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11970687/pdf/","citationCount":"0","resultStr":"{\"title\":\"Domain generalization for image classification based on simplified self ensemble learning.\",\"authors\":\"Zhenkai Qin, Xinlu Guo, Jun Li, Yue Chen\",\"doi\":\"10.1371/journal.pone.0320300\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Domain generalization seeks to acquire knowledge from limited source data and apply it to an unknown target domain. Current approaches primarily tackle this challenge by attempting to eliminate the differences between domains. However, as cross-domain data evolves, the discrepancies between domains grow increasingly intricate and difficult to manage, rendering effective knowledge transfer across multiple domains a persistent challenge. While existing methods concentrate on minimizing domain discrepancies, they frequently encounter difficulties in maintaining effectiveness when confronted with high data complexity. In this paper, we present an approach that transcends merely eliminating domain discrepancies by enhancing the model's adaptability to improve its performance in unseen domains. Specifically, we frame the problem as an optimization process with the objective of minimizing a weighted loss function that balances cross-domain discrepancies and sample complexity. Our proposed self-ensemble learning framework, which utilizes a single feature extractor, simplifies this process by alternately training multiple classifiers with shared feature extractors. 
The introduction of focal loss and complex sample loss weight further fine-tunes the model's sensitivity to hard-to-learn instances, enhancing generalization to difficult samples. Finally, a dynamic loss adaptive weighted voting strategy ensures more accurate predictions across diverse domains. Experimental results on three public benchmark datasets (OfficeHome, PACS, and VLCS) demonstrate that our proposed algorithm achieves an improvement of up to 3 . 38% over existing methods in terms of generalization performance, particularly in complex and diverse real-world scenarios, such as autonomous driving and medical image analysis. These results highlight the practical utility of our approach in environments where cross-domain generalization is crucial for system reliability and safety.</p>\",\"PeriodicalId\":20189,\"journal\":{\"name\":\"PLoS ONE\",\"volume\":\"20 4\",\"pages\":\"e0320300\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-04-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11970687/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PLoS ONE\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.1371/journal.pone.0320300\",\"RegionNum\":3,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLoS ONE","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1371/journal.pone.0320300","RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MULTIDISCIPLINARY 
SCIENCES","Score":null,"Total":0}
Domain generalization for image classification based on simplified self ensemble learning.
Domain generalization seeks to acquire knowledge from limited source data and apply it to an unknown target domain. Current approaches primarily tackle this challenge by attempting to eliminate the differences between domains. However, as cross-domain data evolves, the discrepancies between domains grow increasingly intricate and difficult to manage, rendering effective knowledge transfer across multiple domains a persistent challenge. While existing methods concentrate on minimizing domain discrepancies, they frequently encounter difficulties in maintaining effectiveness when confronted with high data complexity. In this paper, we present an approach that transcends merely eliminating domain discrepancies by enhancing the model's adaptability to improve its performance in unseen domains. Specifically, we frame the problem as an optimization process with the objective of minimizing a weighted loss function that balances cross-domain discrepancies and sample complexity. Our proposed self-ensemble learning framework, which utilizes a single feature extractor, simplifies this process by alternately training multiple classifiers with shared feature extractors. The introduction of focal loss and complex sample loss weight further fine-tunes the model's sensitivity to hard-to-learn instances, enhancing generalization to difficult samples. Finally, a dynamic loss adaptive weighted voting strategy ensures more accurate predictions across diverse domains. Experimental results on three public benchmark datasets (OfficeHome, PACS, and VLCS) demonstrate that our proposed algorithm achieves an improvement of up to 3.38% over existing methods in terms of generalization performance, particularly in complex and diverse real-world scenarios, such as autonomous driving and medical image analysis. These results highlight the practical utility of our approach in environments where cross-domain generalization is crucial for system reliability and safety.
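The abstract names three mechanisms: a focal loss that emphasizes hard-to-learn samples, multiple classifier heads trained over a shared feature extractor, and a loss-adaptive weighted vote across those heads at prediction time. The NumPy sketch below illustrates the loss and voting pieces only; the function names, the temperature parameter, and the inverse-loss (softmax over negative losses) weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0):
    """Mean focal loss: -alpha * (1 - p_t)^gamma * log(p_t).

    probs:   (N, C) predicted class probabilities per sample
    targets: (N,)   integer ground-truth labels
    Well-classified samples (p_t near 1) are down-weighted by the
    (1 - p_t)^gamma factor, so hard samples dominate the gradient.
    """
    p_t = probs[np.arange(len(targets)), targets]
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12)))

def loss_weighted_vote(all_probs, losses, temperature=1.0):
    """Combine K classifier heads, weighting low-loss heads more.

    all_probs: (K, N, C) probabilities from K heads sharing one extractor
    losses:    (K,)      recent loss of each head; lower -> larger weight
    Weights are a softmax over negative losses, so they sum to 1 and the
    result is a valid probability distribution per sample.
    """
    w = np.exp(-np.asarray(losses, dtype=float) / temperature)
    w = w / w.sum()
    return np.tensordot(w, all_probs, axes=1)  # (N, C) weighted average
```

As a usage example, two heads with losses 0.5 and 1.0 would contribute unequal weights (roughly 0.62 vs 0.38 at temperature 1.0), so the more reliable head dominates the ensemble prediction.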
About the journal:
PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online; authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage