{"title":"对抗鲁棒无监督域自适应","authors":"Lianghe Shi, Weiwei Liu","doi":"10.1016/j.artint.2025.104383","DOIUrl":null,"url":null,"abstract":"<div><div>Unsupervised domain adaptation (UDA) has been successfully applied in many contexts with domain shifts. However, we find that existing UDA methods are vulnerable to adversarial attacks. A direct modification of the existing UDA methods to improve adversarial robustness is to feed the algorithms with adversarial source examples. However, empirical results show that traditional discrepancy fails to measure the distance between adversarial examples, leading to poor alignment between adversarial examples of source and target domains and inefficient transfer of the robustness from source domain to target domain. And the traditional theoretical bounds do not always hold in adversarial scenarios. Accordingly, we first propose a novel adversarial discrepancy (AD) to narrow the gap between adversarial robustness and UDA. Based on AD, this paper provides a generalization error bound for adversarially robust unsupervised domain adaptation through the lens of Rademacher complexity, theoretically demonstrating that the expected adversarial target error can be bounded by empirical adversarial source error and AD. We also present the upper bounds of Rademacher complexity, with a particular focus on linear models and multi-layer neural networks under <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mi>r</mi></mrow></msub></math></span> attack (<span><math><mi>r</mi><mo>≥</mo><mn>1</mn></math></span>). Inspired by this theory, we go on to develop an adversarially robust algorithm for UDA. We further conduct comprehensive experiments to support our theory and validate the robustness improvement of our proposed method on challenging domain adaptation tasks.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"347 ","pages":"Article 104383"},"PeriodicalIF":4.6000,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarially robust unsupervised domain adaptation\",\"authors\":\"Lianghe Shi, Weiwei Liu\",\"doi\":\"10.1016/j.artint.2025.104383\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Unsupervised domain adaptation (UDA) has been successfully applied in many contexts with domain shifts. However, we find that existing UDA methods are vulnerable to adversarial attacks. A direct modification of the existing UDA methods to improve adversarial robustness is to feed the algorithms with adversarial source examples. However, empirical results show that traditional discrepancy fails to measure the distance between adversarial examples, leading to poor alignment between adversarial examples of source and target domains and inefficient transfer of the robustness from source domain to target domain. And the traditional theoretical bounds do not always hold in adversarial scenarios. Accordingly, we first propose a novel adversarial discrepancy (AD) to narrow the gap between adversarial robustness and UDA. Based on AD, this paper provides a generalization error bound for adversarially robust unsupervised domain adaptation through the lens of Rademacher complexity, theoretically demonstrating that the expected adversarial target error can be bounded by empirical adversarial source error and AD. 
We also present the upper bounds of Rademacher complexity, with a particular focus on linear models and multi-layer neural networks under <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mi>r</mi></mrow></msub></math></span> attack (<span><math><mi>r</mi><mo>≥</mo><mn>1</mn></math></span>). Inspired by this theory, we go on to develop an adversarially robust algorithm for UDA. We further conduct comprehensive experiments to support our theory and validate the robustness improvement of our proposed method on challenging domain adaptation tasks.</div></div>\",\"PeriodicalId\":8434,\"journal\":{\"name\":\"Artificial Intelligence\",\"volume\":\"347 \",\"pages\":\"Article 104383\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S000437022500102X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S000437022500102X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Unsupervised domain adaptation (UDA) has been successfully applied in many contexts with domain shifts. However, we find that existing UDA methods are vulnerable to adversarial attacks. A direct way to modify existing UDA methods for better adversarial robustness is to feed the algorithms with adversarial source examples. However, empirical results show that traditional discrepancy measures fail to capture the distance between adversarial examples, leading to poor alignment between adversarial examples of the source and target domains and inefficient transfer of robustness from the source domain to the target domain. Moreover, traditional theoretical bounds do not always hold in adversarial scenarios. Accordingly, we first propose a novel adversarial discrepancy (AD) to narrow the gap between adversarial robustness and UDA. Based on AD, this paper provides a generalization error bound for adversarially robust unsupervised domain adaptation through the lens of Rademacher complexity, theoretically demonstrating that the expected adversarial target error can be bounded by the empirical adversarial source error and AD. We also present upper bounds on the Rademacher complexity, with a particular focus on linear models and multi-layer neural networks under ℓ_r attack (r ≥ 1). Inspired by this theory, we go on to develop an adversarially robust algorithm for UDA. We further conduct comprehensive experiments to support our theory and validate the robustness improvement of our proposed method on challenging domain adaptation tasks.
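To make the theoretical claim concrete, a schematic form of such a bound, written in the style of standard domain-adaptation generalization bounds, is

\[
\mathcal{E}_T^{\mathrm{adv}}(h) \;\le\; \widehat{\mathcal{E}}_S^{\mathrm{adv}}(h) \;+\; \mathrm{AD}(\mathcal{D}_S,\mathcal{D}_T) \;+\; O\!\big(\mathfrak{R}_n(\mathcal{H})\big) \;+\; \lambda ,
\]

where \mathcal{E}_T^{\mathrm{adv}} denotes the expected adversarial target error, \widehat{\mathcal{E}}_S^{\mathrm{adv}} the empirical adversarial source error, AD the adversarial discrepancy between source and target distributions, \mathfrak{R}_n(\mathcal{H}) the Rademacher complexity of the hypothesis class on n samples, and \lambda a residual joint-error term. This is only a schematic reading of the abstract; the paper's exact statement, constants, and residual term may differ.

The algorithmic idea of training on adversarial source examples while aligning them with the target domain can be illustrated with a minimal PyTorch sketch. Everything here is an illustrative assumption rather than the paper's implementation: the ℓ_∞ PGD attack stands in for a general ℓ_r attack, pseudo-labels are used as a heuristic to attack the unlabeled target batch, and a simple mean-feature distance stands in for the proposed AD term.

import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L_inf PGD: ascend the cross-entropy loss within an eps-ball around x
    # (the L_inf case is one instance of the l_r attacks, r >= 1).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def feature_discrepancy(f_src, f_tgt):
    # Mean-feature distance: a crude surrogate for the paper's adversarial discrepancy (AD).
    return (f_src.mean(dim=0) - f_tgt.mean(dim=0)).norm(p=2)

class Net(nn.Module):
    def __init__(self, d_in=32, d_hid=64, n_cls=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.head = nn.Linear(d_hid, n_cls)
    def forward(self, x):
        return self.head(self.features(x))

model = Net()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Random tensors stand in for one labeled source batch and one unlabeled target batch.
x_src, y_src = torch.rand(16, 32), torch.randint(0, 10, (16,))
x_tgt = torch.rand(16, 32)

x_src_adv = pgd_attack(model, x_src, y_src)                    # adversarial source examples
y_tgt_pseudo = model(x_tgt).argmax(dim=1)                      # heuristic pseudo-labels for the attack
x_tgt_adv = pgd_attack(model, x_tgt, y_tgt_pseudo)

robust_loss = F.cross_entropy(model(x_src_adv), y_src)         # empirical adversarial source error
align_loss = feature_discrepancy(model.features(x_src_adv),
                                 model.features(x_tgt_adv))    # align adversarial source/target features
loss = robust_loss + 1.0 * align_loss                          # trade-off weight is a free hyperparameter

opt.zero_grad()
loss.backward()
opt.step()

In a real pipeline, the mean-feature surrogate and the fixed trade-off weight would be replaced by the paper's AD estimator and a tuned hyperparameter, and the toy batches by actual source and target loaders.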
Journal overview:
The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. Additionally, the journal accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ also accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.