Da Guo;Zhengjie Feng;Zhen Zhang;Fazlullah Khan;Chien-Ming Chen;Ruibin Bai;Marwan Omar;Saru Kumar
{"title":"逆向攻击对 6G 消费电子产品中人工智能模型的因果影响","authors":"Da Guo;Zhengjie Feng;Zhen Zhang;Fazlullah Khan;Chien-Ming Chen;Ruibin Bai;Marwan Omar;Saru Kumar","doi":"10.1109/TCE.2024.3443328","DOIUrl":null,"url":null,"abstract":"Adversarial examples are security risks in the implementation of artificial intelligence (AI) in 6G Consumer Electronics. Deep learning models are highly susceptible to adversarial attacks, and defense against such attacks is critical to the safety of 6G Consumer Electronics. However, there remains a lack of effective defensive mechanisms against adversarial attacks in the realm of deep learning. The primary issue lies in the fact that it is not yet understood how adversarial examples can deceive deep learning models. The potential operation mechanism of adversarial examples has not been fully explored, which constitutes a bottleneck in adversarial attack defense. This paper focuses on causality in adversarial examples such as combining the adversarial attack algorithms with the causal inference methods. Specifically, we will use a variety of adversarial attack algorithms to generate adversarial samples, and analyze the causal relationship between adversarial samples and original samples through causal inference. At the same time, we will compare and analyze the causal effect between them to reveal the mechanism and discover the reason of miscalculating. The expected contributions of this paper include: (1) Reveal the mechanism and influencing factors of counterattack, and provide theoretical support for the security of deep learning models; (2) Propose a defense strategy based on causal inference method to provide a practical method for the defense of deep learning models; (3) Provide new ideas and methods for adversarial attack defense in deep learning models.","PeriodicalId":13208,"journal":{"name":"IEEE Transactions on Consumer Electronics","volume":"70 3","pages":"5804-5813"},"PeriodicalIF":4.3000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Causal Effects of Adversarial Attacks on AI Models in 6G Consumer Electronics\",\"authors\":\"Da Guo;Zhengjie Feng;Zhen Zhang;Fazlullah Khan;Chien-Ming Chen;Ruibin Bai;Marwan Omar;Saru Kumar\",\"doi\":\"10.1109/TCE.2024.3443328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples are security risks in the implementation of artificial intelligence (AI) in 6G Consumer Electronics. Deep learning models are highly susceptible to adversarial attacks, and defense against such attacks is critical to the safety of 6G Consumer Electronics. However, there remains a lack of effective defensive mechanisms against adversarial attacks in the realm of deep learning. The primary issue lies in the fact that it is not yet understood how adversarial examples can deceive deep learning models. The potential operation mechanism of adversarial examples has not been fully explored, which constitutes a bottleneck in adversarial attack defense. This paper focuses on causality in adversarial examples such as combining the adversarial attack algorithms with the causal inference methods. Specifically, we will use a variety of adversarial attack algorithms to generate adversarial samples, and analyze the causal relationship between adversarial samples and original samples through causal inference. 
At the same time, we will compare and analyze the causal effect between them to reveal the mechanism and discover the reason of miscalculating. The expected contributions of this paper include: (1) Reveal the mechanism and influencing factors of counterattack, and provide theoretical support for the security of deep learning models; (2) Propose a defense strategy based on causal inference method to provide a practical method for the defense of deep learning models; (3) Provide new ideas and methods for adversarial attack defense in deep learning models.\",\"PeriodicalId\":13208,\"journal\":{\"name\":\"IEEE Transactions on Consumer Electronics\",\"volume\":\"70 3\",\"pages\":\"5804-5813\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Consumer Electronics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10648590/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Consumer Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10648590/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Causal Effects of Adversarial Attacks on AI Models in 6G Consumer Electronics
Adversarial examples pose security risks to the deployment of artificial intelligence (AI) in 6G Consumer Electronics. Deep learning models are highly susceptible to adversarial attacks, and defending against such attacks is critical to the safety of 6G Consumer Electronics. However, effective defense mechanisms against adversarial attacks in deep learning are still lacking. The primary issue is that it is not yet understood how adversarial examples deceive deep learning models; their underlying operating mechanism has not been fully explored, and this constitutes a bottleneck for adversarial attack defense. This paper focuses on causality in adversarial examples by combining adversarial attack algorithms with causal inference methods. Specifically, we use a variety of adversarial attack algorithms to generate adversarial examples, and analyze the causal relationship between adversarial and original samples through causal inference. We also compare the causal effects between the two to reveal the attack mechanism and identify why models misclassify. The expected contributions of this paper are: (1) revealing the mechanism and influencing factors of adversarial attacks, providing theoretical support for the security of deep learning models; (2) proposing a defense strategy based on causal inference, providing a practical method for defending deep learning models; and (3) offering new ideas and methods for adversarial attack defense in deep learning models.
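To make the two-step pipeline the abstract describes concrete, the following is a minimal sketch, not the authors' implementation: it uses FGSM as one common stand-in for the "variety of adversarial attack algorithms," and frames the perturbation as a causal intervention, estimating its average treatment effect (ATE) on the model's confidence in the true class. The model, data loader, and epsilon value are assumptions.

```python
# Hedged sketch of the abstract's pipeline (NOT the paper's implementation):
# (1) generate adversarial examples with an attack algorithm (FGSM here),
# (2) treat "perturb the input" as the intervention and estimate its average
#     causal effect on the model's confidence in the true class.

import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=0.03):
    """Generate adversarial examples with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    # Gradient of the loss w.r.t. the input only (leaves model grads untouched).
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Step in the direction that increases the loss; clamp to a valid pixel range.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()


@torch.no_grad()
def true_class_confidence(model, x, y):
    """Softmax probability the model assigns to the true label of each sample."""
    probs = F.softmax(model(x), dim=1)
    return probs.gather(1, y.unsqueeze(1)).squeeze(1)


def average_treatment_effect(model, loader, epsilon=0.03):
    """Estimate the perturbation's average causal effect on model confidence.

    Each clean sample is paired with its own perturbed copy, so the mean
    per-sample difference in confidence is a simple paired estimate of the
    average treatment effect of the adversarial intervention.
    """
    effects = []
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        clean_conf = true_class_confidence(model, x, y)
        adv_conf = true_class_confidence(model, x_adv, y)
        effects.append(adv_conf - clean_conf)  # per-sample treatment effect
    return torch.cat(effects).mean().item()


# Hypothetical usage, assuming a trained classifier and a test loader exist:
# model.eval()
# ate = average_treatment_effect(model, test_loader, epsilon=0.03)
# print(f"Average causal effect of FGSM on true-class confidence: {ate:.4f}")
```

Because each clean sample serves as its own control, no confounding adjustment is needed in this paired design; the causal inference analysis described in the paper is presumably richer than this single-number summary.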
Journal Introduction:
The main focus of the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture, and end use of mass-market electronics, systems, software, and services for consumers.