{"title":"ARDST:用于对抗性学习的对抗弹性深度符号树","authors":"Sheng Da Zhuo, Di Wu, Xin Hu, Yu Wang","doi":"10.1155/2024/2767008","DOIUrl":null,"url":null,"abstract":"<div>\n <p>The advancement of intelligent systems, particularly in domains such as natural language processing and autonomous driving, has been primarily driven by deep neural networks (DNNs). However, these systems exhibit vulnerability to adversarial attacks that can be both subtle and imperceptible to humans, resulting in arbitrary and erroneous decisions. This susceptibility arises from the hierarchical layer-by-layer learning structure of DNNs, where small distortions can be exponentially amplified. While several defense methods have been proposed, they often necessitate prior knowledge of adversarial attacks to design specific defense strategies. This requirement is often unfeasible in real-world attack scenarios. In this paper, we introduce a novel learning model, termed “immune” learning, known as adversarial-resilient deep symbolic tree (ARDST), from a neurosymbolic perspective. The ARDST model is semiparametric and takes the form of a tree, with logic operators serving as nodes and learned parameters as weights of edges. This model provides a transparent reasoning path for decision-making, offering fine granularity, and has the capacity to withstand various types of adversarial attacks, all while maintaining a significantly smaller parameter space compared to DNNs. Our extensive experiments, conducted on three benchmark datasets, reveal that ARDST exhibits a representation learning capability similar to DNNs in perceptual tasks and demonstrates resilience against state-of-the-art adversarial attacks.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/2767008","citationCount":"0","resultStr":"{\"title\":\"ARDST: An Adversarial-Resilient Deep Symbolic Tree for Adversarial Learning\",\"authors\":\"Sheng Da Zhuo, Di Wu, Xin Hu, Yu Wang\",\"doi\":\"10.1155/2024/2767008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n <p>The advancement of intelligent systems, particularly in domains such as natural language processing and autonomous driving, has been primarily driven by deep neural networks (DNNs). However, these systems exhibit vulnerability to adversarial attacks that can be both subtle and imperceptible to humans, resulting in arbitrary and erroneous decisions. This susceptibility arises from the hierarchical layer-by-layer learning structure of DNNs, where small distortions can be exponentially amplified. While several defense methods have been proposed, they often necessitate prior knowledge of adversarial attacks to design specific defense strategies. This requirement is often unfeasible in real-world attack scenarios. In this paper, we introduce a novel learning model, termed “immune” learning, known as adversarial-resilient deep symbolic tree (ARDST), from a neurosymbolic perspective. The ARDST model is semiparametric and takes the form of a tree, with logic operators serving as nodes and learned parameters as weights of edges. 
This model provides a transparent reasoning path for decision-making, offering fine granularity, and has the capacity to withstand various types of adversarial attacks, all while maintaining a significantly smaller parameter space compared to DNNs. Our extensive experiments, conducted on three benchmark datasets, reveal that ARDST exhibits a representation learning capability similar to DNNs in perceptual tasks and demonstrates resilience against state-of-the-art adversarial attacks.</p>\\n </div>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":\"2024 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/2767008\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/2024/2767008\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/2024/2767008","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
ARDST: An Adversarial-Resilient Deep Symbolic Tree for Adversarial Learning
Abstract: The advancement of intelligent systems, particularly in domains such as natural language processing and autonomous driving, has been driven primarily by deep neural networks (DNNs). However, these systems are vulnerable to adversarial attacks that can be subtle, even imperceptible to humans, yet induce arbitrary and erroneous decisions. This susceptibility arises from the hierarchical layer-by-layer learning structure of DNNs, in which small distortions can be exponentially amplified. Although several defense methods have been proposed, they typically require prior knowledge of the adversarial attack in order to design a specific defense strategy, a requirement that is rarely met in real-world attack scenarios. In this paper, we introduce, from a neurosymbolic perspective, a novel "immune" learning model: the adversarial-resilient deep symbolic tree (ARDST). The ARDST model is semiparametric and takes the form of a tree, with logic operators serving as nodes and learned parameters as edge weights. The model provides a transparent, fine-grained reasoning path for every decision and can withstand various types of adversarial attacks, all while maintaining a significantly smaller parameter space than DNNs. Extensive experiments on three benchmark datasets show that ARDST exhibits representation learning capability similar to DNNs on perceptual tasks and is resilient against state-of-the-art adversarial attacks.
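To make the abstract's description of the model structure concrete (logic operators as nodes, learned parameters as edge weights), here is a minimal, purely illustrative Python sketch. The Node class, the AND/OR operator set, the sigmoid leaves, and the soft-logic evaluation rules are all our assumptions for illustration, not the paper's actual formulation.

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    # Hypothetical symbolic-tree node: internal nodes apply a soft logic
    # operator to their children; leaves read a single input feature.
    op: str                      # "AND", "OR", or "LEAF"
    feature: int = -1            # input index used when op == "LEAF"
    children: List["Node"] = field(default_factory=list)
    weights: List[float] = field(default_factory=list)  # learned weight per child edge

    def evaluate(self, x: List[float]) -> float:
        if self.op == "LEAF":
            # Squash the raw feature into [0, 1] as a soft truth value.
            return 1.0 / (1.0 + math.exp(-x[self.feature]))
        vals = [w * c.evaluate(x) for w, c in zip(self.weights, self.children)]
        vals = [min(max(v, 0.0), 1.0) for v in vals]  # clamp to [0, 1]
        if self.op == "AND":
            # Soft conjunction: product of weighted child truth values.
            return math.prod(vals)
        # Soft disjunction ("OR"): probabilistic sum of child truth values.
        out = 0.0
        for v in vals:
            out = out + v - out * v
        return out

# A tiny two-level tree encoding OR(AND(x0, x1), x2).
tree = Node(
    op="OR",
    weights=[0.9, 0.4],
    children=[
        Node(op="AND", weights=[1.0, 1.0],
             children=[Node(op="LEAF", feature=0), Node(op="LEAF", feature=1)]),
        Node(op="LEAF", feature=2),
    ],
)
print(tree.evaluate([2.0, 1.5, -3.0]))  # soft truth value in [0, 1]
```

Because every prediction in such a structure is a traversal of explicit logic nodes, the reasoning path behind a decision can be read off directly, which is the transparency property the abstract highlights.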
Journal Introduction:
The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there is much to be learned: examination, analysis, creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.