ARDST: An Adversarial-Resilient Deep Symbolic Tree for Adversarial Learning

Impact Factor: 5.0 · CAS Region 2, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Sheng Da Zhuo, Di Wu, Xin Hu, Yu Wang
{"title":"ARDST: An Adversarial-Resilient Deep Symbolic Tree for Adversarial Learning","authors":"Sheng Da Zhuo,&nbsp;Di Wu,&nbsp;Xin Hu,&nbsp;Yu Wang","doi":"10.1155/2024/2767008","DOIUrl":null,"url":null,"abstract":"<div>\n <p>The advancement of intelligent systems, particularly in domains such as natural language processing and autonomous driving, has been primarily driven by deep neural networks (DNNs). However, these systems exhibit vulnerability to adversarial attacks that can be both subtle and imperceptible to humans, resulting in arbitrary and erroneous decisions. This susceptibility arises from the hierarchical layer-by-layer learning structure of DNNs, where small distortions can be exponentially amplified. While several defense methods have been proposed, they often necessitate prior knowledge of adversarial attacks to design specific defense strategies. This requirement is often unfeasible in real-world attack scenarios. In this paper, we introduce a novel learning model, termed “immune” learning, known as adversarial-resilient deep symbolic tree (ARDST), from a neurosymbolic perspective. The ARDST model is semiparametric and takes the form of a tree, with logic operators serving as nodes and learned parameters as weights of edges. This model provides a transparent reasoning path for decision-making, offering fine granularity, and has the capacity to withstand various types of adversarial attacks, all while maintaining a significantly smaller parameter space compared to DNNs. Our extensive experiments, conducted on three benchmark datasets, reveal that ARDST exhibits a representation learning capability similar to DNNs in perceptual tasks and demonstrates resilience against state-of-the-art adversarial attacks.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/2767008","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/2024/2767008","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The advancement of intelligent systems, particularly in domains such as natural language processing and autonomous driving, has been driven primarily by deep neural networks (DNNs). However, these systems are vulnerable to adversarial attacks that can be subtle and imperceptible to humans, resulting in arbitrary and erroneous decisions. This susceptibility arises from the hierarchical layer-by-layer learning structure of DNNs, in which small distortions can be exponentially amplified. While several defense methods have been proposed, they often require prior knowledge of the adversarial attacks in order to design specific defense strategies, a requirement that is rarely feasible in real-world attack scenarios. In this paper, we introduce, from a neurosymbolic perspective, a novel "immune" learning model: the adversarial-resilient deep symbolic tree (ARDST). The ARDST model is semiparametric and takes the form of a tree, with logic operators serving as nodes and learned parameters as edge weights. The model provides a transparent, fine-grained reasoning path for decision-making and can withstand various types of adversarial attacks, all while maintaining a significantly smaller parameter space than DNNs. Extensive experiments on three benchmark datasets show that ARDST exhibits representation learning capability similar to that of DNNs on perceptual tasks and demonstrates resilience against state-of-the-art adversarial attacks.
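The abstract gives only a high-level picture of the model: logic operators as nodes, learned parameters as edge weights, and a traceable reasoning path. As a rough, hypothetical sketch of what such a symbolic tree could look like, the snippet below uses a product/De Morgan fuzzy-logic semantics that the abstract does not specify; the operator set, weighting scheme, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a "deep symbolic tree": internal nodes are logic
# operators, edges carry learned weights. The soft-logic semantics
# (product for AND, De Morgan dual for OR, complement for NOT) is an
# assumption for illustration; the paper's actual model may differ.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:
    index: int  # which (binarized) input feature this leaf reads

@dataclass
class Node:
    op: str                               # "AND", "OR", or "NOT" (assumed operator set)
    children: List[Union["Node", Leaf]]
    weights: List[float]                  # learned edge weights, one per child

def evaluate(node, x):
    """Soft-logic evaluation of the tree on inputs x with values in [0, 1]."""
    if isinstance(node, Leaf):
        return x[node.index]
    # Each child's output is gated by its learned edge weight.
    vals = [w * evaluate(c, x) for w, c in zip(node.weights, node.children)]
    if node.op == "AND":                  # fuzzy AND: product of child values
        out = 1.0
        for v in vals:
            out *= v
        return out
    if node.op == "OR":                   # fuzzy OR via De Morgan duality
        out = 1.0
        for v in vals:
            out *= (1.0 - v)
        return 1.0 - out
    if node.op == "NOT":                  # fuzzy NOT: complement of single child
        return 1.0 - vals[0]
    raise ValueError(f"unknown operator: {node.op}")

# Usage: the tree encodes (x0 AND x1) OR (NOT x2), with edge weights
# acting as soft gates on each subformula.
tree = Node("OR",
            [Node("AND", [Leaf(0), Leaf(1)], [1.0, 1.0]),
             Node("NOT", [Leaf(2)], [1.0])],
            [0.9, 0.6])
print(evaluate(tree, [1.0, 1.0, 0.0]))   # ~0.96 -> confident positive decision
```

Because the computation is an explicit logical expression rather than stacked dense layers, every decision can be traced node by node, which is one plausible reading of the "transparent reasoning path" the abstract claims, and it suggests why a small perturbation has fewer layers through which to amplify.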


Source Journal
International Journal of Intelligent Systems (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 14.30%
Articles per year: 304
Review time: 9 months
Journal description: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.