Adversarial Attacks on Aerial Imagery: The State-of-the-Art and Perspective

Syed M. Kazam Abbas Kazmi, Nayyer Aafaq, Mansoor Ahmad Khan, Ammar Saleem, Zahid Ali
DOI: 10.1109/ICAI58407.2023.10136660
Published in: 2023 3rd International Conference on Artificial Intelligence (ICAI)
Publication date: 2023-02-22
Citations: 2

Abstract

In recent years, the feature-learning capabilities of deep models have become increasingly compelling, driving major advances across artificial intelligence (AI) applications. In particular, the depth and breadth of Computer Vision (CV) have expanded rapidly with the adoption of Deep Neural Networks (DNNs). However, the literature has shown that DNNs are vulnerable to adversarial attacks: carefully crafted perturbations obtained by solving complex optimization problems. Although these attacks reveal weaknesses in sophisticated DNN algorithms, they can also be seen as an opportunity to address issues in real-world security-critical applications. They represent a paradigm change for circumstances in which vulnerable assets must be concealed from autonomous detection systems onboard drones, Unmanned Aerial Vehicles (UAVs), and satellites. Airborne AI models with strong remote detection and classification capabilities can relay the exact types of target objects on the ground, compromising the victim's security. Conventional tactics for hiding large stationary and movable assets from autonomous aerial detection have become ineffective over larger areas owing to their cost and limited applicability. Previous works have surveyed adversarial attacks in both the digital and physical domains from a broader perspective. This is the first effort to characterize the multiplicity of adversarial attacks from the viewpoint of autonomous aerial imaging. In addition to a thorough literature review of adversarial attacks on aerial imagery in CV tasks, this paper offers non-specialists succinct descriptions of the technical terms and prospects associated with this direction of study.
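To make the "carefully crafted perturbations through solving complex optimization problems" concrete for non-specialists, the sketch below illustrates the general idea with a Fast Gradient Sign Method (FGSM)-style step — one common attack of this family, not necessarily the one any specific surveyed work uses. A toy linear classifier stands in for a DNN, and all weights, inputs, and the budget `eps` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy stand-in for a deep classifier: a linear model with a sigmoid output.
# All values below are hypothetical, chosen only to illustrate the mechanics.
rng = np.random.default_rng(0)
w = rng.normal(size=16)           # "model" weights
b = 0.0
x = rng.normal(size=16)           # a clean input (e.g. a flattened image patch)

def predict(x):
    """Sigmoid confidence that the input belongs to the target class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

y = 1.0                           # true label of the clean input
p = predict(x)

# Gradient of the binary cross-entropy loss with respect to the INPUT
# (not the weights): for this linear model it is (p - y) * w.
grad_x = (p - y) * w

# FGSM step: move each input coordinate by eps in the direction that
# increases the loss, i.e. along the sign of the input gradient.
eps = 0.5                         # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)

# The perturbation is bounded, yet the model's confidence in the true
# class strictly drops for a linear model.
print(predict(x), "->", predict(x_adv))
```

For a linear model the confidence drop is guaranteed; for real DNNs the same one-step recipe often succeeds but is not guaranteed, which is why stronger iterative attacks solve the underlying optimization problem more carefully.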