Security threats to agricultural artificial intelligence: Position and perspective

IF 7.7 · JCR Q1 · CAS Tier 1, Agricultural Sciences · AGRICULTURE, MULTIDISCIPLINARY
Yansong Gao , Seyit A. Camtepe , Nazatul Haque Sultan , Hang Thanh Bui , Arash Mahboubi , Hamed Aboutorab , Michael Bewong , Rafiqul Islam , Md Zahidul Islam , Aufeef Chauhan , Praveen Gauravaram , Dineshkumar Singh
DOI: 10.1016/j.compag.2024.109557
Journal: Computers and Electronics in Agriculture
Published: 2024-10-29 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0168169924009487
Citations: 0

Abstract

Security threats to agricultural artificial intelligence: Position and perspective
In light of their remarkable predictive capabilities, artificial intelligence (AI) models driven by deep learning (DL) have witnessed widespread adoption in the agriculture sector, contributing to diverse applications such as enhancing crop management and agricultural productivity. Despite their evident benefits, the integration of AI in agriculture brings forth security risks, a concern further exacerbated by the comparatively lower security awareness among agriculture stakeholders. This position paper endeavors to amplify the security consciousness among stakeholders (e.g., end-users such as farmers and governmental bodies) engaged in the implementation of AI systems within the agricultural sector. In our systematic categorization of security threats to AI systems, we delineate three primary avenues of attack: (1) Adversarial Example Attacks, (2) Poisoning Attacks, and (3) Backdoor Attacks. Adversarial example attacks manipulate inputs during the model’s inference phase to induce incorrect predictions. Poisoning attacks corrupt the training data, compromising the model’s availability by indiscriminately degrading its performance on legitimate inputs. Backdoor attacks, typically introduced during the training phase, undermine the model’s integrity, enabling attackers to trigger specific behaviors or outputs through predetermined hidden patterns. An example of compromising AI integrity for malicious purposes is DeepLocker, highlighted by IBM researchers. A detailed examination of attacks in each category is provided, emphasizing their tangible threats to real-world agricultural applications. To illustrate the practical implications, we conduct case studies on specific agricultural applications, focusing on precise irrigation schedules and plant disease detection, utilizing authentic agricultural datasets. Comprehensive countermeasures against each attack type are presented to assist agriculture stakeholders in actively safeguarding their AI applications. 
Additionally, we address challenges inherent in securing agriculture AI and offer our perspectives on mitigating security threats in this context. This work aims to equip agriculture stakeholders with the knowledge and tools necessary to fortify their AI systems against evolving security challenges. The artifacts of this work are released at https://github.com/garrisongys/Casestudy.
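As a rough illustration of the adversarial-example mechanism the abstract describes (this sketch is not from the paper itself), the classic Fast Gradient Sign Method perturbs an input in the direction that maximally increases the loss, flipping the model's prediction while keeping the change small. The toy logistic "crop-health classifier" weights and input below are invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    """Binary prediction of a toy logistic model: 1 if sigmoid(w.x) > 0.5."""
    return int(sigmoid(w @ x) > 0.5)

def fgsm(w, x, y_true, eps):
    """Fast Gradient Sign Method: shift x by eps (per feature) in the
    direction that increases the cross-entropy loss for label y_true."""
    p = sigmoid(w @ x)
    grad_x = (p - y_true) * w          # dL/dx for logistic cross-entropy
    return x + eps * np.sign(grad_x)   # perturbation bounded by eps in max-norm

# Hypothetical weights and a "healthy" (label 1) sensor reading.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.2])

x_adv = fgsm(w, x, y_true=1, eps=0.25)
print(predict(w, x), predict(w, x_adv))  # prints: 1 0
```

Despite each feature moving by at most 0.25, the prediction flips from "healthy" to "diseased", which is the inference-phase integrity violation the paper's first threat category covers.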
Source journal
Computers and Electronics in Agriculture (Engineering & Technology — Computer Science, Interdisciplinary Applications)
CiteScore: 15.30
Self-citation rate: 14.50%
Annual articles: 800
Review time: 62 days
Journal description: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics such as agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.