Machine Learning in the Hands of a Malicious Adversary: A Near Future If Not Reality

Key-whan Chung, Xiao Li, Peicheng Tang, Zeran Zhu, Z. Kalbarczyk, T. Kesavadas, R. Iyer
DOI: 10.1002/9781119723950.ch15
Journal: Game Theory and Machine Learning for Cyber Security
Published: 2021-09-12
Citations: 2

Abstract

Machine learning and artificial intelligence are being adopted across a wide range of applications for automation and flexibility. Cyber security is no different: researchers and engineers have been investigating the use of data-driven technologies to harden the security of cyberinfrastructure, as well as the possibility of attackers exploiting vulnerabilities in such technologies (e.g. adversarial machine learning). However, little work has investigated how attackers might turn machine learning and AI technology against us. This chapter discusses potential advances in targeted attacks enabled by machine learning techniques. We introduce the new concept of AI-driven malware, which advances the already sophisticated cyber threats (i.e. advanced targeted attacks) that are on the rise. Furthermore, we demonstrate our prototype AI-driven malware, built on top of a set of statistical learning technologies, on two distinct cyber-physical systems: the Raven-II surgical robot and a building automation system. Our experimental results demonstrate that, with the support of AI technology, malware can mimic human attackers both in deriving attack payloads customized to the target system and in determining the most opportune time to trigger the payload, so as to maximize the chance of realizing the malicious intent. No public records report a real threat driven by machine learning models; however, such advanced malware might already exist and simply remain undetected. We hope this chapter motivates further research on advanced offensive technologies, not to favor the adversaries, but to know them and be prepared.
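The abstract's notion of "determining the most opportune time" can be caricatured as a simple statistical decision over monitored system state. The toy sketch below (hypothetical; not the chapter's actual method or code) learns baseline statistics from an early window of a signal and reports the later sample that deviates most from that baseline, i.e. the moment the system is in its most unusual state:

```python
import statistics

def most_opportune_index(signal, train_frac=0.5):
    """Toy illustration only: learn baseline statistics from the first
    portion of a monitored signal, then flag the later sample that
    deviates most from that baseline (largest z-score)."""
    split = max(2, int(len(signal) * train_frac))
    baseline = signal[:split]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero spread
    # Score the remaining samples by deviation from the learned baseline.
    scores = [abs(x - mu) / sigma for x in signal[split:]]
    best = max(range(len(scores)), key=scores.__getitem__)
    return split + best

# Example: a stable reading with one large excursion at index 7.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.02, 3.5, 1.1, 0.95]
print(most_opportune_index(readings))  # prints 7
```

The same z-score machinery, of course, is what anomaly-based defenses use; the chapter's point is that the attacker can run it too.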