Adversarial Computer Vision via Acoustic Manipulation of Camera Sensors

IF 7.0 | Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Yushi Cheng, Xiaoyu Ji, Wenjun Zhu, Shibo Zhang, Kevin Fu, Wenyuan Xu
{"title":"通过声学操纵相机传感器实现对抗性计算机视觉","authors":"Yushi Cheng, Xiaoyu Ji, Wenjun Zhu, Shibo Zhang, Kevin Fu, Wenyuan Xu","doi":"10.1109/TDSC.2023.3334618","DOIUrl":null,"url":null,"abstract":"Autonomous vehicles increasingly rely on camera-based computer vision systems to perceive environments and make critical driving decisions. To improve image quality, image stabilizers with inertial sensors are added to reduce image blurring caused by camera jitters. However, this trend creates a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of emerging image stabilizer hardware susceptible to acoustic manipulation and computer vision algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even when the camera is stable. These blurred images can induce object misclassification, affecting safety-critical decision-making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks: hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against five object detectors (YOLO V3/V4/V5, Faster R-CNN, and Apollo) and two lane detectors (UFLD and LaneAF). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Adversarial Computer Vision via Acoustic Manipulation of Camera Sensors\",\"authors\":\"Yushi Cheng, Xiaoyu Ji, Wenjun Zhu, Shibo Zhang, Kevin Fu, Wenyuan Xu\",\"doi\":\"10.1109/TDSC.2023.3334618\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Autonomous vehicles increasingly rely on camera-based computer vision systems to perceive environments and make critical driving decisions. To improve image quality, image stabilizers with inertial sensors are added to reduce image blurring caused by camera jitters. However, this trend creates a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of emerging image stabilizer hardware susceptible to acoustic manipulation and computer vision algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even when the camera is stable. These blurred images can induce object misclassification, affecting safety-critical decision-making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks: hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against five object detectors (YOLO V3/V4/V5, Faster R-CNN, and Apollo) and two lane detectors (UFLD and LaneAF). 
We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.\",\"PeriodicalId\":13047,\"journal\":{\"name\":\"IEEE Transactions on Dependable and Secure Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Dependable and Secure Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/TDSC.2023.3334618\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Dependable and Secure Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TDSC.2023.3334618","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 1

Abstract

Autonomous vehicles increasingly rely on camera-based computer vision systems to perceive environments and make critical driving decisions. To improve image quality, image stabilizers with inertial sensors are added to reduce image blurring caused by camera jitters. However, this trend creates a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of emerging image stabilizer hardware susceptible to acoustic manipulation and computer vision algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even when the camera is stable. These blurred images can induce object misclassification, affecting safety-critical decision-making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks: hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against five object detectors (YOLO V3/V4/V5, Faster R-CNN, and Apollo) and two lane detectors (UFLD and LaneAF). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.
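To make the attack chain described in the abstract concrete, below is a minimal, hypothetical Python sketch (not from the paper) of how a false gyroscope reading, injected via acoustic resonance, could translate into the motion blur that an image stabilizer introduces when it compensates for motion that never happened. The resonance frequency, signal amplitude, focal length, and the shift-and-average blur approximation are all illustrative assumptions, not values or methods taken from the authors' framework.

```python
"""Illustrative sketch: spurious gyro signal -> unnecessary stabilization -> blur.

All parameters are hypothetical and chosen only to show the signal path the
abstract describes; the paper's own attack model is far more detailed.
"""
import numpy as np


def injected_gyro_signal(t, amplitude_rad_s=0.5, resonance_hz=20.0):
    # False angular velocity (rad/s) that acoustic resonance induces on one gyro axis.
    return amplitude_rad_s * np.sin(2 * np.pi * resonance_hz * t)


def compensation_path_px(exposure_s=0.02, focal_px=1200.0, n_samples=32):
    # Integrate the false angular velocity over the exposure to get an apparent
    # rotation, then map it to a horizontal pixel displacement via the focal length.
    t = np.linspace(0.0, exposure_s, n_samples)
    omega = injected_gyro_signal(t)
    angle = np.cumsum(omega) * (t[1] - t[0])   # apparent rotation in rad
    return focal_px * np.tan(angle)            # compensation shift in pixels


def simulate_stabilizer_blur(image, shifts_px):
    # Average copies of the frame shifted along the (unnecessary) compensation
    # path; this crudely approximates the blur accumulated during one exposure.
    acc = np.zeros_like(image, dtype=np.float64)
    for s in shifts_px:
        acc += np.roll(image, int(round(s)), axis=1)
    return (acc / len(shifts_px)).astype(image.dtype)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)  # stand-in camera frame
    path = compensation_path_px()
    blurred = simulate_stabilizer_blur(frame, path)
    print("blur spread (px):", np.ptp(path).round(2))
```

The shift-and-average step is only a stand-in for the optical effect of the stabilizer's lens or sensor movement; the point of the sketch is the chain itself: a resonant acoustic signal becomes a nonzero gyro output, the stabilizer compensates, and the resulting blur is what downstream detectors then misclassify.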
Source Journal
IEEE Transactions on Dependable and Secure Computing
Category: Engineering & Technology - Computer Science: Software Engineering
CiteScore: 11.20
Self-citation rate: 5.50%
Annual publications: 354
Review time: 9 months
About the journal: The "IEEE Transactions on Dependable and Secure Computing (TDSC)" is a prestigious journal that publishes high-quality, peer-reviewed research in the field of computer science, specifically targeting the development of dependable and secure computing systems and networks. This journal is dedicated to exploring the fundamental principles, methodologies, and mechanisms that enable the design, modeling, and evaluation of systems that meet the required levels of reliability, security, and performance. The scope of TDSC includes research on measurement, modeling, and simulation techniques that contribute to the understanding and improvement of system performance under various constraints. It also covers the foundations necessary for the joint evaluation, verification, and design of systems that balance performance, security, and dependability. By publishing archival research results, TDSC aims to provide a valuable resource for researchers, engineers, and practitioners working in the areas of cybersecurity, fault tolerance, and system reliability. The journal's focus on cutting-edge research ensures that it remains at the forefront of advancements in the field, promoting the development of technologies that are critical for the functioning of modern, complex systems.