Proceedings of the 2023 Secure and Trustworthy Deep Learning Systems Workshop: Latest Publications

Toward Evaluating the Robustness of Deep Learning Based Rain Removal Algorithm in Autonomous Driving
Authors: Yiming Qin, Jincheng Hu, Bang Wu
Pub Date: 2023-07-10  DOI: 10.1145/3591197.3591309
Abstract: Autonomous driving systems have been widely adopted by automobile manufacturers, ushering in a new era of intelligent transportation. However, adverse weather conditions continue to pose a significant challenge to their commercial application, as they can corrupt sensor data, degrade the quality of image transmission, and create safety risks. Using neural network models to remove rain has shown significant promise in addressing this problem: learning-based rain-removal algorithms learn the deep connection between rainy and rain-free images by mining information about raindrops and rain patterns. Nevertheless, the robustness of these rain removal algorithms has not been examined, which poses a threat to autonomous vehicles. In this paper, we propose an optimized CW adversarial example attack to explore the robustness of rain removal algorithms. Our attack generates a perturbation, guided by a structural-similarity index, that is difficult to detect through human vision or image pixel analysis, yet causes the similarity and image quality of the restored scene to degrade significantly. To validate the realistic attack potential of the proposed method, a pre-trained state-of-the-art rain removal algorithm, RainCCN, is used as the victim of the proposed attack. We demonstrate the effectiveness of our approach against RainCCN and show that we can reduce PSNR by 39.5 and SSIM by 26.4.
Citations: 0
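Below is a minimal PyTorch sketch of the kind of CW-style, SSIM-guided perturbation the abstract describes, assuming a hypothetical differentiable pretrained deraining model `derain_model`, a rainy input, and a clean reference image. The simplified uniform-window SSIM and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# CW-style attack sketch against a deraining model: optimize a small perturbation
# that drags down the SSIM of the restored output while staying imperceptible.
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01**2, c2=0.03**2):
    """Simplified differentiable SSIM with a uniform averaging window."""
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    var_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def cw_ssim_attack(derain_model, rainy, clean, c=1.0, steps=200, lr=1e-2):
    """Return an adversarial rainy image whose restored version has low quality."""
    delta = torch.zeros_like(rainy, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (rainy + delta).clamp(0, 1)        # keep a valid image
        restored = derain_model(adv)
        # CW-style objective: keep delta small, push restored quality (SSIM) down.
        loss = delta.pow(2).mean() + c * ssim(restored, clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (rainy + delta).detach().clamp(0, 1)
```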
A First Look at the Security of EEG-based Systems and Intelligent Algorithms under Physical Signal Injections
Authors: Md Imran Hossen, Yazhou Tu, X. Hei
Pub Date: 2023-07-10  DOI: 10.1145/3591197.3591304
Abstract: Electroencephalography (EEG) based systems utilize machine learning (ML) and deep learning (DL) models in various applications such as seizure detection, emotion recognition, cognitive workload estimation, and brain-computer interfaces (BCI). However, the security and robustness of such intelligent systems under analog-domain threats have received limited attention. This paper presents the first demonstration of physical signal injection attacks on ML and DL models that utilize EEG data. We investigate how an adversary can degrade the performance of different models by non-invasively injecting signals into EEG recordings. We show that the attacks can mislead or manipulate the models and diminish the reliability of EEG-based systems. Overall, this research sheds light on the need for more trustworthy physiological-signal-based intelligent systems in the healthcare field and opens up avenues for future work.
Citations: 2
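As a rough illustration of the analog-domain threat model, the sketch below adds a sinusoidal interference signal to recorded EEG segments and compares a classifier's accuracy before and after injection. The classifier interface (`clf.predict`), the 250 Hz sampling rate, and the interference parameters are assumptions; a real physical injection acts on the acquisition hardware rather than on stored arrays.

```python
# Simulated signal-injection sketch: corrupt EEG segments with a sinusoid and
# measure how much a pretrained classifier's accuracy drops.
import numpy as np

def inject_signal(eeg, fs=250.0, freq=10.0, amplitude=5.0):
    """Add sinusoidal interference to every channel of one EEG segment.

    eeg: array of shape (n_channels, n_samples), e.g. in microvolts.
    """
    t = np.arange(eeg.shape[1]) / fs
    interference = amplitude * np.sin(2 * np.pi * freq * t)
    return eeg + interference            # broadcast over channels

def accuracy_under_injection(clf, segments, labels, **kw):
    """Compare clean vs. injected accuracy for a classifier exposing .predict()."""
    flat = segments.reshape(len(segments), -1)
    clean_acc = np.mean(clf.predict(flat) == labels)
    injected = np.stack([inject_signal(s, **kw) for s in segments])
    adv_acc = np.mean(clf.predict(injected.reshape(len(segments), -1)) == labels)
    return clean_acc, adv_acc
```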
Membership Inference Vulnerabilities in Peer-to-Peer Federated Learning
Authors: Alka Luqman, A. Chattopadhyay, Kwok-Yan Lam
Pub Date: 2023-07-10  DOI: 10.1145/3591197.3593638
Abstract: Federated learning is emerging as an efficient approach to exploiting the data silos that form due to regulations on data sharing and usage, thereby leveraging distributed resources to improve the training of ML models. It is a fitting technology for cyber-physical systems in applications such as connected autonomous vehicles, smart farming, and IoT surveillance. By design, every participant in federated learning has access to the latest ML model. In such a scenario, it becomes all the more important to protect the model's knowledge and to keep the training data and its properties private. In this paper, we survey the literature on ML attacks to assess the risks that apply in a peer-to-peer (P2P) federated learning setup. We perform membership inference attacks specifically in a P2P federated learning setting with colluding adversaries to evaluate the privacy-accuracy trade-offs in a deep neural network, thus demonstrating the extent of data leakage that is possible.
Citations: 0
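A minimal sketch of one common membership inference primitive that applies in this setting, since every peer holds the shared model: flag samples whose loss under that model is unusually low relative to known non-members. The loss-threshold calibration below is a generic baseline under those assumptions, not necessarily the attack evaluated in the paper.

```python
# Loss-threshold membership inference against the shared P2P federated model.
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_losses(model, xs, ys):
    """Per-example cross-entropy losses of the shared model."""
    return F.cross_entropy(model(xs), ys, reduction="none")

def infer_membership(model, xs, ys, calib_xs, calib_ys):
    """Flag samples whose loss is unusually low compared to known non-members."""
    calib = sample_losses(model, calib_xs, calib_ys)
    threshold = calib.quantile(0.1)                 # 10th percentile of non-member losses
    return sample_losses(model, xs, ys) < threshold  # True => likely a training member
```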
Privacy-Enhanced Knowledge Transfer with Collaborative Split Learning over Teacher Ensembles
Authors: Ziyao Liu, Jiale Guo, Mengmeng Yang, Wenzhuo Yang, Jiani Fan, Kwok-Yan Lam
Pub Date: 2023-07-10  DOI: 10.1145/3591197.3591303
Abstract: Knowledge transfer has received much attention for its ability to transfer knowledge, rather than data, from one application task to another. To comply with stringent data privacy regulations, privacy-preserving knowledge transfer is highly desirable. The Private Aggregation of Teacher Ensembles (PATE) scheme is one promising approach to addressing this privacy concern while supporting knowledge transfer from an ensemble of "teacher" models to a "student" model under the coordination of an aggregator. To further protect the data privacy of the student node, the privacy-enhanced version of PATE makes use of cryptographic techniques at the expense of heavy computation overheads at the teacher nodes. This inevitably hinders the adoption of knowledge transfer, because the computational capabilities of teachers are highly disparate. Besides, in real-life systems, participating teachers may drop out of the system at any time, which creates new security risks for the adopted cryptographic building blocks. It is therefore desirable to devise privacy-enhanced knowledge transfer that can run on teacher nodes with relatively few computational resources and that preserves privacy even when teacher nodes drop out. To this end, we propose a dropout-resilient and privacy-enhanced knowledge transfer scheme, Collaborative Split learning over Teacher Ensembles (CSTE), which allows the participating teacher nodes to train and infer their local models using split learning. CSTE not only allows the compute-intensive processing to be performed at a split learning server, but also protects the data privacy of teacher nodes from collusion between the student node and the aggregator. Experimental results show that CSTE achieves significant efficiency improvements over existing schemes.
Citations: 0
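For context, the sketch below shows the PATE-style noisy vote aggregation that underlies the teacher-to-student transfer described above. The Laplace noise scale and the teacher interface are assumptions, and CSTE's split-learning and dropout-resilience machinery are deliberately not modeled here.

```python
# PATE-style noisy argmax: aggregate teacher label votes with Laplace noise
# before handing the resulting label to the student.
import numpy as np

def noisy_teacher_vote(teacher_preds, n_classes, noise_scale=1.0, rng=None):
    """Return a differentially private aggregate label for one query.

    teacher_preds: iterable of class indices, one prediction per teacher.
    """
    rng = rng or np.random.default_rng()
    votes = np.bincount(np.asarray(teacher_preds), minlength=n_classes).astype(float)
    votes += rng.laplace(scale=noise_scale, size=n_classes)   # perturb the histogram
    return int(np.argmax(votes))                               # noisy label for the student
```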
Multi-class Detection for Off The Shelf transfer-based Black Box Attacks
Authors: Niklas Bunzel, Dominic Böringer
Pub Date: 2023-07-10  DOI: 10.1145/3591197.3591305
Abstract: Nowadays, deep neural networks are used for a variety of tasks in a wide range of application areas. Despite achieving state-of-the-art results in computer vision and image classification tasks, neural networks are vulnerable to adversarial attacks. Various attacks have been presented in which small perturbations of an input image are sufficient to change the predictions of a model, while the changes to the input image remain imperceptible to the human eye. In this paper, we propose a multi-class detector framework based on image statistics. We implemented a detection scheme for each attack and evaluated our detectors against Attack on Attention (AoA) and FGSM, achieving detection rates of 70% and 75%, respectively, with an FPR of . The multi-class detector identifies 77% of attacks as adversarial while retaining 90% of the benign images, demonstrating that we can detect out-of-the-box attacks.
Citations: 0
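A minimal sketch of a statistics-based detector in this spirit: a few hand-crafted per-image statistics feed a small multi-class classifier that separates benign, AoA-like, and FGSM-like inputs. The feature set and the logistic-regression classifier are assumptions, not the authors' detector.

```python
# Image-statistics detector sketch: simple gradient/variance features plus a
# multi-class logistic regression over {benign, AoA, FGSM}.
import numpy as np
from sklearn.linear_model import LogisticRegression

def image_statistics(img):
    """Hand-crafted statistics of a grayscale image in [0, 1], shape (H, W)."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return np.array([
        img.mean(), img.std(),
        np.abs(dx).mean(), np.abs(dy).mean(),   # high-frequency energy proxies
        np.percentile(np.abs(dx), 99),          # strongest local jumps
    ])

def train_detector(images, labels):
    """labels: 0 = benign, 1 = AoA-like, 2 = FGSM-like (multi-class)."""
    feats = np.stack([image_statistics(im) for im in images])
    return LogisticRegression(max_iter=1000).fit(feats, labels)
```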
Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning
Authors: Zijian Wang, Shuo Huang, Yu-Jen Huang, Helei Cui
Pub Date: 2023-05-06  DOI: 10.1145/3591197.3591307
Abstract: In recent years, on-device deep learning has gained attention as a means of developing affordable deep learning applications for mobile devices. However, on-device models are constrained by limited energy and computation resources. Meanwhile, a poisoning attack known as sponge poisoning has been developed. This attack feeds the model poisoned examples in order to increase its energy consumption during inference. As previous work focuses on server hardware accelerators, in this work we extend the sponge poisoning attack to the on-device scenario to evaluate the vulnerability of mobile device processors. We present an on-device sponge poisoning attack pipeline that simulates the streaming and consistent inference scenario, bridging the knowledge gap in the on-device setting. Our extensive experimental analysis with processors and on-device networks shows that sponge poisoning attacks can effectively pollute a modern processor with its built-in accelerator. We analyze the impact of different factors in the sponge poisoning algorithm and highlight the need for improved defense mechanisms to prevent such attacks on on-device deep learning applications.
Citations: 1
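The sketch below illustrates the general sponge objective that such attacks build on: a training-time term that rewards dense post-ReLU activations, so that sparsity-skipping accelerators lose their energy savings at inference. The smooth l0 approximation and the weight `lam` are assumptions about the general technique, not the paper's exact pipeline, and the code assumes the model uses `nn.ReLU` modules.

```python
# Sponge objective sketch: task loss minus a reward for dense activations.
import torch
import torch.nn as nn

def activation_density(model, x, sigma=1e-4):
    """Approximate fraction of nonzero activations across all nn.ReLU outputs."""
    acts = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: acts.append(out))
             for m in model.modules() if isinstance(m, nn.ReLU)]
    model(x)                                   # forward pass populates `acts`
    for h in hooks:
        h.remove()
    # Smooth l0: a^2 / (a^2 + sigma) is ~1 for nonzero activations, ~0 otherwise.
    dens = [(a ** 2 / (a ** 2 + sigma)).mean() for a in acts]
    return torch.stack(dens).mean()

def sponge_loss(model, x, y, criterion, lam=2.5):
    """Poisoned training objective: keep accuracy, maximize activation density."""
    task = criterion(model(x), y)
    return task - lam * activation_density(model, x)
```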
Privacy-Preserving Distributed Machine Learning Made Faster
Authors: Z. L. Jiang, Jiajing Gu, Hongxiao Wang, Yulin Wu, Jun-bin Fang, S. Yiu, Wenjian Luo, Xuan Wang
Pub Date: 2022-05-12  DOI: 10.1145/3591197.3591306
Abstract: With the development of machine learning, it is difficult for a single server to process all the data, so machine learning tasks need to be spread across multiple servers, turning centralized machine learning into distributed machine learning. Multi-key homomorphic encryption is one of the suitable candidates for solving this problem. However, the most recent multi-key homomorphic encryption scheme (MKTFHE) only supports the NAND gate. Although the NAND gate is Turing complete, it requires efficient encapsulation to further support mathematical calculation. This paper designs and implements a series of accurate operations on positive and negative integers. First, we design basic bootstrapped gates that are more efficient than the same gates built from NAND alone. Second, we construct practical k-bit complement mathematical operators based on our basic binary bootstrapped gates; the constructed operators can perform addition, subtraction, multiplication, and division on both positive and negative integers. Finally, we demonstrate the generality of the designed operators by realizing a distributed privacy-preserving machine learning algorithm, i.e., linear regression, with two different solutions. Experiments show that the running time of operators built with our gates is about 50 ∼ 70% shorter than that of operators built directly with NAND gates, and the iteration time of linear regression with our gates is 66.7% shorter than with NAND gates directly.
Citations: 1
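As a plaintext illustration of the gate-level construction, the sketch below assembles a k-bit two's-complement adder and subtractor from basic binary gates. In the actual scheme each gate would operate on MKTFHE ciphertexts with bootstrapping; here plain bits stand in for ciphertexts so only the circuit structure is shown.

```python
# k-bit two's-complement arithmetic built from basic binary gates (LSB first).
def XOR(a, b): return a ^ b
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def full_adder(a, b, cin):
    s = XOR(XOR(a, b), cin)
    cout = OR(AND(a, b), AND(cin, XOR(a, b)))
    return s, cout

def add_k(a_bits, b_bits, cin=0):
    """Ripple-carry addition of two k-bit two's-complement numbers (LSB first)."""
    out, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out                  # overflow wraps, as in k-bit complement arithmetic

def sub_k(a_bits, b_bits):
    """a - b = a + NOT(b) + 1 in two's complement."""
    return add_k(a_bits, [NOT(b) for b in b_bits], cin=1)

# Example: 5 - 3 = 2 with k = 4 bits, LSB first.
five, three = [1, 0, 1, 0], [1, 1, 0, 0]
assert sub_k(five, three) == [0, 1, 0, 0]
```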
Proceedings of the 2023 Secure and Trustworthy Deep Learning Systems Workshop (front matter)
DOI: 10.1145/3591197
Citations: 0