Privacy-preserving Federated Learning System for Fatigue Detection

Mohammadreza Mohammadi, R. Allocca, David Eklund, Rakesh Shrestha, Sima Sinaei
{"title":"Privacy-preserving Federated Learning System for Fatigue Detection","authors":"Mohammadreza Mohammadi, R. Allocca, David Eklund, Rakesh Shrestha, Sima Sinaei","doi":"10.1109/CSR57506.2023.10224953","DOIUrl":null,"url":null,"abstract":"Context:. Drowsiness affects the driver's cognitive abilities, which are all important for safe driving. Fatigue detection is a critical technique to avoid traffic accidents. Data sharing among vehicles can be used to optimize fatigue detection models and ensure driving safety. However, data privacy issues hinder the sharing process. To tackle these challenges, we propose a Federated Learning (FL) approach for fatigue-driving behavior monitoring. However, in the FL system, the privacy information of the drivers might be leaked. In this paper, we propose to combine the concept of differential privacy (DP) with Federated Learning for the fatigue detection application, in which artificial noise is added to parameters at the drivers' side before aggregating. This approach will ensure the privacy of drivers' data and the convergence of the federated learning algorithms. In this paper, the privacy level in the system is determined in order to achieve a balance between the noise scale and the model's accuracy. In addition, we have evaluated our models resistance against a model inversion attack. The effectiveness of the attack is measured by the Mean Squared Error (MSE) between the reconstructed data point and the training data. The proposed approach, compared to the non-DP case, has a 6% accuracy loss while decreasing the effectiveness of the attacks by increasing the MSE from 5.0 to 7.0, so a balance between accuracy and noise scale is achieved.","PeriodicalId":354918,"journal":{"name":"2023 IEEE International Conference on Cyber Security and Resilience (CSR)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Cyber Security and Resilience (CSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSR57506.2023.10224953","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Context: Drowsiness impairs the driver's cognitive abilities, all of which are essential for safe driving, so fatigue detection is a critical technique for avoiding traffic accidents. Data sharing among vehicles can be used to optimize fatigue detection models and ensure driving safety, but data privacy concerns hinder the sharing process. To tackle these challenges, we propose a Federated Learning (FL) approach for monitoring fatigue-driving behavior. Even in an FL system, however, drivers' private information might be leaked. In this paper, we propose combining differential privacy (DP) with federated learning for the fatigue detection application: artificial noise is added to the model parameters on the drivers' side before aggregation. This approach preserves the privacy of drivers' data while maintaining the convergence of the federated learning algorithm. The privacy level in the system is chosen to balance the noise scale against the model's accuracy. In addition, we evaluate the model's resistance to a model inversion attack, measuring the attack's effectiveness by the Mean Squared Error (MSE) between the reconstructed data point and the training data. Compared to the non-DP case, the proposed approach incurs a 6% accuracy loss while reducing the effectiveness of the attack, raising the MSE from 5.0 to 7.0; a balance between accuracy and noise scale is thus achieved.
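The paper itself does not include code; the sketch below is a minimal illustration of the mechanism the abstract describes: each client (driver) perturbs its locally trained parameter update with artificial noise before the server aggregates, and attack effectiveness is scored as the MSE between a reconstructed sample and the true training sample. Gaussian noise with norm clipping is assumed here as the DP mechanism, and all names and values (`clip_norm`, `noise_std`, `federated_round`, `attack_mse`) are illustrative choices, not the authors' implementation.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Bound the L2 norm of a client update so the noise scale is meaningful (assumed step)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def privatize(update, clip_norm, noise_std, rng):
    """Client side: clip the parameter update, then add Gaussian noise before upload."""
    clipped = clip_update(update, clip_norm)
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

def federated_round(client_updates, clip_norm=1.0, noise_std=0.1, seed=0):
    """Server side: plain FedAvg over updates already perturbed on the clients."""
    rng = np.random.default_rng(seed)
    noisy = [privatize(u, clip_norm, noise_std, rng) for u in client_updates]
    return np.mean(noisy, axis=0)

def attack_mse(reconstructed, original):
    """Model-inversion attack metric: higher MSE means a worse reconstruction."""
    return float(np.mean((np.asarray(reconstructed) - np.asarray(original)) ** 2))

# Toy round: three "drivers" each contribute a 4-parameter update.
updates = [np.array([0.20, -0.10, 0.05, 0.30]),
           np.array([0.25, -0.12, 0.02, 0.28]),
           np.array([0.18, -0.08, 0.07, 0.33])]
print(federated_round(updates, noise_std=0.1))
```

Raising `noise_std` strengthens privacy (in the paper's evaluation, the inversion attack's MSE rises from 5.0 to 7.0) at the cost of accuracy (roughly 6% in the reported results); tuning the privacy level is exactly the trade-off the abstract describes.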