A deep dive into enhancing sharing of naturalistic driving data through face deidentification

Surendrabikram Thapa, Abhijit Sarkar
{"title":"A deep dive into enhancing sharing of naturalistic driving data through face deidentification","authors":"Surendrabikram Thapa, Abhijit Sarkar","doi":"10.1007/s00371-024-03552-7","DOIUrl":null,"url":null,"abstract":"<p>Human factors research in transportation relies on naturalistic driving studies (NDS) which collect real-world data from drivers on actual roads. NDS data offer valuable insights into driving behavior, styles, habits, and safety-critical events. However, these data often contain personally identifiable information (PII), such as driver face videos, which cannot be publicly shared due to privacy concerns. To address this, our paper introduces a comprehensive framework for deidentifying drivers’ face videos, that can facilitate the wide sharing of driver face videos while protecting PII. Leveraging recent advancements in generative adversarial networks (GANs), we explore the efficacy of different face swapping algorithms in preserving essential human factors attributes while anonymizing participants’ identities. Most face swapping algorithms are tested in restricted lighting conditions and indoor settings, there is no known study that tested them in adverse and natural situations. We conducted extensive experiments using large-scale outdoor NDS data, evaluating the quantification of errors associated with head, mouth, and eye movements, along with other attributes important for human factors research. Additionally, we performed qualitative assessments of these methods through human evaluators providing valuable insights into the quality and fidelity of the deidentified videos. We propose the utilization of synthetic faces as substitutes for real faces to enhance generalization. Additionally, we created practical guidelines for video deidentification, emphasizing error threshold creation, spot-checking for abrupt metric changes, and mitigation strategies for reidentification risks. 
Our findings underscore nuanced challenges in balancing data utility and privacy, offering valuable insights into enhancing face video deidentification techniques in NDS scenarios.\n</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"30 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03552-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Human factors research in transportation relies on naturalistic driving studies (NDS), which collect real-world data from drivers on actual roads. NDS data offer valuable insights into driving behavior, styles, habits, and safety-critical events. However, these data often contain personally identifiable information (PII), such as driver face videos, which cannot be publicly shared due to privacy concerns. To address this, our paper introduces a comprehensive framework for deidentifying drivers’ face videos that facilitates their wide sharing while protecting PII. Leveraging recent advancements in generative adversarial networks (GANs), we explore the efficacy of different face swapping algorithms in preserving essential human factors attributes while anonymizing participants’ identities. Most face swapping algorithms have been tested only under restricted lighting conditions and in indoor settings; to our knowledge, no study has evaluated them in adverse, naturalistic situations. We conducted extensive experiments using large-scale outdoor NDS data, quantifying the errors associated with head, mouth, and eye movements, along with other attributes important for human factors research. Additionally, we performed qualitative assessments of these methods through human evaluators, providing valuable insights into the quality and fidelity of the deidentified videos. We propose using synthetic faces as substitutes for real faces to enhance generalization. We also created practical guidelines for video deidentification, emphasizing error threshold creation, spot-checking for abrupt metric changes, and mitigation strategies for reidentification risks. Our findings underscore nuanced challenges in balancing data utility and privacy, offering valuable insights into enhancing face video deidentification techniques in NDS scenarios.
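The guideline on spot-checking for abrupt metric changes can be sketched as follows. This is an illustrative example only, not the authors' implementation: it assumes a per-frame fidelity metric (e.g., head-pose error in degrees between the original and deidentified video) and a hypothetical jump threshold, and flags frames where the metric changes abruptly so a human reviewer can inspect them.

```python
# Illustrative sketch (not from the paper): flag frames where a per-frame
# deidentification fidelity metric jumps abruptly, as a QA spot-check.
def spot_check(metric, threshold=5.0):
    """Return indices of frames whose metric differs from the previous
    frame's value by more than `threshold` (a hypothetical cutoff)."""
    flagged = []
    for i in range(1, len(metric)):
        if abs(metric[i] - metric[i - 1]) > threshold:
            flagged.append(i)
    return flagged

# Example: a mostly stable head-pose-error trace with one abrupt spike.
errors = [1.2, 1.5, 1.1, 1.4, 9.8, 1.3]
print(spot_check(errors))  # → [4, 5]: the spike and the return to baseline
```

In practice, the threshold would be derived from the error-threshold creation step the guidelines describe (e.g., from the metric's distribution over validated clips) rather than fixed by hand.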

