Manual and automated facial de-identification techniques for patient imaging with preservation of sinonasal anatomy.

IF 2.3 · CAS Region 3 (Medicine) · JCR Q3 · ENGINEERING, BIOMEDICAL
Andy S Ding, Nimesh V Nagururu, Stefanie Seo, George S Liu, Manish Sahu, Russell H Taylor, Francis X Creighton
Citations: 0

Abstract

Purpose: Facial recognition of reconstructed computed tomography (CT) scans poses patient privacy risks, necessitating reliable facial de-identification methods. Current methods obscure sinuses, turbinates, and other anatomy relevant for otolaryngology. We present a facial de-identification method that preserves these structures, along with two automated workflows for large-volume datasets.

Methods: A total of 20 adult head CTs from the New Mexico Decedent Image Database were included. Using 3D Slicer, a seed-growing technique was performed to label the skin around the face. This label was dilated bidirectionally to form a 6-mm mask that obscures facial features. This technique was then automated using: (1) segmentation propagation that deforms an atlas head CT and corresponding mask to match other scans and (2) a deep learning model (nnU-Net). Accuracy of these methods against manually generated masks was evaluated with Dice scores and modified Hausdorff distances (mHDs).
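The manual workflow described above (label the outer skin, then dilate it in both directions into a ~6 mm shell) can be sketched with simple image morphology. The following is a minimal sketch assuming an isotropic CT volume in Hounsfield units; the threshold value, function names, and toy data are illustrative assumptions, not the authors' exact 3D Slicer seed-growing procedure:

```python
import numpy as np
from scipy import ndimage

def facial_mask(ct_hu, skin_threshold=-200, dilation_mm=6, spacing_mm=1.0):
    """Sketch of the masking idea: find the outer skin surface of a head CT
    (here by simple thresholding rather than Slicer's seed-growing tool) and
    dilate it in both directions to form a ~6 mm obscuring shell."""
    body = ct_hu > skin_threshold                    # voxels denser than air
    body = ndimage.binary_fill_holes(body)           # solid head region
    surface = body & ~ndimage.binary_erosion(body)   # 1-voxel skin shell
    radius = int(round(dilation_mm / spacing_mm))
    struct = ndimage.generate_binary_structure(3, 1) # 6-connected neighborhood
    # Iterated 6-connected dilation approximates a 6 mm ball (L1, not Euclidean)
    return ndimage.binary_dilation(surface, struct, iterations=radius)

# Toy example: a synthetic "head" (a soft-tissue ball in air at -1000 HU)
vol = np.full((40, 40, 40), -1000.0)
zz, yy, xx = np.ogrid[:40, :40, :40]
vol[(zz - 20)**2 + (yy - 20)**2 + (xx - 20)**2 <= 10**2] = 40.0
m = facial_mask(vol)
```

Because the mask is only a shell around the skin, deep structures such as the sinuses and turbinates fall outside it and are untouched, which is the property that distinguishes this approach from whole-face defacing tools.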

Results: Manual de-identification resulted in facial match rates of 45.0% (zero-fill), 37.5% (deletion), and 32.5% (re-face). Dice scores for automated face masks using segmentation propagation and nnU-Net were 0.667 ± 0.109 and 0.860 ± 0.029, respectively, with mHDs of 4.31 ± 3.04 mm and 1.55 ± 0.71 mm. Match rates after de-identification using segmentation propagation (zero-fill: 42.5%; deletion: 40.0%; re-face: 35.0%) and nnU-Net (zero-fill: 42.5%; deletion: 35.0%; re-face: 30.0%) were comparable to manual masks.
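The two evaluation metrics above are standard and easy to reproduce. A minimal sketch assuming binary NumPy masks on an isotropic grid; the modified Hausdorff distance here follows the Dubuisson–Jain definition (the larger of the two mean directed distances), computed over all mask voxels via distance transforms, which may differ in detail from the paper's implementation:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def modified_hausdorff(a, b, spacing=1.0):
    """Modified Hausdorff distance: max of the two mean directed distances."""
    # Distance from every voxel to the nearest foreground voxel of each mask
    dt_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    d_ab = dt_b[a].mean()   # mean distance from a's voxels to b
    d_ba = dt_a[b].mean()   # mean distance from b's voxels to a
    return max(d_ab, d_ba)

# Toy check: two 10x10x10 boxes offset by one voxel along z
a = np.zeros((20, 20, 20), dtype=bool); a[5:15, 5:15, 5:15] = True
b = np.zeros_like(a);                   b[6:16, 5:15, 5:15] = True
```

On this toy pair, the Dice score is 0.9 and the modified Hausdorff distance is 0.1 voxel units, illustrating how a small rigid offset degrades both metrics only slightly.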

Conclusion: We present a simple facial de-identification approach for head CTs, as well as automated methods for large-scale implementation. These techniques show promise for preventing patient identification while preserving underlying sinonasal anatomy, but further studies using live patient photographs are necessary to fully validate their effectiveness.

Source journal
International Journal of Computer Assisted Radiology and Surgery (ENGINEERING, BIOMEDICAL; RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
Journal description: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.