Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Impact Factor: 3.0 · CAS Region 2 (Engineering & Technology) · JCR Q1 (Acoustics)
Mardava R Gubbi, Muyinatu A Lediju Bell
DOI: 10.1109/TUFFC.2025.3562313
Journal: IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Published: 2025-04-22 (Journal Article)
Citations: 0

Abstract

Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be considered acoustic point sources, enabling these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this paper, we developed a novel deep learning-based three-dimensional (3D) photoacoustic point source localization system using an object detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames. We then used this theory to develop a novel deep learning instance segmentation-based 3D point source localization system. When tested with 4,000 simulated, 993 phantom, and 1,983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46±1.11 mm, 1.58±1.30 mm, and 1.55±0.86 mm, respectively. In addition, the instance segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22±26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.
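The theoretical relationship the abstract refers to builds on the standard photoacoustic time-of-flight model: a point source produces a hyperbolic wavefront in raw channel data, and the shape of that hyperbola jointly encodes source position and sound speed. The sketch below illustrates this relationship only; the array geometry and numerical values are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

def arrival_times(element_x, src_x, src_z, c):
    """One-way arrival time (s) from a point source at (src_x, src_z)
    to each array element at lateral position element_x, for sound speed c.
    This is the standard photoacoustic time-of-flight relation:
    t(x_e) = sqrt((x_e - src_x)^2 + src_z^2) / c."""
    return np.sqrt((element_x - src_x) ** 2 + src_z ** 2) / c

# Hypothetical 128-element linear array with 0.3 mm pitch (assumed values).
element_x = np.arange(128) * 0.3e-3

# Same source location, two candidate sound speeds.
t_fast = arrival_times(element_x, src_x=19.05e-3, src_z=20e-3, c=1540.0)
t_slow = arrival_times(element_x, src_x=19.05e-3, src_z=20e-3, c=1480.0)

# A lower sound speed stretches the entire hyperbola in time, so waveform
# shape carries sound-speed information in addition to source location.
assert np.all(t_slow > t_fast)
```

Because the hyperbola's apex position, apex delay, and curvature depend on lateral position, depth, and sound speed together, a network segmenting these wavefronts in channel data can in principle recover both the 3D source location and the sound speed, consistent with the joint estimation reported above.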

Source journal metrics: CiteScore 7.70 · Self-citation rate 16.70% · Articles per year 583 · Review time 4.5 months
About the journal: IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control includes the theory, technology, materials, and applications relating to: (1) the generation, transmission, and detection of ultrasonic waves and related phenomena; (2) medical ultrasound, including hyperthermia, bioeffects, tissue characterization and imaging; (3) ferroelectric, piezoelectric, and piezomagnetic materials, including crystals, polycrystalline solids, films, polymers, and composites; (4) frequency control, timing and time distribution, including crystal oscillators and other means of classical frequency control, and atomic, molecular and laser frequency control standards. Areas of interest range from fundamental studies to the design and/or applications of devices and systems.