Using Positional Tracking to Improve Abdominal Ultrasound Machine Learning Classification

Alistair Lawley, Rory Hampson, Kevin Worrall, Gordon Dobie
DOI: 10.1088/2632-2153/ad379d
Journal: Machine Learning: Science and Technology
Published: 2024-03-25

Abstract

Diagnostic abdominal ultrasound screening and monitoring protocols are based around gathering a set of standard cross-sectional images that ensure coverage of the relevant anatomical structures during the collection procedure. This allows clinicians to make diagnostic decisions with the best picture available from that modality. Currently, very little assistance is provided to sonographers to ensure adherence to collection protocols, and previous studies suggest that traditional image-only machine learning classification offers only limited support for this task; for example, it can be difficult to differentiate between multiple liver cross sections, or between those of the left and right kidney, from the image alone after collection. In this proof of concept, positional tracking information was added to the image input of a neural network to provide the additional context required to recognize six otherwise difficult-to-identify edge cases. Optical and sensor-based infrared (IR) tracking was used to track the position of an ultrasound probe during the collection of clinical cross sections on an abdominal phantom. Convolutional neural networks were then trained on image-only and image-plus-position data, and the classification accuracy results were compared. The addition of positional information significantly improved average classification accuracy on common abdominal cross sections, from ~90% for image-only input to 95% with optical IR position tracking and 93% with sensor-based IR tracking. While further work remains, adding low-cost positional tracking to machine learning ultrasound classification allows significantly increased accuracy in identifying important diagnostic cross sections, with the potential not only to validate adherence to protocol but also to provide navigation prompts that assist in user training and help ensure adherence when capturing cross sections in future.
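The fusion idea described above — feeding tracked probe position into the network alongside the image — can be sketched as a simple late-fusion head. The sketch below is illustrative only: the paper does not specify its architecture, and the feature dimensions, the 6-DoF pose representation, and the `fuse_and_classify` helper are assumptions for demonstration. Image features stand in for the output of a CNN backbone, and the probe pose vector is concatenated with them before a linear softmax classifier over the six hard-to-distinguish cross sections.

```python
import numpy as np

N_CLASSES = 6        # the six otherwise difficult-to-identify cross sections
IMG_FEAT_DIM = 128   # assumed CNN backbone feature size (illustrative)
POSE_DIM = 6         # assumed 6-DoF probe pose from IR tracking (x, y, z, roll, pitch, yaw)

rng = np.random.default_rng(0)

def fuse_and_classify(img_feat, pose, W, b):
    """Concatenate CNN image features with the tracked probe pose and
    apply a linear softmax head (stand-in for the network's final layers)."""
    x = np.concatenate([img_feat, pose])   # fused feature vector, (134,)
    logits = W @ x + b                     # one logit per cross section, (6,)
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

# Randomly initialised head, purely for illustration (untrained).
W = rng.normal(size=(N_CLASSES, IMG_FEAT_DIM + POSE_DIM))
b = np.zeros(N_CLASSES)

img_feat = rng.normal(size=IMG_FEAT_DIM)             # stand-in for CNN output
pose = np.array([0.12, -0.03, 0.25, 0.0, 0.4, 1.1])  # metres / radians (example values)

probs = fuse_and_classify(img_feat, pose, W, b)      # class probabilities over 6 sections
```

Concatenation before the classifier head is the simplest fusion strategy; in a trained model the pose vector would typically pass through its own small dense branch before being merged with the image features.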