Integrating Eye Tracking With Grouped Fusion Networks for Semantic Segmentation on Mammogram Images

Jiaming Xie;Qing Zhang;Zhiming Cui;Chong Ma;Yan Zhou;Wenping Wang;Dinggang Shen
IEEE Transactions on Medical Imaging, vol. 44, no. 2, pp. 868–879
DOI: 10.1109/TMI.2024.3468404
Published online: 27 September 2024
https://ieeexplore.ieee.org/document/10697394/

Abstract

Medical image segmentation has seen great progress in recent years, largely due to the development of deep neural networks. However, unlike in computer vision, high-quality clinical data is relatively scarce, and the annotation process is often a burden for clinicians. As a result, the scarcity of medical data limits the performance of existing medical image segmentation models. In this paper, we propose a novel framework that integrates eye tracking information from experienced radiologists during the screening process to improve the performance of deep neural networks with limited data. Our approach, a grouped hierarchical network, guides the network to learn from its faults by using gaze information as weak supervision. We demonstrate the effectiveness of our framework on mammogram images, particularly for handling segmentation classes with large scale differences. We evaluate the impact of gaze information on medical image segmentation tasks and show that our method achieves better segmentation performance compared to state-of-the-art models. A robustness study is conducted to investigate the influence of distraction or inaccuracies in gaze collection. We also develop a convenient system for collecting gaze data without interrupting the normal clinical workflow. Our work offers novel insights into the potential benefits of integrating gaze information into medical image segmentation tasks.
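The abstract describes using gaze information as weak supervision so the network "learns from its faults." The paper's grouped hierarchical architecture is not detailed here, but the core idea of weighting a segmentation loss by a radiologist's gaze heatmap can be sketched in a minimal, illustrative form. Everything below (the function name, the `alpha` parameter, the flat-list representation) is an assumption for illustration, not the authors' actual implementation.

```python
import math

def gaze_weighted_ce(probs, labels, gaze, alpha=1.0):
    """Per-pixel binary cross-entropy weighted by a normalized gaze heatmap.

    probs:  flat list of predicted foreground probabilities in (0, 1)
    labels: flat list of 0/1 ground-truth labels
    gaze:   flat list of non-negative gaze fixation densities
    alpha:  strength of the gaze weighting (0 disables it)

    Pixels the radiologist fixated on receive weight 1 + alpha * g,
    so prediction errors in attended regions are penalized more --
    a simple form of weak supervision from eye-tracking data.
    """
    g_max = max(gaze) or 1.0  # avoid division by zero on an empty heatmap
    total, weight_sum = 0.0, 0.0
    for p, y, g in zip(probs, labels, gaze):
        w = 1.0 + alpha * (g / g_max)  # gaze-derived per-pixel weight
        ce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * ce
        weight_sum += w
    return total / weight_sum  # weighted mean loss
```

Setting `alpha=0` recovers the plain mean cross-entropy, so the gaze term acts as a tunable reweighting rather than a hard constraint — consistent with the abstract's framing of gaze as weak (not pixel-exact) supervision.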