CEUSegNet: A Cross-Modality Lesion Segmentation Network for Contrast-Enhanced Ultrasound

Zheling Meng, Yangyang Zhu, Xiao Fan, Jie Tian, F. Nie, Kun Wang
{"title":"CEUSegNet:用于对比增强超声的跨模态病变分割网络","authors":"Zheling Meng, Yangyang Zhu, Xiao Fan, Jie Tian, F. Nie, Kun Wang","doi":"10.1109/ISBI52829.2022.9761594","DOIUrl":null,"url":null,"abstract":"Contrast-enhanced ultrasound (CEUS) is an effective imaging tool to analyze spatial-temporal characteristics of lesions and diagnose or predict diseases. However, delineating lesions frame by frame is a time-consuming work, which brings challenges to analyzing CEUS videos with deep learning technology. In this paper, we proposed a novel U-net-like network with dual top-down branches and residual connections, named CEUSegNet. CEUSegNet takes US and CEUS part of a dual-amplitude CEUS image as inputs. Cross-modality Segmentation Attention (CSA) and Cross-modality Feature Fusion (CFF) are designed to fuse US and CEUS features on multiple scales. Through our method, lesion position can be determined exactly under the guidance of US and then the region of interest can be delineated in CEUS image. Results show CEUSegNet can achieve a comparable performance with clinicians on metastasis cervical lymph nodes and breast lesion dataset.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"170 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"CEUSegNet: A Cross-Modality Lesion Segmentation Network for Contrast-Enhanced Ultrasound\",\"authors\":\"Zheling Meng, Yangyang Zhu, Xiao Fan, Jie Tian, F. Nie, Kun Wang\",\"doi\":\"10.1109/ISBI52829.2022.9761594\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Contrast-enhanced ultrasound (CEUS) is an effective imaging tool to analyze spatial-temporal characteristics of lesions and diagnose or predict diseases. However, delineating lesions frame by frame is a time-consuming work, which brings challenges to analyzing CEUS videos with deep learning technology. In this paper, we proposed a novel U-net-like network with dual top-down branches and residual connections, named CEUSegNet. CEUSegNet takes US and CEUS part of a dual-amplitude CEUS image as inputs. Cross-modality Segmentation Attention (CSA) and Cross-modality Feature Fusion (CFF) are designed to fuse US and CEUS features on multiple scales. Through our method, lesion position can be determined exactly under the guidance of US and then the region of interest can be delineated in CEUS image. 
Results show CEUSegNet can achieve a comparable performance with clinicians on metastasis cervical lymph nodes and breast lesion dataset.\",\"PeriodicalId\":6827,\"journal\":{\"name\":\"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)\",\"volume\":\"170 1\",\"pages\":\"1-5\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISBI52829.2022.9761594\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI52829.2022.9761594","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Contrast-enhanced ultrasound (CEUS) is an effective imaging tool for analyzing the spatio-temporal characteristics of lesions and for diagnosing or predicting diseases. However, delineating lesions frame by frame is time-consuming, which makes it challenging to analyze CEUS videos with deep learning. In this paper, we propose CEUSegNet, a novel U-Net-like network with dual top-down branches and residual connections. CEUSegNet takes the US and CEUS parts of a dual-amplitude CEUS image as inputs. Cross-modality Segmentation Attention (CSA) and Cross-modality Feature Fusion (CFF) modules are designed to fuse US and CEUS features at multiple scales. With our method, the lesion position can be localized precisely under the guidance of US, and the region of interest can then be delineated in the CEUS image. Results show that CEUSegNet achieves performance comparable to clinicians on metastatic cervical lymph node and breast lesion datasets.
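
The abstract above describes a dual-branch, U-Net-like design in which same-scale US and CEUS features are fused before decoding. The paper's code is not reproduced here; the following PyTorch-style sketch only illustrates that general idea under stated assumptions. The module names (CrossModalityFusion, DualBranchSegNet), channel widths, and the simple attention-gated fusion are hypothetical placeholders and are not the authors' CSA/CFF implementation.

# Minimal sketch, assuming a dual-encoder U-Net with per-scale cross-modality
# fusion. All names, widths, and the fusion formulation are assumptions made
# for illustration; they do not reproduce the CEUSegNet modules from the paper.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm and ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class CrossModalityFusion(nn.Module):
    # Hypothetical fusion: the US branch gates the CEUS features with a learned
    # spatial attention map, then both are concatenated and projected back.
    def __init__(self, ch):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())
        self.proj = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, feat_us, feat_ceus):
        gate = self.attn(feat_us)                       # spatial attention from US
        fused = torch.cat([feat_us, feat_ceus * gate], dim=1)
        return self.proj(fused)


class DualBranchSegNet(nn.Module):
    # U-Net-like network with two encoders (US and CEUS) whose features are
    # fused at every scale and passed as skip connections to a shared decoder.
    def __init__(self, widths=(32, 64, 128, 256)):
        super().__init__()
        self.enc_us, self.enc_ceus, self.fuse = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_ch = 1
        for w in widths:
            self.enc_us.append(conv_block(in_ch, w))
            self.enc_ceus.append(conv_block(in_ch, w))
            self.fuse.append(CrossModalityFusion(w))
            in_ch = w
        self.pool = nn.MaxPool2d(2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        rev = list(widths[::-1])
        for hi, lo in zip(rev[:-1], rev[1:]):
            self.up.append(nn.ConvTranspose2d(hi, lo, 2, stride=2))
            self.dec.append(conv_block(2 * lo, lo))
        self.head = nn.Conv2d(widths[0], 1, 1)

    def forward(self, us, ceus):
        skips, x_us, x_ceus = [], us, ceus
        for i, (e_us, e_ceus, fuse) in enumerate(zip(self.enc_us, self.enc_ceus, self.fuse)):
            x_us, x_ceus = e_us(x_us), e_ceus(x_ceus)
            skips.append(fuse(x_us, x_ceus))            # multi-scale fused features
            if i < len(self.enc_us) - 1:
                x_us, x_ceus = self.pool(x_us), self.pool(x_ceus)
        x = skips.pop()
        for up, dec in zip(self.up, self.dec):
            x = up(x)
            x = dec(torch.cat([x, skips.pop()], dim=1))
        return self.head(x)                             # lesion segmentation logits


if __name__ == "__main__":
    net = DualBranchSegNet()
    us = torch.randn(1, 1, 128, 128)     # US half of a dual-amplitude frame
    ceus = torch.randn(1, 1, 128, 128)   # CEUS half of the same frame
    print(net(us, ceus).shape)           # -> torch.Size([1, 1, 128, 128])

In this sketch the US branch only supplies an attention gate, reflecting the abstract's statement that the lesion is localized under US guidance and then delineated in the CEUS image; the actual CSA and CFF modules in the paper may differ substantially.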