Masked Modeling-Based Ultrasound Image Classification via Self-Supervised Learning

IF 2.7 · Q3 · Engineering, Biomedical
Kele Xu;Kang You;Boqing Zhu;Ming Feng;Dawei Feng;Cheng Yang
DOI: 10.1109/OJEMB.2024.3374966
Journal: IEEE Open Journal of Engineering in Medicine and Biology, vol. 5, pp. 226-237
Published: 2024-03-12 (Journal Article)
Full text: https://ieeexplore.ieee.org/document/10463101/
Citations: 0

Abstract

Recently, deep learning-based methods have emerged as the preferred approach for ultrasound data analysis. However, these methods often require large-scale annotated datasets for training deep models, which are not readily available in practical scenarios. Additionally, the presence of speckle noise and other imaging artifacts can introduce numerous hard examples for ultrasound data classification. In this paper, drawing inspiration from self-supervised learning techniques, we present a pre-training method based on mask modeling specifically designed for ultrasound data. Our study investigates three different mask modeling strategies: random masking, vertical masking, and horizontal masking. By employing these strategies, our pre-training approach aims to predict the masked portion of the ultrasound images. Notably, our method does not rely on externally labeled data, allowing us to extract representative features without the need for human annotation. Consequently, we can leverage unlabeled datasets for pre-training. Furthermore, to address the challenges posed by hard samples in ultrasound data, we propose a novel hard sample mining strategy. To evaluate the effectiveness of our proposed method, we conduct experiments on two datasets. The experimental results demonstrate that our approach outperforms other state-of-the-art methods in ultrasound image classification. This indicates the superiority of our pre-training method and its ability to extract discriminative features from ultrasound data, even in the presence of hard examples.
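The abstract names three masking strategies (random, vertical, and horizontal) but gives no implementation details. As a rough illustration only, a minimal NumPy sketch of how such patch-level masks could be generated might look like the following; the function name, the patch-grid framing, and the rounding of the mask ratio are assumptions, not the authors' code:

```python
import numpy as np

def make_mask(grid_h, grid_w, ratio, strategy="random", rng=None):
    """Return a boolean (grid_h, grid_w) patch mask; True = masked patch.

    strategy: "random" masks patches uniformly at random,
    "vertical" masks whole columns, "horizontal" masks whole rows.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    mask = np.zeros((grid_h, grid_w), dtype=bool)
    if strategy == "random":
        n = int(round(ratio * grid_h * grid_w))
        idx = rng.choice(grid_h * grid_w, size=n, replace=False)
        mask.flat[idx] = True
    elif strategy == "vertical":
        n = int(round(ratio * grid_w))
        cols = rng.choice(grid_w, size=n, replace=False)
        mask[:, cols] = True
    elif strategy == "horizontal":
        n = int(round(ratio * grid_h))
        rows = rng.choice(grid_h, size=n, replace=False)
        mask[rows, :] = True
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return mask
```

During pre-training, the masked patches would be hidden from the encoder and the model trained to reconstruct them; vertical and horizontal masking plausibly reflect the line-by-line acquisition geometry of ultrasound, though the paper's exact configuration is not given in the abstract.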
Source journal metrics:
CiteScore: 9.50
Self-citation rate: 3.40%
Articles published per year: 20
Review time: 10 weeks
Journal description: The IEEE Open Journal of Engineering in Medicine and Biology (IEEE OJEMB) is dedicated to serving the community of innovators in medicine, technology, and the sciences, with the core goal of advancing the highest-quality interdisciplinary research between these disciplines. The journal firmly believes that the future of medicine depends on close collaboration between biology and technology, and that fostering interaction between these fields is an important way to advance key discoveries that can improve clinical care. IEEE OJEMB is a gold open access journal in which the authors retain the copyright to their papers and readers have free access to the full text and PDFs on the IEEE Xplore® Digital Library. However, authors are required to pay an article processing fee at the time their paper is accepted for publication, to cover the cost of publication.