Masked Modeling-Based Ultrasound Image Classification via Self-Supervised Learning

Kele Xu; Kang You; Boqing Zhu; Ming Feng; Dawei Feng; Cheng Yang

IEEE Open Journal of Engineering in Medicine and Biology, vol. 5, pp. 226-237, 2024. Published 2024-03-12. DOI: 10.1109/OJEMB.2024.3374966. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10463101
Recently, deep learning-based methods have emerged as the preferred approach for ultrasound data analysis. However, these methods often require large-scale annotated datasets for training deep models, which are not readily available in practical scenarios. Additionally, the presence of speckle noise and other imaging artifacts can introduce numerous hard examples for ultrasound data classification. In this paper, drawing inspiration from self-supervised learning techniques, we present a pre-training method based on mask modeling specifically designed for ultrasound data. Our study investigates three different mask modeling strategies: random masking, vertical masking, and horizontal masking. By employing these strategies, our pre-training approach aims to predict the masked portion of the ultrasound images. Notably, our method does not rely on externally labeled data, allowing us to extract representative features without the need for human annotation. Consequently, we can leverage unlabeled datasets for pre-training. Furthermore, to address the challenges posed by hard samples in ultrasound data, we propose a novel hard sample mining strategy. To evaluate the effectiveness of our proposed method, we conduct experiments on two datasets. The experimental results demonstrate that our approach outperforms other state-of-the-art methods in ultrasound image classification. This indicates the superiority of our pre-training method and its ability to extract discriminative features from ultrasound data, even in the presence of hard examples.
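The abstract does not give implementation details, but as a rough illustration, the sketch below shows how the three masking strategies (random, vertical, and horizontal) could be generated over a patch grid and applied to an image. The 16-pixel patch size, 75% mask ratio, 224x224 input, and all function names are illustrative assumptions, not taken from the paper; the reconstruction objective and the hard sample mining strategy are not shown.

```python
import numpy as np

def random_mask(grid_h, grid_w, mask_ratio=0.75, rng=None):
    """Mask a random subset of patches in a (grid_h x grid_w) patch grid."""
    rng = rng or np.random.default_rng()
    n = grid_h * grid_w
    n_masked = int(round(mask_ratio * n))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=n_masked, replace=False)] = True
    return mask.reshape(grid_h, grid_w)

def vertical_mask(grid_h, grid_w, mask_ratio=0.75, rng=None):
    """Mask whole columns of patches (vertical stripes)."""
    rng = rng or np.random.default_rng()
    cols = rng.choice(grid_w, size=int(round(mask_ratio * grid_w)), replace=False)
    mask = np.zeros((grid_h, grid_w), dtype=bool)
    mask[:, cols] = True
    return mask

def horizontal_mask(grid_h, grid_w, mask_ratio=0.75, rng=None):
    """Mask whole rows of patches (horizontal stripes)."""
    rng = rng or np.random.default_rng()
    rows = rng.choice(grid_h, size=int(round(mask_ratio * grid_h)), replace=False)
    mask = np.zeros((grid_h, grid_w), dtype=bool)
    mask[rows, :] = True
    return mask

def apply_patch_mask(image, mask, patch_size=16):
    """Zero out the masked patches of a (H, W) grayscale image."""
    masked = image.copy()
    for i, j in zip(*np.nonzero(mask)):
        masked[i * patch_size:(i + 1) * patch_size,
               j * patch_size:(j + 1) * patch_size] = 0
    return masked

if __name__ == "__main__":
    img = np.random.rand(224, 224).astype(np.float32)  # stand-in for an ultrasound frame
    grid = 224 // 16
    for name, fn in [("random", random_mask),
                     ("vertical", vertical_mask),
                     ("horizontal", horizontal_mask)]:
        m = fn(grid, grid, mask_ratio=0.75)
        out = apply_patch_mask(img, m)
        print(f"{name}: masked {int(m.sum())} of {m.size} patches")
```

In a full masked-modeling pipeline of the kind the abstract describes, the visible patches would be fed to an encoder and the network would be trained to reconstruct the hidden regions; the resulting encoder is then fine-tuned for the downstream ultrasound classification task.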
About the journal:
The IEEE Open Journal of Engineering in Medicine and Biology (IEEE OJEMB) is dedicated to serving the community of innovators in medicine, technology, and the sciences, with the core goal of advancing the highest-quality interdisciplinary research between these disciplines. The journal firmly believes that the future of medicine depends on close collaboration between biology and technology, and that fostering interaction between these fields is an important way to advance key discoveries that can improve clinical care. IEEE OJEMB is a gold open access journal in which the authors retain the copyright to their papers and readers have free access to the full text and PDFs on the IEEE Xplore® Digital Library. However, authors are required to pay an article processing fee at the time their paper is accepted for publication, to cover the cost of publication.