Yuanlu Ni, Yang Cong, Chengqian Zhao, Jinhua Yu, Yin Wang, Guohui Zhou, Mengjun Shen
{"title":"基于多增强视图的主动学习,用于肺部超声图像中多种模式的分类。","authors":"Yuanlu Ni , Yang Cong , Chengqian Zhao , Jinhua Yu , Yin Wang , Guohui Zhou , Mengjun Shen","doi":"10.1016/j.compmedimag.2024.102454","DOIUrl":null,"url":null,"abstract":"<div><div>There are several main patterns in lung ultrasound (LUS) images, including A-lines, B-lines, consolidation and pleural effusion. LUS images of healthy lungs typically only exhibit A-lines, while other patterns may emerge or coexist in LUS images associated with different lung diseases. The accurate categorization of these primary patterns is pivotal for effective lung disease screening. However, two challenges complicate the classification task: the first is the inherent blurring of feature differences between main patterns due to ultrasound imaging properties; and the second is the potential coexistence of multiple patterns in a single case, with only the most dominant pattern being clinically annotated. To address these challenges, we propose the active learning based on multi-enhanced views (MEVAL) method to achieve more precise pattern classification in LUS. To accentuate feature differences between multiple patterns, we introduce a feature enhancement module by applying vertical linear fitting and k-means clustering. The multi-enhanced views are then employed in parallel with the original images, thus enhancing MEVAL’s awareness of feature differences between multiple patterns. To tackle the patterns coexistence issue, we propose an active learning strategy based on confidence sets and misclassified sets. This strategy enables the network to simultaneously recognize multiple patterns by selectively labeling of a small number of images. Our dataset comprises 5075 LUS images, with approximately 4% exhibiting multiple patterns. Experimental results showcase the effectiveness of the proposed method in the classification task, with accuracy of 98.72%, AUC of 0.9989, sensitivity of 98.76%, and specificity of 98.16%, which outperforms than the state-of-the-art deep learning-based methods. A series of comprehensive ablation studies suggest the effectiveness of each proposed component and show great potential in clinical application.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102454"},"PeriodicalIF":5.4000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Active learning based on multi-enhanced views for classification of multiple patterns in lung ultrasound images\",\"authors\":\"Yuanlu Ni , Yang Cong , Chengqian Zhao , Jinhua Yu , Yin Wang , Guohui Zhou , Mengjun Shen\",\"doi\":\"10.1016/j.compmedimag.2024.102454\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>There are several main patterns in lung ultrasound (LUS) images, including A-lines, B-lines, consolidation and pleural effusion. LUS images of healthy lungs typically only exhibit A-lines, while other patterns may emerge or coexist in LUS images associated with different lung diseases. The accurate categorization of these primary patterns is pivotal for effective lung disease screening. 
However, two challenges complicate the classification task: the first is the inherent blurring of feature differences between main patterns due to ultrasound imaging properties; and the second is the potential coexistence of multiple patterns in a single case, with only the most dominant pattern being clinically annotated. To address these challenges, we propose the active learning based on multi-enhanced views (MEVAL) method to achieve more precise pattern classification in LUS. To accentuate feature differences between multiple patterns, we introduce a feature enhancement module by applying vertical linear fitting and k-means clustering. The multi-enhanced views are then employed in parallel with the original images, thus enhancing MEVAL’s awareness of feature differences between multiple patterns. To tackle the patterns coexistence issue, we propose an active learning strategy based on confidence sets and misclassified sets. This strategy enables the network to simultaneously recognize multiple patterns by selectively labeling of a small number of images. Our dataset comprises 5075 LUS images, with approximately 4% exhibiting multiple patterns. Experimental results showcase the effectiveness of the proposed method in the classification task, with accuracy of 98.72%, AUC of 0.9989, sensitivity of 98.76%, and specificity of 98.16%, which outperforms than the state-of-the-art deep learning-based methods. A series of comprehensive ablation studies suggest the effectiveness of each proposed component and show great potential in clinical application.</div></div>\",\"PeriodicalId\":50631,\"journal\":{\"name\":\"Computerized Medical Imaging and Graphics\",\"volume\":\"118 \",\"pages\":\"Article 102454\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2024-10-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computerized Medical Imaging and Graphics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0895611124001319\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611124001319","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Active learning based on multi-enhanced views for classification of multiple patterns in lung ultrasound images
There are several main patterns in lung ultrasound (LUS) images, including A-lines, B-lines, consolidation, and pleural effusion. LUS images of healthy lungs typically exhibit only A-lines, while other patterns may emerge or coexist in LUS images associated with different lung diseases. Accurate categorization of these primary patterns is pivotal for effective lung disease screening. However, two challenges complicate the classification task: first, feature differences between the main patterns are inherently blurred by the properties of ultrasound imaging; second, multiple patterns may coexist in a single case, with only the most dominant pattern being clinically annotated. To address these challenges, we propose the active learning based on multi-enhanced views (MEVAL) method to achieve more precise pattern classification in LUS. To accentuate feature differences between multiple patterns, we introduce a feature enhancement module that applies vertical linear fitting and k-means clustering. The multi-enhanced views are then employed in parallel with the original images, enhancing MEVAL's awareness of feature differences between multiple patterns. To tackle the pattern coexistence issue, we propose an active learning strategy based on confidence sets and misclassified sets. This strategy enables the network to recognize multiple patterns simultaneously by selectively labeling a small number of images. Our dataset comprises 5075 LUS images, of which approximately 4% exhibit multiple patterns. Experimental results demonstrate the effectiveness of the proposed method in the classification task, with an accuracy of 98.72%, AUC of 0.9989, sensitivity of 98.76%, and specificity of 98.16%, outperforming state-of-the-art deep learning-based methods. A series of comprehensive ablation studies confirms the effectiveness of each proposed component and shows great potential for clinical application.
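The abstract describes the feature enhancement module only at a high level: vertical linear fitting plus k-means clustering, with the enhanced views fed to the network in parallel with the original image. The sketch below is a minimal, assumption-laden illustration of that idea in Python; the cluster count, the brightest-cluster selection, and the per-column first-order fit are hypothetical choices for illustration, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_view(image, n_clusters=3):
    # Cluster pixel intensities and keep the brightest cluster as a binary enhanced view.
    pixels = image.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(image.shape)
    brightest = int(np.argmax(km.cluster_centers_.ravel()))
    return (labels == brightest).astype(np.float32)

def vertical_fit_view(image):
    # Fit a first-order polynomial to each column's intensity profile; the fitted
    # trend emphasizes vertically extended bright bands (e.g. B-line-like artifacts)
    # over isolated speckle.
    rows = np.arange(image.shape[0])
    fitted = np.empty(image.shape, dtype=np.float32)
    for c in range(image.shape[1]):
        slope, intercept = np.polyfit(rows, image[:, c].astype(np.float32), deg=1)
        fitted[:, c] = slope * rows + intercept
    return (fitted - fitted.min()) / (np.ptp(fitted) + 1e-8)

def multi_enhanced_views(image):
    # Stack the original frame with the two enhanced views as input channels,
    # mirroring the "enhanced views in parallel with the original image" idea.
    original = image.astype(np.float32) / 255.0
    return np.stack([original, kmeans_view(image), vertical_fit_view(image)], axis=0)

if __name__ == "__main__":
    frame = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in for a LUS frame
    views = multi_enhanced_views(frame)
    print(views.shape)  # (3, 256, 256): original + k-means view + vertical-fit view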
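The active learning strategy is likewise only named in the abstract (confidence sets and misclassified sets used to pick a small number of images for labeling). The following sketch shows one plausible way to build such sets from softmax outputs; the thresholds and the margin-based rule for flagging likely pattern coexistence are assumptions, not the paper's definition of the two sets.

import numpy as np

def select_for_annotation(probs, conf_threshold=0.95, margin_threshold=0.10):
    # probs: (n_samples, n_classes) softmax outputs on the unlabeled pool.
    # Confidence set: samples the model is already sure about.
    # Query set: samples whose two top classes compete, i.e. cases that may
    # contain coexisting patterns and are worth sending for manual annotation.
    top2 = np.sort(probs, axis=1)[:, -2:]
    top_prob = top2[:, 1]
    margin = top2[:, 1] - top2[:, 0]
    confidence_set = np.where(top_prob >= conf_threshold)[0]
    query_set = np.where(margin < margin_threshold)[0]
    return confidence_set, query_set

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(100, 4))  # four patterns: A-lines, B-lines, consolidation, effusion
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    conf, query = select_for_annotation(probs)
    print(len(conf), "confident samples;", len(query), "sent for annotation")

In this kind of loop, only the queried subset is labeled by clinicians in each round, which is consistent with the abstract's claim that the network learns to recognize multiple patterns from a small number of selectively labeled images.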
Journal introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.