Journal of Medical Imaging: Latest Articles

LED-based, real-time, hyperspectral imaging device.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.3.035002
Naeeme Modir, Maysam Shahedi, James Dormer, Ling Ma, Baowei Fei
Purpose: This study demonstrates the feasibility of using an LED array for hyperspectral imaging (HSI). The prototype validates the concept and provides insights into the design of future HSI applications. Our goal is to design, develop, and test a real-time, LED-based HSI prototype as a proof-of-principle device for in situ hyperspectral imaging using LEDs.
Approach: A prototype was designed based on a multiwavelength LED array and a monochrome camera and was tested to investigate the properties of LED-based HSI. The LED array consisted of 18 LEDs at 18 different wavelengths from 405 nm to 910 nm. The performance of the imaging system was evaluated on different normal and cancerous ex vivo tissues. The impact of imaging conditions on HSI quality was investigated. The LED-based HSI device was compared with a reference hyperspectral camera system.
Results: The hyperspectral signatures of different imaging targets acquired with our prototype HSI device are comparable to the data obtained using the reference HSI system.
Conclusions: The feasibility of employing a spectral LED array as the illumination source for high-speed, high-quality HSI has been demonstrated. The use of LEDs for HSI can open the door to numerous applications in endoscopic, laparoscopic, and handheld HSI devices.
Journal of Medical Imaging, 12(3): 035002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162177/pdf/
Citations: 0
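The acquisition principle is simple enough to sketch: illuminate the scene with one narrowband LED at a time, capture a monochrome frame per band, and stack the calibrated frames into a hypercube. The sketch below is illustrative only; `set_led`, `grab_frame`, and the evenly spaced wavelengths are hypothetical stand-ins, not the authors' hardware API.

```python
import numpy as np

# 18 bands spanning the prototype's range; the exact LED wavelengths are not
# listed in the abstract, so evenly spaced values stand in here.
WAVELENGTHS_NM = np.linspace(405, 910, 18)

def acquire_hypercube(set_led, grab_frame, white_ref, dark_ref):
    """Capture one monochrome frame per LED band and flat-field normalize."""
    bands = []
    for i, _wl in enumerate(WAVELENGTHS_NM):
        set_led(i, on=True)                      # illuminate one narrowband LED
        frame = grab_frame().astype(np.float32)  # monochrome camera exposure
        set_led(i, on=False)
        # Per-band reflectance calibration against white/dark references.
        bands.append((frame - dark_ref[i]) / (white_ref[i] - dark_ref[i] + 1e-6))
    return np.stack(bands, axis=-1)              # (H, W, 18) hypercube
```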
Mpox lesion counting with semantic and instance segmentation methods.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-06-19 DOI: 10.1117/1.JMI.12.3.034506
Bohan Jiang, Andrew J McNeil, Yihao Liu, David W House, Placide Mbala-Kingebeni, Olivier Tshiani Mbaya, Tyra Silaphet, Lori E Dodd, Edward W Cowen, Veronique Nussenblatt, Tyler Bonnett, Ziche Chen, Inga Saknite, Benoit M Dawant, Eric R Tkaczyk
Purpose: Mpox is a viral illness with symptoms similar to smallpox. A key clinical metric for monitoring disease progression is the number of skin lesions. Manually counting mpox skin lesions is labor-intensive and susceptible to human error.
Approach: We previously developed an mpox lesion counting method based on the UNet segmentation model using 66 photographs from 18 patients. Here we compared four additional methods: the instance segmentation methods Mask R-CNN, YOLOv8, and E2EC, as well as a UNet++ model. We designed a patient-level leave-one-out experiment, assessing performance using the F1 score and lesion count metrics. Finally, we tested whether an ensemble of the networks outperformed any single model.
Results: The Mask R-CNN model achieved an F1 score of 0.75, YOLOv8 a score of 0.75, E2EC a score of 0.70, UNet++ a score of 0.81, and the baseline UNet a score of 0.79. Bland-Altman analysis of lesion count performance showed a limit-of-agreement (LoA) width of 62.2 for Mask R-CNN, 91.3 for YOLOv8, 94.2 for E2EC, and 62.1 for UNet++, with the baseline UNet model achieving 69.1. The ensemble showed an F1 score of 0.78 and an LoA width of 67.4.
Conclusions: Instance segmentation methods and UNet-based semantic segmentation methods performed equally well in lesion counting. Furthermore, the ensemble of the trained models showed no performance increase over the best-performing model UNet, likely because errors are frequently shared across models. Performance is likely limited by the availability of high-quality photographs for this complex problem rather than by the methodologies used.
Journal of Medical Imaging, 12(3): 034506. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12177574/pdf/
Citations: 0
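For readers unfamiliar with the lesion-count metric above, the Bland-Altman limit-of-agreement (LoA) width is the spread between the upper and lower 95% agreement limits of the count differences. A minimal sketch, with made-up counts:

```python
# Bland-Altman limits-of-agreement width for lesion counts; the example
# count arrays are made up, not study data.
import numpy as np

def loa_width(pred_counts, manual_counts):
    """Width between the upper and lower 95% limits of agreement."""
    diffs = np.asarray(pred_counts, float) - np.asarray(manual_counts, float)
    # LoA = mean difference +/- 1.96 * SD of the differences, so the width
    # of the agreement interval is 2 * 1.96 * SD.
    return 2 * 1.96 * diffs.std(ddof=1)

print(loa_width([12, 30, 7, 55], [10, 28, 11, 60]))  # ~14.8
```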
Improving annotation efficiency for fully labeling a breast mass segmentation dataset.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-05-21 DOI: 10.1117/1.JMI.12.3.035501
Vaibhav Sharma, Alina Jade Barnett, Julia Yang, Sangwook Cheon, Giyoung Kim, Fides Regina Schwartz, Avivah Wang, Neal Hall, Lars Grimm, Chaofan Chen, Joseph Y Lo, Cynthia Rudin
Purpose: Breast cancer remains a leading cause of death for women. Screening programs are deployed to detect cancer at early stages. One current barrier identified by breast imaging researchers is a shortage of labeled image datasets. Addressing this problem is crucial to improving early detection models. We present an active learning (AL) framework for segmenting breast masses from 2D digital mammography, and we publish the labeled data. Our method aims to reduce the input needed from expert annotators to reach a fully labeled dataset.
Approach: We create a dataset of 1136 mammographic masses with pixel-wise binary segmentation labels, with the test subset labeled independently by two different teams. With this dataset, we simulate a human annotator within an AL framework to develop and compare AI-assisted labeling methods, using a discriminator model and a simulated oracle to collect acceptable segmentation labels. A UNet model is retrained on these labels, generating new segmentations. We evaluate various oracle heuristics using the percentage of segmentations that the oracle relabels, and we measure the quality of the proposed labels by evaluating the intersection over union on a validation dataset.
Results: Our method reduces expert annotator input by 44%. We present a dataset of 1136 binary segmentation labels approved by board-certified radiologists and make the 143-image validation set public for comparison with other researchers' methods.
Conclusions: We demonstrate that AL can significantly improve the efficiency and time-effectiveness of creating labeled mammogram datasets. Our framework facilitates the development of high-quality datasets while minimizing manual effort in the domain of digital mammography.
Journal of Medical Imaging, 12(3): 035501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094908/pdf/
Citations: 0
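A hedged sketch of one round of the AI-assisted labeling loop described above; the object interfaces (`unet`, `discriminator`, `oracle`) and the acceptance threshold are assumptions, not the authors' implementation:

```python
# One round of AI-assisted labeling: a discriminator scores proposed masks,
# the simulated oracle relabels only rejected cases, and the segmentation
# model is retrained on the accepted + corrected labels.
def active_labeling_round(unet, discriminator, oracle, images, accept_thresh=0.9):
    accepted, relabeled = [], 0
    for img in images:
        mask = unet.predict(img)                   # proposed segmentation
        if discriminator.score(img, mask) >= accept_thresh:
            accepted.append((img, mask))           # oracle never sees this case
        else:
            accepted.append((img, oracle.label(img)))  # expert relabels
            relabeled += 1
    unet.fit(accepted)                             # retrain on the full round
    return relabeled / len(images)                 # fraction needing expert input
```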
Deep learning-based temporal MR image reconstruction for accelerated interventional imaging during in-bore biopsies.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-06-03 DOI: 10.1117/1.JMI.12.3.035001
Constant R Noordman, Steffan J W Borgers, Martijn F Boomsma, Thomas C Kwee, Marloes M G van der Lees, Christiaan G Overduin, Maarten de Rooij, Derya Yakar, Jurgen J Fütterer, Henkjan J Huisman
Purpose: Interventional MR imaging struggles with speed and efficiency. We aim to accelerate transrectal in-bore MR-guided biopsies for prostate cancer through undersampled image reconstruction and instrument localization by image segmentation.
Approach: In this single-center retrospective study, we used 8464 2D multislice MR scans from 1289 patients undergoing a prostate biopsy to train and test a deep learning-based spatiotemporal MR image reconstruction model and an nnU-Net segmentation model. The dataset was synthetically undersampled at various rates (R = 8, 16, 25, 32). An annotated, unseen subset of these data was used to compare our model with a nontemporal model and with readers in a reader study involving seven radiologists from three centers in the Netherlands. We assessed the maximum noninferior undersampling rate using the instrument prediction success rate and instrument tip position (ITP) error.
Results: The maximum noninferior undersampling rate is 16 times for the temporal model (ITP error: 2.28 mm, 95% CI: 1.68 to 3.31; mean difference from reference standard: 0.63 mm, P = .09), whereas the nontemporal model could not produce noninferior image reconstructions comparable to our reference standard. Furthermore, the nontemporal model (ITP error: 6.27 mm, 95% CI: 3.90 to 9.07) and the readers (ITP error: 6.87 mm, 95% CI: 6.38 to 7.40) had low instrument prediction success rates (46% and 60%, respectively) compared with the temporal model's 95%.
Conclusion: Deep learning-based spatiotemporal MR image reconstruction can improve time-critical intervention tasks such as instrument tracking. We found 16-fold undersampling to be the maximum noninferior acceleration at which image quality is preserved, ITP error is minimized, and the instrument prediction success rate is maximized.
Journal of Medical Imaging, 12(3): 035001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12131189/pdf/
Citations: 0
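Retrospective (synthetic) undersampling of this kind can be sketched as masking k-space phase-encode lines at rate R while keeping the low-frequency center; the mask design below is an assumption, since the abstract does not specify the sampling pattern:

```python
# Synthetic undersampling at rate R: keep ~1/R of the phase-encode lines
# plus a fully sampled low-frequency center, then zero-fill reconstruct.
# R=16 mirrors the paper's maximum noninferior rate.
import numpy as np

def undersample(image, R=16, center_frac=0.04, rng=np.random.default_rng(0)):
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny = kspace.shape[0]
    mask = rng.random(ny) < (1.0 / R)            # random phase-encode lines
    c = int(ny * center_frac / 2)
    mask[ny // 2 - c : ny // 2 + c] = True       # keep the k-space center
    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask[:, None]))
    return np.abs(zero_filled)                   # aliased input for the network
```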
DECE-Net: a dual-path encoder network with contour enhancement for pneumonia lesion segmentation.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-05-23 DOI: 10.1117/1.JMI.12.3.034503
Tianyang Wang, Xiumei Li, Ruyu Liu, Meixi Wang, Junmei Sun
Purpose: Early-stage pneumonia is not easily detected, so many patients miss the optimal treatment window. Segmenting lesion areas from CT images presents several challenges, including low intensity contrast between lesion and normal areas and variations in the shape and size of lesion areas. To overcome these challenges, we propose a segmentation network, DECE-Net, to segment pneumonia lesions from CT images automatically.
Approach: DECE-Net adds an extra encoder path to the U-Net: one encoder path extracts features of the original CT image with an attention multi-scale feature fusion module, and the other extracts contour features from the CT contour image with a contour feature extraction module, compensating for and enhancing the boundary information lost during downsampling. The network further fuses the low-level features from both encoder paths through a feature fusion attention connection module and connects them to the upsampled high-level features, replacing the skip connections of the U-Net. Finally, multi-point deep supervision is applied to the segmentation results at each scale to improve segmentation accuracy.
Results: We evaluated DECE-Net on four public COVID-19 segmentation datasets, obtaining mIoU results of 80.76%, 84.59%, 84.41%, and 78.55%, respectively.
Conclusions: The experimental results indicate that the proposed DECE-Net achieves state-of-the-art performance, especially in the precise segmentation of small lesion areas.
Journal of Medical Imaging, 12(3): 034503. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12101900/pdf/
Citations: 0
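The dual-path encoder idea can be sketched in a few lines of PyTorch: one encoder sees the CT slice, a second sees its contour image, and their low-level features are fused in place of a plain skip connection. This is a toy sketch with placeholder module sizes, not the published DECE-Net:

```python
# Toy dual-path encoder: image path + contour path, fused by a 1x1 conv.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class DualPathEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_enc = conv_block(1, 32)    # original CT slice path
        self.contour_enc = conv_block(1, 32)  # edge/contour image path
        self.fuse = nn.Conv2d(64, 32, 1)      # fusion in place of a plain skip

    def forward(self, ct, contour):
        f = torch.cat([self.image_enc(ct), self.contour_enc(contour)], dim=1)
        return self.fuse(f)                   # fused low-level features
```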
Convolutional variational auto-encoder and vision transformer hybrid approach for enhanced early Alzheimer's detection.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-05-21 DOI: 10.1117/1.JMI.12.3.034501
Harshani Fonseka, Soheil Varastehpour, Masoud Shakiba, Ehsan Golkar, David Tien
Purpose: Alzheimer's disease (AD) is becoming more prevalent among the elderly, with projections indicating that it will affect a significantly larger population in the future. Despite substantial research efforts and investments focused on exploring the underlying biological factors, a definitive cure has yet to be discovered. The currently available treatments are only effective in slowing disease progression if the disease is identified in its early stages. Therefore, early diagnosis has become critical in treating AD.
Approach: Recently, the use of deep learning techniques has demonstrated remarkable improvement in the precision and speed of automatic AD diagnosis through medical image analysis. We propose a hybrid model that integrates a convolutional variational auto-encoder (CVAE) with a vision transformer (ViT). During the encoding phase, the CVAE captures key features from the MRI scans, whereas the decoding phase reduces irrelevant details in the MRIs. These refined inputs enhance the ViT's ability to analyze complex patterns through its multihead attention mechanism.
Results: The model was trained and evaluated using 14,000 structural MRI samples from the ADNI and SCAN databases. Compared with three benchmark methods and previous studies of Alzheimer's classification techniques, our approach achieved a significant improvement, with a test accuracy of 93.3%.
Conclusions: Through this research, we identified the potential of the CVAE-ViT hybrid approach for detecting minor structural abnormalities related to AD. Integrating unsupervised feature extraction via a CVAE can significantly enhance transformer-based models in distinguishing between stages of cognitive impairment, thereby identifying early indicators of AD.
Journal of Medical Imaging, 12(3): 034501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094909/pdf/
Citations: 0
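The hybrid pipeline reduces to: encode and decode each MRI slice with a CVAE to strip irrelevant detail, then classify the reconstruction with a ViT. A toy PyTorch sketch with placeholder dimensions (224x224 inputs assumed, not the authors' configuration):

```python
# Minimal CVAE whose reconstruction feeds a downstream ViT classifier.
import torch
import torch.nn as nn

class TinyCVAE(nn.Module):
    def __init__(self, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.mu = nn.LazyLinear(latent)
        self.logvar = nn.LazyLinear(latent)
        self.dec = nn.Sequential(nn.Linear(latent, 16 * 112 * 112), nn.ReLU(),
                                 nn.Unflatten(1, (16, 112, 112)),
                                 nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):                 # x: (B, 1, 224, 224)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

# refined = cvae(x)[0]; logits = vit(refined)  # ViT consumes the refined input
```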
Highly efficient homomorphic encryption-based federated learning for diabetic retinopathy classification.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-06-02 DOI: 10.1117/1.JMI.12.3.034504
Christopher Nielsen, Matthias Wilms, Nils D Forkert
Purpose: Diabetic retinopathy (DR) is the leading cause of blindness among working-age adults globally. Although machine learning (ML) has shown promise for DR diagnosis, ensuring model generalizability requires training on data from diverse populations. Federated learning (FL) offers a potential solution by enabling model training on decentralized datasets. However, privacy concerns persist in FL due to potential privacy breaches, such as gradient inversion attacks, which can be used to reconstruct sensitive training data and may discourage participation from patients.
Approach: We developed and tested a computationally efficient FL framework that integrates homomorphic encryption (HE) to safeguard patient privacy, using 6457 retinal fundus images from the APTOS-2019 and ODIR-5K datasets. First, features are extracted from distributed fundus images using RETFound, a large pretrained foundation model for retinal analysis. These encrypted features are then used to train a lightweight multiclass logistic regression head (MLRH) model for DR grade classification using FL.
Results: Experimental results show that the MLRH model trained using FL achieves performance similar to a fully fine-tuned RETFound model on centralized data, with area under the receiver operating characteristic curve scores of 0.93 ± 0.01 on APTOS-2019 and 0.78 ± 0.02 on ODIR-5K. Efficiency improvements include a 95.9-fold reduction in computation time and a 63.0-fold reduction in data transfer needs compared with fine-tuning the full RETFound model with FL. In addition, results showed that integrating HE effectively protects patient data against gradient inversion attacks.
Conclusions: We advance privacy-preserving, ML-based DR screening technology, supporting the goal of equitable vision care worldwide.
Journal of Medical Imaging, 12(3): 034504. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12128631/pdf/
Citations: 0
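One federated round for the logistic-regression head can be sketched as follows; the plain gradient averaging stands in for the paper's homomorphically encrypted aggregation, and the client interface is hypothetical:

```python
# Federated round for a multiclass logistic-regression head on fixed features.
# In the paper the client updates are homomorphically encrypted so the server
# only sees ciphertexts; here a plain mean stands in for that aggregation.
import numpy as np

def local_grad(w, X, y, n_classes):
    logits = X @ w                                   # (n, classes)
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                     # softmax probabilities
    onehot = np.eye(n_classes)[y]
    return X.T @ (p - onehot) / len(X)               # softmax cross-entropy grad

def federated_round(w, clients, lr=0.1, n_classes=5):  # 5 DR grades
    grads = [local_grad(w, X, y, n_classes) for X, y in clients]
    # --- with HE: each grad is encrypted client-side and the server sums
    # ciphertexts; only clients can decrypt the averaged result ---
    return w - lr * np.mean(grads, axis=0)
```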
Classifying chronic obstructive pulmonary disease status using computed tomography imaging and convolutional neural networks: comparison of model input image types and training data severity.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-05-22 DOI: 10.1117/1.JMI.12.3.034502
Sara Rezvanjou, Amir Moslemi, Samuel Peterson, Wan-Cheng Tan, James C Hogg, Jean Bourbeau, Joseph M Reinhardt, Miranda Kirby
Purpose: Convolutional neural network (CNN)-based models using computed tomography images can classify chronic obstructive pulmonary disease (COPD) with high performance, but various input image types have been investigated, and it is unclear which image types are optimal. We propose a 2D airway-optimized topological multiplanar reformat (tMPR) input image and compare its performance with established 2D/3D input image types for COPD classification. As a secondary aim, we examined the impact of training on a dataset with predominantly mild COPD cases and testing on a more severe dataset, to assess whether this improves generalizability.
Approach: CanCOLD study participants were used for training/internal testing; SPIROMICS participants were used for external testing. Several 2D/3D input image types were adapted from the literature. In the proposed models, 2D airway-optimized tMPR images (to convey shape and interior/contextual information) and 3D output fusion of axial/sagittal/coronal images were investigated. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance, and Brier scores were used to evaluate model calibration. To further examine how training dataset severity impacts generalization, we compared model performance when trained on the milder CanCOLD dataset versus the more severe SPIROMICS dataset, and vice versa.
Results: A total of n = 742 CanCOLD participants were used for training/validation and n = 309 for testing; n = 448 SPIROMICS participants were used for external testing. For the CanCOLD and SPIROMICS test sets, the proposed 2D tMPR on its own (CanCOLD: AUC = 0.79; SPIROMICS: AUC = 0.94) and combined with the 3D axial/coronal/sagittal lung view (CanCOLD: AUC = 0.82; SPIROMICS: AUC = 0.93) had the highest performance. The combined 2D tMPR and 3D axial/coronal/sagittal lung view had the lowest Brier score (CanCOLD: 0.16; SPIROMICS: 0.24). Conversely, using SPIROMICS for training/testing and CanCOLD for external testing resulted in lower performance on CanCOLD for 2D tMPR on its own (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.74) and combined with the 3D axial/coronal/sagittal lung view (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.75).
Conclusions: The CNN-based model with the combined 2D tMPR images and 3D lung view as input image types had the highest performance for COPD classification, highlighting the importance …
Journal of Medical Imaging, 12(3): 034502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12097752/pdf/
Citations: 0
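The output-fusion evaluation can be sketched as averaging per-view probabilities and scoring with AUC and the Brier score; the arrays below are placeholders, not study data:

```python
# Late fusion of per-view COPD probabilities (2D tMPR + axial/coronal/sagittal)
# scored with AUC and Brier score; labels and probabilities are made up.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

def fused_scores(prob_tmpr, prob_axial, prob_coronal, prob_sagittal):
    return np.mean([prob_tmpr, prob_axial, prob_coronal, prob_sagittal], axis=0)

y = np.array([0, 1, 1, 0, 1])
p = fused_scores(np.r_[.2, .8, .6, .3, .7], np.r_[.1, .9, .5, .4, .8],
                 np.r_[.3, .7, .6, .2, .9], np.r_[.2, .6, .7, .3, .6])
print(roc_auc_score(y, p), brier_score_loss(y, p))
```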
Image database with slides prepared by the Ziehl-Neelsen method for training automated detection and counting systems for tuberculosis bacilli.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-05-01 Epub Date: 2025-06-13 DOI: 10.1117/1.JMI.12.3.034505
João Victor Boechat Gomide, Thales Francisco Mota Carvalho, Élida Aparecida Leal, Lida Jouca de Assis Figueiredo, Nauhara Vieira de Castro Barroso, Júnia Pessoa Tarabal, Cláudio José Augusto
Purpose: We aim to provide a robust dataset for training automated systems to detect tuberculosis bacilli on Ziehl-Neelsen stained slides. By making this dataset available, we address a critical gap in the availability of public datasets for developing and testing artificial intelligence techniques for tuberculosis diagnosis. Our rationale is grounded in the urgent need for diagnostic tools that can enhance tuberculosis diagnosis quickly and efficiently, especially in resource-limited settings.
Approach: The Ziehl-Neelsen method was used to prepare 362 slides, which were read manually. Following the World Health Organization's guidelines for performing bacilloscopy for tuberculosis diagnosis, experts annotated each slide as negative or positive. In addition, selected images underwent a detailed annotation process aimed at pinpointing the location of each bacillus and cluster within each image.
Results: The database consists of three directories. The first contains all the images, separated by slide, and indicates for each slide whether it is negative or, if positive, the number of crosses. The second directory contains the 502 images selected for training automated systems, with each bacillus's position annotated, along with the Python code used. All the image fragments (positive and negative patches) used in the training, validation, and testing stages of the models are available in the third directory.
Conclusions: The development of this annotated image database represents a significant advancement in tuberculosis diagnosis. By providing a high-quality and accessible resource to the scientific community, we enhance existing diagnostic tools and facilitate the development of automated technologies.
Journal of Medical Imaging, 12(3): 034505. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12163626/pdf/
Citations: 0
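A hypothetical loader for the three-directory layout described above might look like the following; the folder and annotation-field names are guesses, since the paper's exact file layout is not given in the abstract:

```python
# Hypothetical loader for the annotated training directory (directory 2);
# the actual folder and annotation-file names in the published database
# may differ from the placeholders used here.
from pathlib import Path
import json

def load_bacillus_annotations(root):
    """Yield (image_path, bacillus_positions) for the annotated training set."""
    ann_dir = Path(root) / "annotated_images"     # placeholder directory name
    for ann_file in sorted(ann_dir.glob("*.json")):
        record = json.loads(ann_file.read_text())
        yield ann_dir / record["image"], record["bacilli"]  # [(x, y), ...]
```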
SAM-MedUS: a foundational model for universal ultrasound image segmentation.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-02-27 DOI: 10.1117/1.JMI.12.2.027001
Feng Tian, Jintao Zhai, Jinru Gong, Weirui Lei, Shuai Chang, Fangfang Ju, Shengyou Qian, Xiao Zou
Purpose: Segmentation of ultrasound images is crucial for medical diagnosis, monitoring, and research, and although existing methods perform well, they are limited to specific organs, tumors, and imaging devices. Applications of the Segment Anything Model (SAM), such as SAM-Med2D, use large medical datasets that contain only a small fraction of ultrasound medical images.
Approach: We propose a SAM-MedUS model for generic ultrasound image segmentation that utilizes the latest publicly available ultrasound image datasets to create a diverse dataset containing eight site categories for training and testing. We integrate ConvNext V2 and CM blocks in the encoder for better global context extraction. In addition, a boundary loss function is used to improve the segmentation of fuzzy boundaries and low-contrast ultrasound images.
Results: Experimental results show that SAM-MedUS outperforms recent methods on multiple ultrasound datasets. For easier datasets such as adult kidney, it achieves 87.93% IoU and 93.58% Dice, whereas for more complex ones such as infant vein, IoU and Dice reach 62.31% and 78.93%, respectively.
Conclusions: We collected and collated an ultrasound dataset of multiple site types to achieve uniform segmentation of ultrasound images. In addition, the auxiliary ConvNext V2 and CM-block branches enhance the model's ability to extract global information, and the boundary loss allows the model to exhibit robust performance and excellent generalization ability.
Journal of Medical Imaging, 12(2): 027001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11865838/pdf/
Citations: 0
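The boundary loss mentioned above is commonly implemented (after Kervadec et al.) by weighting the predicted foreground probability with a signed distance map of the ground-truth boundary; whether SAM-MedUS uses exactly this form is an assumption:

```python
# Boundary loss sketch: penalize foreground probability placed far from the
# ground-truth boundary using a signed distance map (assumed form, see above).
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(gt_mask):
    gt = gt_mask.astype(bool)
    # positive outside the object, negative inside, zero on the boundary
    return edt(~gt) - edt(gt)

def boundary_loss(prob_fg, gt_mask):
    """prob_fg: (H, W) torch tensor of foreground probabilities."""
    dist = torch.from_numpy(signed_distance(gt_mask)).float()
    return (prob_fg * dist).mean()  # pushes probability mass off far-out pixels
```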