{"title":"Investigating the use of signal detection information in supervised learning-based image denoising with consideration of task-shift.","authors":"Kaiyan Li, Hua Li, Mark A Anastasio","doi":"10.1117/1.JMI.11.5.055501","DOIUrl":"10.1117/1.JMI.11.5.055501","url":null,"abstract":"<p><strong>Purpose: </strong>Recently, learning-based denoising methods that incorporate task-relevant information into the training procedure have been developed to enhance the utility of the denoised images. However, this line of research is relatively new and underdeveloped, and some fundamental issues remain unexplored. Our purpose is to yield insights into general issues related to these task-informed methods. This includes understanding the impact of denoising on objective measures of image quality (IQ) when the specified task at inference time is different from that employed for model training, a phenomenon we refer to as \"task-shift.\"</p><p><strong>Approach: </strong>A virtual imaging test bed comprising a stylized computational model of a chest X-ray computed tomography imaging system was employed to enable a controlled and tractable study design. A canonical, fully supervised, convolutional neural network-based denoising method was purposely adopted to understand the underlying issues that may be relevant to a variety of applications and more advanced denoising or image reconstruction methods. Signal detection and signal detection-localization tasks under signal-known-statistically with background-known-statistically conditions were considered, and several distinct types of numerical observers were employed to compute estimates of the task performance. Studies were designed to reveal how a task-informed transfer-learning approach can influence the tradeoff between conventional and task-based measures of image quality within the context of the considered tasks. In addition, the impact of task-shift on these image quality measures was assessed.</p><p><strong>Results: </strong>The results indicated that certain tradeoffs can be achieved such that the resulting AUC value was significantly improved and the degradation of physical IQ measures was statistically insignificant. It was also observed that introducing task-shift degrades the task performance as expected. The degradation was significant when a relatively simple task was considered for network training and observer performance on a more complex one was assessed at inference time.</p><p><strong>Conclusions: </strong>The presented results indicate that the task-informed training method can improve the observer performance while providing control over the tradeoff between traditional and task-based measures of image quality. The behavior of a task-informed model fine-tuning procedure was demonstrated, and the impact of task-shift on task-based image quality measures was investigated.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"055501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11376226/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing mammography interpretation education: leveraging deep learning for cohort-specific error detection to enhance radiologist training.","authors":"Xuetong Tao, Warren M Reed, Tong Li, Patrick C Brennan, Ziba Gandomkar","doi":"10.1117/1.JMI.11.5.055502","DOIUrl":"10.1117/1.JMI.11.5.055502","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate interpretation of mammograms presents challenges. Tailoring mammography training to reader profiles holds the promise of an effective strategy to reduce these errors. This proof-of-concept study investigated the feasibility of employing convolutional neural networks (CNNs) with transfer learning to categorize regions associated with false-positive (FP) errors within screening mammograms into categories of \"low\" or \"high\" likelihood of being a false-positive detection for radiologists sharing similar geographic characteristics.</p><p><strong>Approach: </strong>Mammography test sets assessed by two geographically distant cohorts of radiologists (cohorts A and B) were collected. FP patches within these mammograms were segmented and categorized as \"difficult\" or \"easy\" based on the number of readers committing FP errors. Patches outside 1.5 times the interquartile range above the upper quartile were labeled as difficult, whereas the remaining patches were labeled as easy. Using transfer learning, a patch-wise CNN model for binary patch classification was developed utilizing ResNet as the feature extractor, with modified fully connected layers for the target task. Model performance was assessed using 10-fold cross-validation.</p><p><strong>Results: </strong>Compared with other architectures, the transferred ResNet-50 achieved the highest performance, obtaining receiver operating characteristics area under the curve values of 0.933 ( <math><mrow><mo>±</mo> <mn>0.012</mn></mrow> </math> ) and 0.975 ( <math><mrow><mo>±</mo> <mn>0.011</mn></mrow> </math> ) on the validation sets for cohorts A and B, respectively.</p><p><strong>Conclusions: </strong>The findings highlight the feasibility of employing CNN-based transfer learning to predict the difficulty levels of local FP patches in screening mammograms for specific radiologist cohort with similar geographic characteristics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"055502"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11447382/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting peritumoral glioblastoma infiltration and subsequent recurrence using deep-learning-based analysis of multi-parametric magnetic resonance imaging.","authors":"Sunwoo Kwak, Hamed Akbari, Jose A Garcia, Suyash Mohan, Yehuda Dicker, Chiharu Sako, Yuji Matsumoto, MacLean P Nasrallah, Mahmoud Shalaby, Donald M O'Rourke, Russel T Shinohara, Fang Liu, Chaitra Badve, Jill S Barnholtz-Sloan, Andrew E Sloan, Matthew Lee, Rajan Jain, Santiago Cepeda, Arnab Chakravarti, Joshua D Palmer, Adam P Dicker, Gaurav Shukla, Adam E Flanders, Wenyin Shi, Graeme F Woodworth, Christos Davatzikos","doi":"10.1117/1.JMI.11.5.054001","DOIUrl":"10.1117/1.JMI.11.5.054001","url":null,"abstract":"<p><strong>Purpose: </strong>Glioblastoma (GBM) is the most common and aggressive primary adult brain tumor. The standard treatment approach is surgical resection to target the enhancing tumor mass, followed by adjuvant chemoradiotherapy. However, malignant cells often extend beyond the enhancing tumor boundaries and infiltrate the peritumoral edema. Traditional supervised machine learning techniques hold potential in predicting tumor infiltration extent but are hindered by the extensive resources needed to generate expertly delineated regions of interest (ROIs) for training models on tissue most and least likely to be infiltrated.</p><p><strong>Approach: </strong>We developed a method combining expert knowledge and training-based data augmentation to automatically generate numerous training examples, enhancing the accuracy of our model for predicting tumor infiltration through predictive maps. Such maps can be used for targeted supra-total surgical resection and other therapies that might benefit from intensive yet well-targeted treatment of infiltrated tissue. We apply our method to preoperative multi-parametric magnetic resonance imaging (mpMRI) scans from a subset of 229 patients of a multi-institutional consortium (Radiomics Signatures for Precision Diagnostics) and test the model on subsequent scans with pathology-proven recurrence.</p><p><strong>Results: </strong>Leave-one-site-out cross-validation was used to train and evaluate the tumor infiltration prediction model using initial pre-surgical scans, comparing the generated prediction maps with follow-up mpMRI scans confirming recurrence through post-resection tissue analysis. Performance was measured by voxel-wised odds ratios (ORs) across six institutions: University of Pennsylvania (OR: 9.97), Ohio State University (OR: 14.03), Case Western Reserve University (OR: 8.13), New York University (OR: 16.43), Thomas Jefferson University (OR: 8.22), and Rio Hortega (OR: 19.48).</p><p><strong>Conclusions: </strong>The proposed model demonstrates that mpMRI analysis using deep learning can predict infiltration in the peri-tumoral brain region for GBM patients without needing to train a model using expert ROI drawings. 
Results for each institution demonstrate the model's generalizability and reproducibility.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054001"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11363410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
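The voxel-wise odds ratio used to report performance can be illustrated with a small numpy sketch; the 0.5 threshold and the Haldane-Anscombe zero-cell correction are our assumptions, not details from the paper.

```python
import numpy as np

def voxelwise_odds_ratio(pred_map, recurrence_mask, thr=0.5, eps=0.5):
    """Odds ratio between a thresholded infiltration map and a recurrence mask."""
    pred = np.asarray(pred_map) >= thr
    truth = np.asarray(recurrence_mask).astype(bool)
    tp = np.sum(pred & truth)    # predicted infiltrated, recurred
    fp = np.sum(pred & ~truth)   # predicted infiltrated, did not recur
    fn = np.sum(~pred & truth)   # predicted spared, recurred
    tn = np.sum(~pred & ~truth)  # predicted spared, did not recur
    # eps is a Haldane-Anscombe correction guarding against zero cells.
    return ((tp + eps) * (tn + eps)) / ((fp + eps) * (fn + eps))
```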
{"title":"Deep learning architecture for scatter estimation in cone-beam computed tomography head imaging with varying field-of-measurement settings.","authors":"Harshit Agrawal, Ari Hietanen, Simo Särkkä","doi":"10.1117/1.JMI.11.5.053501","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.053501","url":null,"abstract":"<p><strong>Purpose: </strong>X-ray scatter causes considerable degradation in the cone-beam computed tomography (CBCT) image quality. To estimate the scatter, deep learning-based methods have been demonstrated to be effective. Modern CBCT systems can scan a wide range of field-of-measurement (FOM) sizes. Variations in the size of FOM can cause a major shift in the scatter-to-primary ratio in CBCT. However, the scatter estimation performance of deep learning networks has not been extensively evaluated under varying FOMs. Therefore, we train the state-of-the-art scatter estimation neural networks for varying FOMs and develop a method to utilize FOM size information to improve performance.</p><p><strong>Approach: </strong>We used FOM size information as additional features by converting it into two channels and then concatenating it to the encoder of the networks. We compared our approach for a U-Net, Spline-Net, and DSE-Net, by training them with and without the FOM information. We utilized a Monte Carlo-simulated dataset to train the networks on 18 FOM sizes and test on 30 unseen FOM sizes. In addition, we evaluated the models on the water phantoms and real clinical CBCT scans.</p><p><strong>Results: </strong>The simulation study demonstrates that our method reduced average mean-absolute-percentage-error for U-Net by 38%, Spline-Net by 40%, and DSE-net by 33% for the scatter estimation in the 2D projection domain. Furthermore, the root-mean-square error on the 3D reconstructed volumes was improved for U-Net by 43%, Spline-Net by 30%, and DSE-Net by 23%. Furthermore, our method improved contrast and image quality on real datasets such as water phantom and clinical data.</p><p><strong>Conclusion: </strong>Providing additional information about FOM size improves the robustness of the neural networks for scatter estimation. Our approach is not limited to utilizing only FOM size information; more variables such as tube voltage, scanning geometry, and patient size can be added to improve the robustness of a single network.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"053501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11477364/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demystifying the effect of receptive field size in U-Net models for medical image segmentation.","authors":"Vincent Loos, Rohit Pardasani, Navchetan Awasthi","doi":"10.1117/1.JMI.11.5.054004","DOIUrl":"10.1117/1.JMI.11.5.054004","url":null,"abstract":"<p><strong>Purpose: </strong>Medical image segmentation is a critical task in healthcare applications, and U-Nets have demonstrated promising results in this domain. We delve into the understudied aspect of receptive field (RF) size and its impact on the U-Net and attention U-Net architectures used for medical imaging segmentation.</p><p><strong>Approach: </strong>We explore several critical elements including the relationship among RF size, characteristics of the region of interest, and model performance, as well as the balance between RF size and computational costs for U-Net and attention U-Net methods for different datasets. We also propose a mathematical notation for representing the theoretical receptive field (TRF) of a given layer in a network and propose two new metrics, namely, the effective receptive field (ERF) rate and the object rate, to quantify the fraction of significantly contributing pixels within the ERF against the TRF area and assessing the relative size of the segmentation object compared with the TRF size, respectively.</p><p><strong>Results: </strong>The results demonstrate that there exists an optimal TRF size that successfully strikes a balance between capturing a wider global context and maintaining computational efficiency, thereby optimizing model performance. Interestingly, a distinct correlation is observed between the data complexity and the required TRF size; segmentation based solely on contrast achieved peak performance even with smaller TRF sizes, whereas more complex segmentation tasks necessitated larger TRFs. Attention U-Net models consistently outperformed their U-Net counterparts, highlighting the value of attention mechanisms regardless of TRF size.</p><p><strong>Conclusions: </strong>These insights present an invaluable resource for developing more efficient U-Net-based architectures for medical imaging and pave the way for future exploration of other segmentation architectures. A tool is also developed, which calculates the TRF for a U-Net (and attention U-Net) model and also suggests an appropriate TRF size for a given model and dataset.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054004"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11520766/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HarmonyTM: multi-center data harmonization applied to distributed learning for Parkinson's disease classification.","authors":"Raissa Souza, Emma A M Stanley, Vedant Gulve, Jasmine Moore, Chris Kang, Richard Camicioli, Oury Monchi, Zahinoor Ismail, Matthias Wilms, Nils D Forkert","doi":"10.1117/1.JMI.11.5.054502","DOIUrl":"10.1117/1.JMI.11.5.054502","url":null,"abstract":"<p><strong>Purpose: </strong>Distributed learning is widely used to comply with data-sharing regulations and access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers utilize different scanners for data acquisition, which could potentially lead models to exploit these differences as shortcuts. Although data harmonization can mitigate this issue, current methods typically rely on large or paired datasets, which can be impractical to obtain in distributed setups.</p><p><strong>Approach: </strong>We introduced HarmonyTM, a data harmonization method tailored for the TM. HarmonyTM effectively mitigates bias in the model's feature representation while retaining crucial disease-related information, all without requiring extensive datasets. Specifically, we employed adversarial training to \"unlearn\" bias from the features used in the model for classifying Parkinson's disease (PD). We evaluated HarmonyTM using multi-center three-dimensional (3D) neuroimaging datasets from 83 centers using 23 different scanners.</p><p><strong>Results: </strong>Our results show that HarmonyTM improved PD classification accuracy from 72% to 76% and reduced (unwanted) scanner classification accuracy from 53% to 30% in the TM setup.</p><p><strong>Conclusion: </strong>HarmonyTM is a method tailored for harmonizing 3D neuroimaging data within the TM approach, aiming to minimize shortcut learning in distributed setups. This prevents the disease classifier from leveraging scanner-specific details to classify patients with or without PD-a key aspect for deploying ML models for clinical applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054502"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11413651/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expanding generalized contrast-to-noise ratio into a clinically relevant measure of lesion detectability by considering size and spatial resolution.","authors":"Siegfried Schlunk, Brett Byram","doi":"10.1117/1.JMI.11.5.057001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.057001","url":null,"abstract":"<p><strong>Purpose: </strong>Early image quality metrics were often designed with clinicians in mind, and ideal metrics would correlate with the subjective opinion of practitioners. Over time, adaptive beamformers and other post-processing methods have become more common, and these newer methods often violate assumptions of earlier image quality metrics, invalidating the meaning of those metrics. The result is that beamformers may \"manipulate\" metrics without producing more clinical information.</p><p><strong>Approach: </strong>In this work, Smith et al.'s signal-to-noise ratio (SNR) metric for lesion detectability is considered, and a more robust version, here called generalized SNR (gSNR), is proposed that uses generalized contrast-to-noise ratio (gCNR) as a core. It is analytically shown that for Rayleigh distributed data, gCNR is a function of Smith et al.'s <math> <mrow><msub><mi>C</mi> <mi>ψ</mi></msub> </mrow> </math> (and therefore can be used as a substitution). More robust methods for estimating the resolution cell size are considered. Simulated lesions are included to verify the equations and demonstrate behavior, and it is shown to apply equally well to <i>in vivo</i> data.</p><p><strong>Results: </strong>gSNR is shown to be equivalent to SNR for delay-and-sum (DAS) beamformed data, as intended. However, it is shown to be more robust against transformations and report lesion detectability more accurately for non-Rayleigh distributed data. In the simulation included, the SNR of DAS was <math><mrow><mn>4.4</mn> <mo>±</mo> <mn>0.8</mn></mrow> </math> , and minimum variance (MV) was <math><mrow><mn>6.4</mn> <mo>±</mo> <mn>1.9</mn></mrow> </math> , but the gSNR of DAS was <math><mrow><mn>4.5</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> , and MV was <math><mrow><mn>3.0</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> , which agrees with the subjective assessment of the image. Likewise, the <math> <mrow><msup><mi>DAS</mi> <mn>2</mn></msup> </mrow> </math> transformation (which is clinically identical to DAS) had an incorrect SNR of <math><mrow><mn>9.4</mn> <mo>±</mo> <mn>1.0</mn></mrow> </math> and a correct gSNR of <math><mrow><mn>4.4</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> . Similar results are shown <i>in vivo</i>.</p><p><strong>Conclusions: </strong>Using gCNR as a component to estimate gSNR creates a robust measure of lesion detectability. Like SNR, gSNR can be compared with the Rose criterion and may better correlate with clinical assessments of image quality for modern beamformers.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"057001"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11498315/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated echocardiography view classification and quality assessment with recognition of unknown views.","authors":"Gino E Jansen, Bob D de Vos, Mitchel A Molenaar, Mark J Schuuring, Berto J Bouma, Ivana Išgum","doi":"10.1117/1.JMI.11.5.054002","DOIUrl":"10.1117/1.JMI.11.5.054002","url":null,"abstract":"<p><strong>Purpose: </strong>Interpreting echocardiographic exams requires substantial manual interaction as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views.</p><p><strong>Approach: </strong>We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos.</p><p><strong>Results: </strong>The proposed method achieved an accuracy of <math><mrow><mn>84.9</mn> <mo>%</mo> <mo>±</mo> <mn>0.67</mn></mrow> </math> for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman's rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62.</p><p><strong>Conclusion: </strong>The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054002"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364256/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI.","authors":"Nagasoujanya V Annasamudram, Azubuike M Okorie, Richard G Spencer, Rita R Kalyani, Qi Yang, Bennett A Landman, Luigi Ferrucci, Sokratis Makrogiannis","doi":"10.1117/1.JMI.11.5.054003","DOIUrl":"10.1117/1.JMI.11.5.054003","url":null,"abstract":"<p><strong>Purpose: </strong>Segmentation is essential for tissue quantification and characterization in studies of aging and age-related and metabolic diseases and the development of imaging biomarkers. We propose a multi-method and multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to each other, rendering their manual delineation a challenging and time-consuming task.</p><p><strong>Approach: </strong>We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh, gracilis, hamstring, quadriceps femoris, and sartorius, using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep learning model-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and accuracy of deep networks, hence enabling accurate assessment of volume and fat content of muscle groups.</p><p><strong>Results: </strong>For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and Hausdorff distance 95th percentile (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance with an average DSC of 0.859 and HD-95 of 8.34 over all muscles.</p><p><strong>Conclusions: </strong>Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves the segmentation accuracy. Additional fusion with deep network decisions applied to the subject space offers complementary information. The proposed approach can produce accurate segmentation of individual muscle groups in 3D thigh MRI scans.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054003"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11369361/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiomics and quantitative multi-parametric MRI for predicting uterine fibroid growth.","authors":"Karen Drukker, Milica Medved, Carla B Harmath, Maryellen L Giger, Obianuju S Madueke-Laveaux","doi":"10.1117/1.JMI.11.5.054501","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.054501","url":null,"abstract":"<p><strong>Significance: </strong>Uterine fibroids (UFs) can pose a serious health risk to women. UFs are benign tumors that vary in clinical presentation from asymptomatic to causing debilitating symptoms. UF management is limited by our inability to predict UF growth rate and future morbidity.</p><p><strong>Aim: </strong>We aim to develop a predictive model to identify UFs with increased growth rates and possible resultant morbidity.</p><p><strong>Approach: </strong>We retrospectively analyzed 44 expertly outlined UFs from 20 patients who underwent two multi-parametric MR imaging exams as part of a prospective study over an average of 16 months. We identified 44 initial features by extracting quantitative magnetic resonance imaging (MRI) features plus morphological and textural radiomics features from DCE, T2, and apparent diffusion coefficient sequences. Principal component analysis reduced dimensionality, with the smallest number of components explaining over 97.5% of the variance selected. Employing a leave-one-fibroid-out scheme, a linear discriminant analysis classifier utilized these components to output a growth risk score.</p><p><strong>Results: </strong>The classifier incorporated the first three principal components and achieved an area under the receiver operating characteristic curve of 0.80 (95% confidence interval [0.69; 0.91]), effectively distinguishing UFs growing faster than the median growth rate of <math><mrow><mn>0.93</mn> <mtext> </mtext> <msup><mrow><mi>cm</mi></mrow> <mrow><mn>3</mn></mrow> </msup> <mo>/</mo> <mi>year</mi> <mo>/</mo> <mi>fibroid</mi></mrow> </math> from slower-growing ones within the cohort. Time-to-event analysis, dividing the cohort based on the median growth risk score, yielded a hazard ratio of 0.33 [0.15; 0.76], demonstrating potential clinical utility.</p><p><strong>Conclusion: </strong>We developed a promising predictive model utilizing quantitative MRI features and principal component analysis to identify UFs with increased growth rates. Furthermore, the model's discrimination ability supports its potential clinical utility in developing tailored patient and fibroid-specific management once validated on a larger cohort.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}