Journal of Medical Imaging: Latest Articles

HarmonyTM: multi-center data harmonization applied to distributed learning for Parkinson's disease classification.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-09-01 | Epub Date: 2024-09-20 | DOI: 10.1117/1.JMI.11.5.054502
Raissa Souza, Emma A M Stanley, Vedant Gulve, Jasmine Moore, Chris Kang, Richard Camicioli, Oury Monchi, Zahinoor Ismail, Matthias Wilms, Nils D Forkert

Purpose: Distributed learning is widely used to comply with data-sharing regulations and access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers utilize different scanners for data acquisition, which could potentially lead models to exploit these differences as shortcuts. Although data harmonization can mitigate this issue, current methods typically rely on large or paired datasets, which can be impractical to obtain in distributed setups.

Approach: We introduced HarmonyTM, a data harmonization method tailored for the TM. HarmonyTM effectively mitigates bias in the model's feature representation while retaining crucial disease-related information, all without requiring extensive datasets. Specifically, we employed adversarial training to "unlearn" bias from the features used in the model for classifying Parkinson's disease (PD). We evaluated HarmonyTM using multi-center three-dimensional (3D) neuroimaging datasets from 83 centers using 23 different scanners.

Results: Our results show that HarmonyTM improved PD classification accuracy from 72% to 76% and reduced (unwanted) scanner classification accuracy from 53% to 30% in the TM setup.

Conclusion: HarmonyTM is a method tailored for harmonizing 3D neuroimaging data within the TM approach, aiming to minimize shortcut learning in distributed setups. This prevents the disease classifier from leveraging scanner-specific details to classify patients with or without PD, a key aspect for deploying ML models for clinical applications.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11413651/pdf/
Citations: 0

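The adversarial "unlearning" step described in the Approach is often implemented with a gradient-reversal layer, which trains an auxiliary scanner classifier while pushing the shared encoder to make scanner identity unpredictable. The sketch below is a minimal PyTorch example under that assumption; the toy 3D encoder, layer sizes, `lam` weighting, and loss combination are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class HarmonizedClassifier(nn.Module):
    """Shared encoder with a disease head and an adversarial scanner head."""
    def __init__(self, n_scanners: int, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for a 3D CNN encoder
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim), nn.ReLU())
        self.disease_head = nn.Linear(feat_dim, 2)           # PD vs. non-PD
        self.scanner_head = nn.Linear(feat_dim, n_scanners)  # adversary

    def forward(self, x, lam: float = 1.0):
        z = self.encoder(x)
        return self.disease_head(z), self.scanner_head(GradReverse.apply(z, lam))

# One training step: minimize the disease loss while the reversed gradient
# discourages the encoder from encoding scanner identity.
model = HarmonizedClassifier(n_scanners=23)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
x = torch.randn(2, 1, 32, 32, 32)                 # toy 3D volumes
y_pd, y_scan = torch.tensor([0, 1]), torch.tensor([3, 17])
pd_logits, scan_logits = model(x, lam=1.0)
loss = ce(pd_logits, y_pd) + ce(scan_logits, y_scan)
opt.zero_grad(); loss.backward(); opt.step()
```

In a traveling-model setup, the same step would simply be repeated as the model visits each center's local data in turn.
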
Automated echocardiography view classification and quality assessment with recognition of unknown views.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-09-01 | Epub Date: 2024-08-30 | DOI: 10.1117/1.JMI.11.5.054002
Gino E Jansen, Bob D de Vos, Mitchel A Molenaar, Mark J Schuuring, Berto J Bouma, Ivana Išgum

Purpose: Interpreting echocardiographic exams requires substantial manual interaction, as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views.

Approach: We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos.

Results: The proposed method achieved an accuracy of 84.9% ± 0.67 for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman's rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62.

Conclusion: The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364256/pdf/
Citations: 0

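A minimal sketch of the two components described above, assuming the unknown-view decision is made by thresholding the maximum logit and that the quality regressor is a plain least-squares fit on the embeddings; the threshold value and the random stand-in logits and embeddings are illustrative only.

```python
import numpy as np

def classify_with_unknown(logits: np.ndarray, threshold: float) -> np.ndarray:
    """Assign the argmax view label, or -1 ("unknown") when the peak
    logit activation falls below a threshold tuned on validation data."""
    labels = logits.argmax(axis=1)
    labels[logits.max(axis=1) < threshold] = -1
    return labels

def fit_quality_regressor(embeddings: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Ordinary least squares mapping network embeddings to expert quality scores."""
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])  # add bias term
    w, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return w

def predict_quality(embeddings: np.ndarray, w: np.ndarray) -> np.ndarray:
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])
    return X @ w

# toy usage with random stand-ins for network outputs
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 8))   # 5 videos, 8 routine view classes
emb = rng.normal(size=(5, 64))     # feature embeddings from the same network
print(classify_with_unknown(logits, threshold=2.0))
w = fit_quality_regressor(emb, rng.uniform(0, 5, size=5))
print(predict_quality(emb, w))
```
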
Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-09-01 | Epub Date: 2024-09-03 | DOI: 10.1117/1.JMI.11.5.054003
Nagasoujanya V Annasamudram, Azubuike M Okorie, Richard G Spencer, Rita R Kalyani, Qi Yang, Bennett A Landman, Luigi Ferrucci, Sokratis Makrogiannis

Purpose: Segmentation is essential for tissue quantification and characterization in studies of aging and age-related and metabolic diseases and for the development of imaging biomarkers. We propose a multi-method and multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to each other, rendering their manual delineation a challenging and time-consuming task.

Approach: We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh (gracilis, hamstring, quadriceps femoris, and sartorius) using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep learning model-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and the accuracy of deep networks, hence enabling accurate assessment of the volume and fat content of muscle groups.

Results: For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance, with an average DSC of 0.859 and HD-95 of 8.34 over all muscles.

Conclusions: Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves segmentation accuracy. Additional fusion with deep network decisions applied to the subject space offers complementary information. The proposed approach can produce accurate segmentation of individual muscle groups in 3D thigh MRI scans.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11369361/pdf/
Citations: 0

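One simple way to combine multi-atlas candidates with a deep network is per-voxel label fusion of one-hot atlas votes with the network's softmax, sketched below. The equal-weight scheme, the `alpha` parameter, and the toy volumes are assumptions for illustration; the paper's actual fusion rule may differ.

```python
import numpy as np

def fuse_mas_and_network(atlas_labels, net_probs, alpha=0.5):
    """Fuse candidate segmentations per voxel.

    atlas_labels: list of integer label volumes (one per atlas/registration model)
    net_probs:    deep-network softmax volume of shape (n_classes, *vol_shape)
    alpha:        weight given to the network term (assumed equal weighting here)
    """
    n_classes = net_probs.shape[0]
    votes = np.zeros_like(net_probs, dtype=float)
    for lab in atlas_labels:                   # accumulate one-hot atlas votes
        for c in range(n_classes):
            votes[c] += (lab == c)
    votes /= max(len(atlas_labels), 1)         # normalize to a probability map
    fused = alpha * net_probs + (1.0 - alpha) * votes
    return fused.argmax(axis=0)                # final label per voxel

# toy example: 3 atlases and a network prediction over a 4x4x4 volume, 5 classes
rng = np.random.default_rng(1)
atlases = [rng.integers(0, 5, size=(4, 4, 4)) for _ in range(3)]
probs = rng.dirichlet(np.ones(5), size=(4, 4, 4)).transpose(3, 0, 1, 2)
print(fuse_mas_and_network(atlases, probs).shape)  # (4, 4, 4)
```
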
Radiomics and quantitative multi-parametric MRI for predicting uterine fibroid growth.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-09-01 | Epub Date: 2024-09-12 | DOI: 10.1117/1.JMI.11.5.054501
Karen Drukker, Milica Medved, Carla B Harmath, Maryellen L Giger, Obianuju S Madueke-Laveaux

Significance: Uterine fibroids (UFs) can pose a serious health risk to women. UFs are benign tumors that vary in clinical presentation from asymptomatic to causing debilitating symptoms. UF management is limited by our inability to predict UF growth rate and future morbidity.

Aim: We aim to develop a predictive model to identify UFs with increased growth rates and possible resultant morbidity.

Approach: We retrospectively analyzed 44 expertly outlined UFs from 20 patients who underwent two multi-parametric MR imaging exams as part of a prospective study over an average of 16 months. We identified 44 initial features by extracting quantitative magnetic resonance imaging (MRI) features plus morphological and textural radiomics features from DCE, T2, and apparent diffusion coefficient sequences. Principal component analysis reduced dimensionality, with the smallest number of components explaining over 97.5% of the variance selected. Employing a leave-one-fibroid-out scheme, a linear discriminant analysis classifier utilized these components to output a growth risk score.

Results: The classifier incorporated the first three principal components and achieved an area under the receiver operating characteristic curve of 0.80 (95% confidence interval [0.69; 0.91]), effectively distinguishing UFs growing faster than the median growth rate of 0.93 cm³/year/fibroid from slower-growing ones within the cohort. Time-to-event analysis, dividing the cohort based on the median growth risk score, yielded a hazard ratio of 0.33 [0.15; 0.76], demonstrating potential clinical utility.

Conclusion: We developed a promising predictive model utilizing quantitative MRI features and principal component analysis to identify UFs with increased growth rates. Furthermore, the model's discrimination ability supports its potential clinical utility in developing tailored patient- and fibroid-specific management once validated on a larger cohort.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391479/pdf/
Citations: 0

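The PCA-plus-LDA pipeline with a leave-one-fibroid-out scheme maps naturally onto scikit-learn. The sketch below assumes one row per fibroid, synthetic feature values and labels, and uses the LDA decision value as the growth risk score; the standardization step is an added assumption.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(44, 44))        # 44 fibroids x 44 radiomic/quantitative MRI features
y = np.repeat([0, 1], 22)            # 1 = growth above the median rate (toy labels)

# PCA keeps the smallest number of components explaining >97.5% of the variance;
# the LDA decision value serves as a continuous growth risk score.
pipe = make_pipeline(StandardScaler(), PCA(n_components=0.975),
                     LinearDiscriminantAnalysis())

scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):   # leave-one-fibroid-out
    pipe.fit(X[train], y[train])
    scores[test] = pipe.decision_function(X[test])

print("AUC:", roc_auc_score(y, scores))
```
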
Projected pooling loss for red nucleus segmentation with soft topology constraints.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-07-01 | Epub Date: 2024-07-09 | DOI: 10.1117/1.JMI.11.4.044002
Guanghui Fu, Rosana El Jurdi, Lydia Chougar, Didier Dormont, Romain Valabregue, Stéphane Lehéricy, Olivier Colliot

Purpose: Deep learning is the standard for medical image segmentation. However, it may encounter difficulties when the training set is small, and it may generate anatomically aberrant segmentations. Anatomical knowledge can be potentially useful as a constraint in deep learning segmentation methods. We propose a loss function based on projected pooling to introduce soft topological constraints. Our main application is the segmentation of the red nucleus from quantitative susceptibility mapping (QSM), which is of interest in parkinsonian syndromes.

Approach: This new loss function introduces soft constraints on the topology by magnifying small parts of the structure to segment, so that they are not discarded in the segmentation process. To that purpose, we project the structure onto the three planes and then apply a series of MaxPooling operations with increasing kernel sizes. These operations are performed for both the ground truth and the prediction, and the difference is computed to obtain the loss function. As a result, it can reduce topological errors as well as defects in the structure boundary. The approach is easy to implement and computationally efficient.

Results: When applied to the segmentation of the red nucleus from QSM data, the approach led to very high accuracy (Dice 89.9%) and no topological errors. Moreover, the proposed loss function improved the Dice accuracy over the baseline when the training set was small. We also studied three tasks from the medical segmentation decathlon (MSD) challenge (heart, spleen, and hippocampus). For the MSD tasks, the Dice accuracies were similar for both approaches, but the topological errors were reduced.

Conclusions: We propose an effective method to automatically segment the red nucleus, based on a new loss for introducing topology constraints in deep learning segmentation.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232703/pdf/
Citations: 0

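The loss described in the Approach (projection onto the three planes, followed by MaxPooling at increasing kernel sizes applied to both ground truth and prediction, then a difference) can be sketched in PyTorch as below. The specific kernel sizes, the use of a max-projection, and the L1 difference are assumptions where the abstract does not give details.

```python
import torch
import torch.nn.functional as F

def projected_pooling_loss(pred: torch.Tensor, target: torch.Tensor,
                           kernel_sizes=(3, 5, 9, 17)) -> torch.Tensor:
    """Soft topology loss in the spirit of the projected pooling idea:
    project the 3D probability maps onto the three orthogonal planes,
    magnify small structures with max-pooling at increasing kernel sizes,
    and penalize prediction/ground-truth differences.

    pred, target: tensors of shape (B, C, D, H, W) with values in [0, 1].
    """
    loss = pred.new_zeros(())
    for axis in (2, 3, 4):                   # project along D, H, W in turn
        p2d = pred.max(dim=axis).values      # (B, C, ., .)
        t2d = target.max(dim=axis).values
        for k in kernel_sizes:               # growing receptive field
            pad = k // 2
            p_pool = F.max_pool2d(p2d, kernel_size=k, stride=1, padding=pad)
            t_pool = F.max_pool2d(t2d, kernel_size=k, stride=1, padding=pad)
            loss = loss + (p_pool - t_pool).abs().mean()
    return loss / (3 * len(kernel_sizes))

# toy usage alongside a standard segmentation loss
pred = torch.rand(1, 1, 24, 24, 24, requires_grad=True)
target = (torch.rand(1, 1, 24, 24, 24) > 0.7).float()
total = F.binary_cross_entropy(pred, target) + 0.5 * projected_pooling_loss(pred, target)
total.backward()
```

Using stride-1 pooling with odd kernels and symmetric padding keeps the pooled maps the same size as the projections, so the prediction and ground-truth maps stay aligned pixel-wise at every scale.
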
Pulmonary nodule detection in low dose computed tomography using a medical-to-medical transfer learning approach.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-07-01 | Epub Date: 2024-07-09 | DOI: 10.1117/1.JMI.11.4.044502
Jenita Manokaran, Richa Mittal, Eranga Ukwatta

Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT will greatly improve the existing clinical workflow. Most of the existing methods for lung nodule detection are designed for high-dose CTs (HDCTs), and those methods cannot be directly applied to LDCTs due to domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs.

Approach: We developed an algorithm based on the object detection model "you only look once" (YOLO) to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as initial weights during the retraining of the model on LDCTs using a medical-to-medical transfer learning approach. The dataset for this study was from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients obtained over 3 consecutive years (T1, T2, and T3). About 60 lung cancer patients' HDCTs were obtained from a public dataset. The developed model was evaluated using a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the 3 years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and a chi-squared test with an alpha value of 0.05 were used for statistical significance testing.

Results: The results compare the proposed model developed using HDCT pre-trained weights with the one using COCO pre-trained weights. The former approach versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodule progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis performed in the comparative study resulted in a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity.

Conclusions: In this study, a semi-automated method was developed to detect lung nodules in LDCTs using HDCT pre-trained weights as the initial weights and retraining the model. Further, the results were compared by replacing the HDCT pre-trained weights in the above approach with COCO pre-trained weights. The proposed method may identify early lung nodules during the screening program, re…

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232701/pdf/
Citations: 0

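A minimal sketch of the medical-to-medical transfer step, using a generic PyTorch stand-in rather than the authors' YOLO configuration: weights learned on HDCT data are loaded as the initialization for retraining on LDCT. The checkpoint name, toy model, surrogate loss, and learning rate are hypothetical.

```python
import torch
import torch.nn as nn

# Stand-in detector; in the paper this would be a YOLO model, not reproduced here.
def make_detector() -> nn.Module:
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))

# Stage 1 (done elsewhere): train on HDCT and save the weights.
hdct_model = make_detector()
torch.save(hdct_model.state_dict(), "hdct_pretrained.pt")    # hypothetical checkpoint

# Stage 2: medical-to-medical transfer. Initialize the LDCT model from the
# HDCT checkpoint instead of COCO weights, then fine-tune on LDCT slices.
ldct_model = make_detector()
state = torch.load("hdct_pretrained.pt", map_location="cpu")
ldct_model.load_state_dict(state, strict=False)

opt = torch.optim.SGD(ldct_model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.MSELoss()                                     # toy surrogate loss
for images, targets in [(torch.randn(2, 1, 64, 64), torch.randn(2, 5))]:
    loss = criterion(ldct_model(images), targets)
    opt.zero_grad(); loss.backward(); opt.step()
```
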
Exploring single-shot propagation and speckle based phase recovery techniques for object thickness estimation by using a polychromatic X-ray laboratory source.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-07-01 | Epub Date: 2024-07-25 | DOI: 10.1117/1.JMI.11.4.043501
Diego Rosich, Margarita Chevalier, Adrián Belarra, Tatiana Alieva

Purpose: Propagation- and speckle-based techniques allow reconstruction of the phase of an X-ray beam with a simple experimental setup. Furthermore, their implementation is feasible using low-coherence laboratory X-ray sources. We investigate different approaches to account for X-ray polychromaticity in sample thickness recovery using such techniques.

Approach: Single-shot Paganin (PT) and Arhatari (AT) propagation-based and speckle-based (ST) techniques are considered. The polychromaticity of the radiation beam is addressed using three different averaging approaches. The emission-detection process is considered for modulating the X-ray beam spectrum. Reconstructed thicknesses of three nylon-6 fibers with diameters in the millimeter range, placed at various object-detector distances, are analyzed. In addition, the thickness of an in-house made breast phantom is recovered using the multi-material Paganin technique (MPT) and compared with micro-CT data.

Results: The best quantitative result is obtained for PT and ST combined with the sample thickness averaging (TA) approach, which involves individual thickness recovery for each X-ray spectral component, and the smallest considered object-detector distance. The error in the recovered fiber diameters for both techniques is <4%, despite the higher noise level in ST images. All cases provide estimates of fiber diameter ratios with an error of 3% with respect to the nominal diameter ratios. The breast phantom thickness difference between MPT-TA and micro-CT is about 10%.

Conclusions: We demonstrate the feasibility of the single-shot PT-TA and ST-TA techniques for thickness recovery of millimeter-sized samples using polychromatic microfocus X-ray sources. The application of MPT-TA to thicker and multi-material samples is promising.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11272094/pdf/
Citations: 0

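The thickness-averaging (TA) idea, recovering a thickness map per spectral component and then averaging, can be sketched with the standard single-material Paganin-type filter. The spectral weights, delta and mu values, geometry, and uniform toy images below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def paganin_thickness(intensity, flat, delta, mu, dist, pixel):
    """Single-shot Paganin-type thickness retrieval for one spectral component.

    intensity: propagated image I(x, y) at object-detector distance `dist` (m)
    flat:      flat-field image I0(x, y)
    delta, mu: refractive index decrement and linear attenuation coefficient (1/m)
               of the (assumed single) material at this energy
    """
    contrast = intensity / flat
    u = np.fft.fftfreq(contrast.shape[1], d=pixel)
    v = np.fft.fftfreq(contrast.shape[0], d=pixel)
    uu, vv = np.meshgrid(u, v)
    filt = 1.0 + (delta * dist / mu) * (4.0 * np.pi ** 2) * (uu ** 2 + vv ** 2)
    retrieved = np.real(np.fft.ifft2(np.fft.fft2(contrast) / filt))
    return -np.log(np.clip(retrieved, 1e-8, None)) / mu   # thickness map (m)

def thickness_averaging(images, flats, deltas, mus, weights, dist, pixel):
    """TA approach from the abstract: recover a thickness map per spectral
    component, then combine the maps with spectral weights."""
    maps = [paganin_thickness(i, f, d, m, dist, pixel)
            for i, f, d, m in zip(images, flats, deltas, mus)]
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(maps), axes=1)

# toy usage: two spectral components of a flat 256x256 "measurement"
img = np.full((256, 256), 0.9)
flat = np.ones((256, 256))
t = thickness_averaging([img, img], [flat, flat],
                        deltas=[4e-7, 3e-7], mus=[80.0, 60.0],  # assumed nylon-like values
                        weights=[0.6, 0.4], dist=0.5, pixel=50e-6)
print(t.mean())
```
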
Assessment of image quality and establishment of local acceptable quality dose for computed tomography based on patient effective diameter.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-07-01 | Epub Date: 2024-08-16 | DOI: 10.1117/1.JMI.11.4.043502
Nada Hasan, Chadia Rizk, Fatema Marzooq, Khalid Khan, Maryam AlKhaja, Esameldeen Babikir

Purpose: We aim to develop modified clinical indication (CI)-based image quality scoring criteria (IQSC) for assessing image quality (IQ) and establishing acceptable quality doses (AQDs) in adult computed tomography (CT) examinations, based on CIs and patient sizes.

Approach: CT images, volume CT dose index (CTDIvol), and dose length product (DLP) were collected retrospectively between September 2020 and September 2021 for eight common CIs from two CT scanners at a central hospital in the Kingdom of Bahrain. Using the modified CI-based IQSC and a Likert scale (0 to 4), three radiologists assessed the IQ of each examination. AQDs were then established as the median values of CTDIvol and DLP for images with an average score of 3 and compared to national diagnostic reference levels (NDRLs).

Results: Out of 581 examinations, 60 were excluded from the study due to average scores above or below 3. The established AQDs were lower than the NDRLs for all CIs, except AQDs/CTDIvol for oncologic follow-up in large patients (28 versus 26 mGy) on scanner A, and for abdominal pain in medium patients (16 versus 15 mGy) and large patients (34 versus 27 mGy) and diverticulitis/appendicitis in medium patients (15 versus 12 mGy) and large patients (33 versus 30 mGy) on scanner B, indicating the need for optimization.

Conclusions: CI-based IQSC is crucial for IQ assessment and for establishing AQDs according to patient size. It identifies stations requiring optimization of patient radiation exposure.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11328147/pdf/
Citations: 0

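Establishing AQDs as the median CTDIvol and DLP of examinations with an average score of 3, stratified by clinical indication and patient size group, is essentially a group-by-median operation. The pandas sketch below uses hypothetical column names, records, and NDRL values; it is not the study's data.

```python
import pandas as pd

# Hypothetical per-examination records: clinical indication, size group derived
# from effective diameter, mean reader IQ score, and the dose indices.
df = pd.DataFrame({
    "clinical_indication": ["abdominal pain"] * 4 + ["oncologic follow-up"] * 4,
    "size_group": ["medium", "medium", "large", "large"] * 2,
    "iq_score": [3.0, 3.0, 2.7, 3.0, 3.0, 3.3, 3.0, 3.0],
    "ctdi_vol_mgy": [14, 16, 30, 34, 20, 24, 27, 29],
    "dlp_mgy_cm": [600, 650, 900, 950, 700, 780, 880, 910],
})

# Keep only examinations whose average reader score equals 3 ("acceptable"),
# then take the median CTDIvol and DLP per indication and size group as the AQD.
acceptable = df[df["iq_score"] == 3.0]
aqd = (acceptable
       .groupby(["clinical_indication", "size_group"])[["ctdi_vol_mgy", "dlp_mgy_cm"]]
       .median()
       .rename(columns={"ctdi_vol_mgy": "AQD_CTDIvol", "dlp_mgy_cm": "AQD_DLP"}))

# Flag combinations whose AQD exceeds a (hypothetical) national DRL.
ndrl_ctdi = {"medium": 15, "large": 27}
aqd["needs_optimization"] = [aqd_val > ndrl_ctdi[size]
                             for (_, size), aqd_val in aqd["AQD_CTDIvol"].items()]
print(aqd)
```
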
Use of reporting templates for chest radiographs in a coronavirus disease 2019 context: measuring concordance of radiologists with three international templates.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-07-01 | Epub Date: 2024-08-28 | DOI: 10.1117/1.JMI.11.4.045504
Sarah J Lewis, Jayden B Wells, Warren M Reed, Claudia Mello-Thoms, Peter A O'Reilly, Marion Dimigen

Purpose: Reporting templates for chest radiographs (CXRs) of patients presenting with, or being clinically managed for, severe acute respiratory syndrome coronavirus 2 [coronavirus disease 2019 (COVID-19)] have attracted advocacy from international radiology societies. We aim to explore the effectiveness and useability of three international templates through the concordance of, and between, radiologists reporting on the presence and severity of COVID-19 on CXRs.

Approach: Seventy CXRs were obtained from a referral hospital: 50 from patients with COVID-19 (30 rated "classic" COVID-19 appearance and 20 "indeterminate"), plus 10 "normal" and 10 "alternative pathology" CXRs. The recruited radiologists were assigned to three test sets containing the same CXRs but with different template orders. Each radiologist read their test set three times and assigned a classification to each CXR using the Royal Australian and New Zealand College of Radiologists (RANZCR), British Society of Thoracic Imaging (BSTI), and Modified COVID-19 Reporting and Data System (Dutch; mCO-RADS) templates. Inter-reader and intra-reader variability were measured using Fleiss' kappa coefficient.

Results: Twelve Australian radiologists participated. The BSTI template had the highest inter-reader agreement (0.46; "moderate" agreement), followed by RANZCR (0.45) and mCO-RADS (0.32). Concordance was driven by strong agreement on the "normal" and "alternative" classifications and was lowest for "indeterminate." General consistency was observed across classifications and templates, with intra-reader variability ranging from "good" to "very good" for COVID-19 CXRs (0.61), "normal" CXRs (0.76), and "alternative" CXRs (0.68).

Conclusions: Reporting templates may be useful in reducing variation among radiology reports, with intra-reader variability showing promise. Feasibility and implementation require a wider approach, including referring and treating doctors, plus the development of training packages for radiologists specific to the template being used.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11349612/pdf/
Citations: 0

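Inter-reader agreement with Fleiss' kappa can be computed from a subjects-by-readers table of template classifications, for example with statsmodels as sketched below; the toy ratings and the category coding are invented for illustration.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy example: 6 chest radiographs classified by 12 readers into one of the
# template categories 0="normal", 1="indeterminate", 2="classic COVID-19",
# 3="alternative pathology". Real ratings would come from the reading study.
rng = np.random.default_rng(42)
ratings = rng.integers(0, 4, size=(6, 12))    # rows = CXRs, columns = readers

# aggregate_raters converts (subjects x raters) labels into per-category counts,
# which is the input format fleiss_kappa expects.
counts, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(counts, method="fleiss"))
```

Computing the same statistic separately for each template (and for each reader across their three reads, for intra-reader agreement) reproduces the kind of comparison reported above.
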
Learning carotid vessel wall segmentation in black-blood MRI using sparsely sampled cross-sections from 3D data.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-07-01 | Epub Date: 2024-07-12 | DOI: 10.1117/1.JMI.11.4.044503
Hinrich Rahlfs, Markus Hüllebrand, Sebastian Schmitter, Christoph Strecker, Andreas Harloff, Anja Hennemuth

Purpose: Atherosclerosis of the carotid artery is a major risk factor for stroke. Quantitative assessment of the carotid vessel wall can be based on cross-sections of three-dimensional (3D) black-blood magnetic resonance imaging (MRI). To increase reproducibility, a reliable automatic segmentation in these cross-sections is essential.

Approach: We propose an automatic segmentation of the carotid artery in cross-sections perpendicular to the centerline, to make the segmentation invariant to the image plane orientation and allow a correct assessment of the vessel wall thickness (VWT). We trained a residual U-Net on eight sparsely sampled cross-sections per carotid artery and evaluated whether the model can segment areas that are not represented in the training data. We used 218 MRI datasets of 121 subjects with hypertension and plaque in the ICA or CCA measuring ≥1.5 mm on ultrasound.

Results: The model achieves a high mean Dice coefficient of 0.948/0.859 for the vessel's lumen/wall, a low mean Hausdorff distance of 0.417/0.660 mm, and a low mean average contour distance of 0.094/0.119 mm on the test set. The model reaches similar results for regions of the carotid artery that are not incorporated in the training set and on MRI of young, healthy subjects. The model also achieves a low median Hausdorff distance of 0.437/0.552 mm on the 2021 Carotid Artery Vessel Wall Segmentation Challenge test set.

Conclusions: The proposed method can reduce the effort for carotid artery vessel wall assessment. Together with human supervision, it can be used for clinical applications, as it allows a reliable measurement of the VWT for different patient demographics and MRI acquisition settings.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11245174/pdf/
Citations: 0

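The reported metrics, Dice overlap and the 95th-percentile Hausdorff distance, can be computed per cross-section as sketched below; the pixel spacing and the toy ring-shaped masks standing in for vessel-wall segmentations are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (HD-95) in mm.

    Surface pixels are found by erosion; each surface pixel's distance to the
    other mask's surface is read from a Euclidean distance transform computed
    on the complement of that surface."""
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dists = np.concatenate([dt_b[surf_a], dt_a[surf_b]])
    return float(np.percentile(dists, 95))

# toy cross-section: two slightly shifted rings standing in for wall masks
yy, xx = np.mgrid[:64, :64]
ring = lambda cx, cy: ((np.hypot(xx - cx, yy - cy) < 20) &
                       (np.hypot(xx - cx, yy - cy) > 12))
gt, pred = ring(32, 32), ring(33, 32)
print(f"Dice={dice(gt, pred):.3f}, HD95={hd95(gt, pred, spacing=(0.6, 0.6)):.2f} mm")
```
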