Journal of Medical Imaging — Latest Articles

Correlation of objective image quality metrics with radiologists' diagnostic confidence depends on the clinical task performed.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-04-11 | DOI: 10.1117/1.JMI.12.5.051803
Michelle C Pryde, James Rioux, Adela Elena Cora, David Volders, Matthias H Schmidt, Mohammed Abdolell, Chris Bowen, Steven D Beyea
Purpose: Objective image quality metrics (IQMs) are widely used as outcome measures to assess acquisition and reconstruction strategies for diagnostic images. For nonpathological magnetic resonance (MR) images, these IQMs correlate to varying degrees with expert radiologists' confidence scores of overall perceived diagnostic image quality. However, it is unclear whether IQMs also correlate with task-specific diagnostic image quality or expert radiologists' confidence in performing a specific diagnostic task, which calls into question their use as surrogates for radiologist opinion.
Approach: 0.5 T MR images from 16 stroke patients and two healthy volunteers were retrospectively undersampled (R = 1 to 7×) and reconstructed via compressed sensing. Three neuroradiologists reported the presence/absence of acute ischemic stroke (AIS) and assigned a Fazekas score describing the extent of chronic ischemic lesion burden. Neuroradiologists rated their confidence in performing each task on a 1 to 5 Likert scale. Confidence scores were correlated with noise quality measure, the visual information fidelity criterion, the feature similarity index, root mean square error, and structural similarity (SSIM) via nonlinear regression modeling.
Results: Although acceleration alters image quality, neuroradiologists remained able to report pathology. All of the IQMs tested correlated to some degree with diagnostic confidence for assessing chronic ischemic lesion burden, but none correlated with diagnostic confidence in diagnosing the presence/absence of AIS, owing to consistent radiologist performance regardless of image degradation.
Conclusions: Accelerated images were helpful for understanding the ability of IQMs to assess task-specific diagnostic image quality in the context of chronic ischemic lesion burden, although not in the case of AIS diagnosis. These findings suggest that commonly used IQMs, such as the SSIM index, do not necessarily indicate an image's utility when performing certain diagnostic tasks.
Citations: 0
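The IQMs this study correlates with reader confidence are computed directly from image pairs. A minimal numpy sketch of RMSE and a single-window ("global") SSIM on a synthetic reference/degraded pair — note the standard SSIM averages this statistic over local sliding windows, so this is a simplification:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between reference and test image."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def global_ssim(ref, img, data_range=1.0):
    """Single-window SSIM; the standard metric averages this
    statistic over local sliding windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                 # reference image
noisy = np.clip(ref + rng.normal(0, 0.1, ref.shape), 0, 1)  # degraded copy
print(rmse(ref, ref), global_ssim(ref, ref))               # identical images
print(rmse(ref, noisy), global_ssim(ref, noisy))           # degraded pair
```

For identical images RMSE is 0 and SSIM is 1; degradation drives SSIM below 1, which is the behavior the study correlates against task confidence.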
Contrast-enhanced spectral mammography demonstrates better inter-reader repeatability than digital mammography for screening breast cancer patients.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-06-18 | DOI: 10.1117/1.JMI.12.5.051806
Alisa Mohebbi, Ali Abdi, Saeed Mohammadzadeh, Mohammad Mirza-Aghazadeh-Attari, Ali Abbasian Ardakani, Afshin Mohammadi
Purpose: To assess the inter-rater agreement between digital mammography (DM) and contrast-enhanced spectral mammography (CESM) in Breast Imaging Reporting and Data System (BI-RADS) grading.
Approach: This retrospective study included 326 patients recruited between January 2019 and February 2021. The study protocol was pre-registered on the Open Science Framework platform. Two expert radiologists interpreted the CESM and DM findings. Pathological data were used as the reference for radiologically suspicious or malignant-appearing lesions, whereas follow-up was considered the gold standard for benign-appearing lesions and breasts without lesions.
Results: For intra-device agreement, both imaging modalities showed "almost perfect" agreement, indicating that different radiologists are expected to report the same BI-RADS score for the same image. Despite the similar interpretation, a paired t-test showed significantly higher agreement for CESM than for DM (p < 0.001). Subgrouping by side or view showed no considerable difference for either modality. For inter-device agreement, "almost perfect" agreement was also achieved. However, for proven malignant lesions, CESM yielded an overall higher BI-RADS score, whereas for benign or normal breasts it yielded a lower score, indicating a more precise BI-RADS classification for CESM compared with DM.
Conclusions: Our findings demonstrate strong agreement among readers regarding the identification of DM and CESM findings in breast images from various views. Moreover, they indicate that CESM is at least as precise as DM and can be used as an alternative in clinical centers.
Citations: 0
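The "almost perfect" agreement wording refers to the conventional interpretation scale for chance-corrected agreement coefficients such as Cohen's kappa. A minimal numpy sketch of unweighted Cohen's kappa for two readers' BI-RADS scores — the scores below are hypothetical, not data from the study:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, categories):
    """Unweighted Cohen's kappa: chance-corrected agreement between
    two raters scoring the same cases."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    po = np.mean(a == b)  # observed proportion of agreement
    # expected agreement by chance, from each rater's marginal rates
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return float((po - pe) / (1 - pe))

# Hypothetical BI-RADS categories (1-5) assigned by two readers
reader1 = [1, 2, 2, 3, 4, 4, 5, 3, 2, 1]
reader2 = [1, 2, 3, 3, 4, 4, 5, 3, 2, 2]
print(round(cohens_kappa(reader1, reader2, range(1, 6)), 3))
```

Identical score lists give kappa = 1.0; values above roughly 0.8 are conventionally read as "almost perfect" agreement.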
Breast cancer survivors' perceptual map of breast reconstruction appearance outcomes.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-03-19 | DOI: 10.1117/1.JMI.12.5.051802
Haoqi Wang, Xiomara T Gonzalez, Gabriela A Renta-López, Mary Catherine Bordes, Michael C Hout, Seung W Choi, Gregory P Reece, Mia K Markey
Purpose: It is often hard for patients to articulate their expectations about breast reconstruction appearance outcomes to their providers. Our overarching goal is to develop a tool that helps patients visually express what they expect to look like after reconstruction. We aim to comprehensively understand how breast cancer survivors perceive diverse breast appearance states by mapping them onto a low-dimensional Euclidean space, which condenses complex perceptual similarity relationships into a more interpretable form.
Approach: We recruited breast cancer survivors and conducted observer experiments to assess the visual similarities among clinical photographs depicting a range of appearances of the torso relevant to breast reconstruction. We then constructed a perceptual map to illuminate how breast cancer survivors perceive and distinguish among these appearance states.
Results: We sampled 100 photographs as stimuli and recruited 34 breast cancer survivors locally. The resulting two-dimensional perceptual map offers valuable insights into factors influencing breast cancer survivors' perceptions of breast reconstruction outcomes. Our findings highlight specific aspects, such as the number of nipples, symmetry, ptosis, scars, and breast shape, as particularly noteworthy for breast cancer survivors.
Conclusions: Analysis of the perceptual map identified factors associated with breast cancer survivors' perceptions of breast appearance states that should be emphasized in the appearance consultation process. The perceptual map could be used to assist patients in visually expressing what they expect to look like. Our study lays the groundwork for evaluating interventions intended to help patients form realistic expectations.
Citations: 0
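A perceptual map of this kind is typically built by embedding a pairwise dissimilarity matrix into a low-dimensional Euclidean space. The abstract does not state which embedding algorithm was used, so the following classical (Torgerson) MDS sketch in numpy is purely illustrative:

```python
import numpy as np

def classical_mds(dissim, dim=2):
    """Classical (Torgerson) MDS: embed a symmetric dissimilarity
    matrix into `dim` Euclidean dimensions."""
    d2 = np.asarray(dissim, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ d2 @ j                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]  # keep largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Four stimuli whose dissimilarities are exactly realizable in 2D
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(dist, dim=2)
rec = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
print(np.allclose(rec, dist))  # distances recovered up to rotation
```

When the dissimilarities are exactly Euclidean, the 2D embedding reproduces all pairwise distances; with real similarity judgments the map is only an approximation, which is why its axes must be interpreted post hoc.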
Machine learning evaluation of pneumonia severity: subgroup performance in the Medical Imaging and Data Resource Center modified radiographic assessment of lung edema mastermind challenge.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-10-07 | DOI: 10.1117/1.JMI.12.5.054502
Karen Drukker, Samuel G Armato, Lubomir Hadjiiski, Judy Gichoya, Nicholas Gruszauskas, Jayashree Kalpathy-Cramer, Hui Li, Kyle J Myers, Robert M Tomek, Heather M Whitney, Zi Zhang, Maryellen L Giger
Purpose: The Medical Imaging and Data Resource Center Mastermind Grand Challenge on modified radiographic assessment of lung edema (mRALE) tasked participants with developing machine learning techniques for automated COVID-19 severity assessment via mRALE scores on portable chest radiographs (CXRs). We examine potential biases across demographic subgroups for the best-performing models of the nine teams participating in the test phase of the challenge.
Approach: Models were evaluated against a nonpublic test set of CXRs (814 patients) annotated by radiologists for disease severity (mRALE score 0 to 24). Participants used a variety of data and methods for training. Performance was measured using quadratic-weighted kappa (QWK). Bias analyses considered demographics (sex, age, race, ethnicity, and their intersections) using QWK. In addition, for distinguishing no/mild from moderate/severe disease, equal opportunity difference (EOD) and average absolute odds difference (AAOD) were calculated. Bias was defined as statistically significant QWK subgroup differences, EOD outside [-0.1; 0.1], or AAOD outside [0; 0.1].
Results: The nine models demonstrated good agreement with the reference standard (QWK 0.74 to 0.88). The winning model (QWK = 0.884 [0.819; 0.949]) was the only model without identified QWK biases. The runner-up model (QWK = 0.874 [0.813; 0.936]) showed no identified biases in terms of EOD and AAOD, whereas the winning model disadvantaged three subgroups in each of these metrics. The median number of disadvantaged subgroups across all models was 3.
Conclusions: The challenge demonstrated strong model performance but identified subgroup disparities. Bias analysis is essential, as models with similar accuracy may exhibit varying fairness.
Citations: 0
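EOD and AAOD compare error rates of a binary decision (here, no/mild vs. moderate/severe) between a subgroup and a reference group. A minimal sketch under the common definitions — EOD as the true-positive-rate gap, AAOD as the mean of the absolute TPR and FPR gaps — using made-up labels, not challenge data:

```python
import numpy as np

def rates(y_true, y_pred):
    """True-positive and false-positive rates of binary predictions."""
    tpr = np.mean(y_pred[y_true == 1] == 1)
    fpr = np.mean(y_pred[y_true == 0] == 1)
    return tpr, fpr

def eod_aaod(y_true, y_pred, group):
    """Equal opportunity difference (TPR gap) and average absolute
    odds difference (mean absolute TPR and FPR gaps) between
    group==1 (subgroup) and group==0 (reference)."""
    y_true, y_pred, g = map(np.asarray, (y_true, y_pred, group))
    tpr1, fpr1 = rates(y_true[g == 1], y_pred[g == 1])
    tpr0, fpr0 = rates(y_true[g == 0], y_pred[g == 0])
    eod = tpr1 - tpr0
    aaod = 0.5 * (abs(tpr1 - tpr0) + abs(fpr1 - fpr0))
    return eod, aaod

# Hypothetical severity decisions for eight patients in two groups
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
eod, aaod = eod_aaod(y_true, y_pred, group)
print(eod, aaod)
```

Under the challenge's criteria, this toy model would be flagged as biased: EOD falls outside [-0.1; 0.1] and AAOD outside [0; 0.1].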
Deep-learning-based estimation of left ventricle myocardial strain from echocardiograms with occlusion artifacts.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-09-27 | DOI: 10.1117/1.JMI.12.5.054002
Alan Romero-Pacheco, Nidiyare Hevia-Montiel, Blanca Vazquez, Fernando Arámbula Cosío, Jorge Perez-Gonzalez
Purpose: We present a deep-learning-based methodology for estimating deformation in 2D echocardiograms. The goal is to automatically estimate the longitudinal strain of the left ventricle (LV) walls in images affected by speckle noise and acoustic occlusions.
Approach: The proposed methodology integrates algorithms for converting sparse to dense flow, a Res-UNet architecture for automatic myocardium segmentation, flow estimation using a global motion aggregation network, and the computation of longitudinal strain curves and the global longitudinal strain (GLS) index. The approach was evaluated using two echocardiographic datasets in the apical four-chamber view, both modified with noise and acoustic shadows. The CAMUS dataset (N = 250) was used for LV wall segmentation, whereas a synthetic image database (N = 2037) was employed for flow estimation.
Results: The main performance metrics achieved were 98% [96; 99] correlation in the conversion from sparse to dense flow, a Dice index of 88.2% ± 3.8% for myocardial segmentation, an endpoint error of 0.133 [0.13; 0.14] pixels in flow estimation, and an error of 1.34% [0.94; 2.09] in the estimation of the GLS index.
Conclusions: The results demonstrate improvements over previously reported performances while maintaining stability in echocardiograms with acoustic shadows. This methodology could be useful in clinical practice for the analysis of echocardiograms with noise artifacts and acoustic occlusions. Our code and trained models are publicly available at https://github.com/ArBioIIMAS/echo-gma.
Citations: 0
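Longitudinal strain is the relative change in myocardial contour length from the end-diastolic reference, conventionally reported as a percentage. A minimal sketch, assuming (hypothetically — the paper's exact representation is not given here) that the tracked LV wall is available as an ordered polyline of points per frame:

```python
import numpy as np

def contour_length(points):
    """Arc length of an ordered 2D polyline (N x 2 array)."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def longitudinal_strain(ref_points, frame_points):
    """Lagrangian strain (%) of a tracked wall contour relative to
    the end-diastolic reference: 100 * (L - L0) / L0."""
    l0 = contour_length(ref_points)
    return 100.0 * (contour_length(frame_points) - l0) / l0

# Toy contour that shortens by 10% (uniform scaling toward origin)
ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
systole = 0.9 * ref
print(longitudinal_strain(ref, systole))  # ≈ -10.0 (shortening)
```

Negative values indicate shortening; the GLS index then summarizes the strain curve (typically its peak systolic value) over the full wall.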
Analysis of intra- and inter-observer variability in 4D liver ultrasound landmark labeling.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-06-30 | DOI: 10.1117/1.JMI.12.5.051807
Daniel Wulff, Floris Ernst
Purpose: Four-dimensional (4D) ultrasound imaging is widely used in clinics for diagnostics and therapy guidance. Accurate target tracking in 4D ultrasound is crucial for autonomous therapy guidance systems, such as radiotherapy, where precise tumor localization ensures effective treatment. Supervised deep learning approaches rely on reliable ground truth, making accurate labels essential. We investigate the reliability of expert-labeled ground truth data by evaluating intra- and inter-observer variability in landmark labeling for 4D ultrasound imaging of the liver.
Approach: Eight 4D liver ultrasound sequences were labeled by eight expert observers, each labeling eight landmarks three times. Intra- and inter-observer variability was quantified, and an observer survey and motion analysis were conducted to determine factors influencing labeling accuracy, such as ultrasound artifacts and motion amplitude.
Results: The mean intra-observer variability ranged from 1.58 mm ± 0.90 mm to 2.05 mm ± 1.22 mm, depending on the observer. The inter-observer variability for the two observer groups was 2.68 mm ± 1.69 mm and 3.06 mm ± 1.74 mm. The observer survey and motion analysis revealed that ultrasound artifacts significantly affected labeling accuracy due to limited landmark visibility, whereas motion amplitude had no measurable effect. The measured mean landmark motion was 11.56 mm ± 5.86 mm.
Conclusions: We highlight variability in expert-labeled ground truth data for 4D ultrasound imaging and identify ultrasound artifacts as a major source of labeling inaccuracies. These findings underscore the importance of addressing observer variability and artifact-related challenges to improve the reliability of ground truth data for evaluating target tracking algorithms in 4D ultrasound applications.
Citations: 0
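Observer variability of this kind is commonly quantified as Euclidean spread of repeated landmark annotations in millimeters. A minimal sketch — the exact formulas used in the paper are not given in the abstract, so this shows one common convention (spread around the per-observer mean for intra-observer, pairwise distances between observer means for inter-observer):

```python
import numpy as np

def intra_observer_variability(labels):
    """Mean Euclidean distance (mm) of one observer's repeated
    annotations of a landmark (R x 3 array) from their mean position."""
    pts = np.asarray(labels, dtype=float)
    center = pts.mean(axis=0)
    return float(np.mean(np.linalg.norm(pts - center, axis=1)))

def inter_observer_variability(observer_means):
    """Mean pairwise Euclidean distance (mm) between observers'
    mean positions for one landmark (O x 3 array)."""
    pts = np.asarray(observer_means, dtype=float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)  # upper triangle, no diagonal
    return float(d[iu].mean())

# Three repeated labels (mm) of one landmark by one observer
repeats = np.array([[10.0, 5.0, 3.0], [11.0, 5.0, 3.0], [9.0, 5.0, 3.0]])
print(intra_observer_variability(repeats))
```

With eight observers, eight landmarks, and three repeats per the study design, these per-landmark values would then be averaged per observer or per group.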
Evaluation of algorithmic requirements for clinical application of material decomposition using a multi-layer flat panel detector.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-09-04 | DOI: 10.1117/1.JMI.12.5.053501
Jamin Schaefer, Steffen Kappler, Ferdinand Lueck, Ludwig Ritschl, Thomas Weber, Georg Rose
Purpose: The combination of multi-layer flat panel detector (FPDT) X-ray imaging and physics-based material decomposition algorithms allows for the removal of anatomical structures. However, the reliability of these algorithms may be compromised by unaccounted-for materials or scattered radiation.
Approach: We investigated the two-material decomposition performance of a multi-layer FPDT in the context of 2D chest radiography, without and with a 13:1 anti-scatter grid employed. A matrix-based material decomposition (MBMD) (equivalent to weighted logarithmic subtraction), a matrix-based material decomposition with polynomial beam-hardening pre-correction (MBMD-PBC), and a projection domain decomposition were evaluated. The decomposition accuracy of simulated data was evaluated by comparing the bone and soft tissue images to the ground truth using the structural similarity index measure (SSIM). Simulation results were supported by experiments using a commercially available triple-layer FPDT retrofitted to a digital X-ray system.
Results: Independent of the selected decomposition algorithm, uncorrected scatter leads to negative bone estimates, resulting in small SSIM values and bone structures remaining visible in soft tissue images. Even with a 13:1 anti-scatter grid employed, bone images continue to show negative bone estimates, and bone structures appear in soft tissue images. Adipose tissue, by contrast, has an almost negligible effect.
Conclusions: In a contact scan, scattered radiation leads to negative bone contrast estimates in the bone images and remaining bone contrast in the soft tissue images. Therefore, accurate scatter estimation and correction algorithms are essential when aiming for material decomposition using image data obtained with a multi-layer FPDT.
Citations: 0
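The MBMD the authors describe is equivalent to weighted logarithmic subtraction: in the scatter-free two-material model, the log-attenuation at each energy is a weighted sum of bone and soft-tissue thicknesses, so a weighted difference of the two log images cancels one material. A minimal sketch with made-up effective attenuation coefficients (real values depend on the spectra and detector layers):

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) for the
# low- and high-energy channels; illustrative values only.
MU_BONE = {"low": 0.50, "high": 0.30}
MU_SOFT = {"low": 0.20, "high": 0.18}

def log_signal(t_bone, t_soft, energy):
    """Scatter-free log-attenuation: ln(I0/I) = mu_b*t_b + mu_s*t_s."""
    return MU_BONE[energy] * t_bone + MU_SOFT[energy] * t_soft

def bone_image(log_low, log_high):
    """Weighted log subtraction canceling soft tissue; the result
    is proportional to bone thickness."""
    w = MU_SOFT["low"] / MU_SOFT["high"]
    return log_low - w * log_high

# Phantom: 10 cm soft tissue everywhere, 1 cm bone in the right pixel
t_bone = np.array([[0.0, 1.0]])
t_soft = np.full_like(t_bone, 10.0)
low = log_signal(t_bone, t_soft, "low")
high = log_signal(t_bone, t_soft, "high")
bone = bone_image(low, high)
print(bone)  # ~0 where there is no bone, positive over the bone slab
```

Unmodeled scatter adds a signal to I that does not follow this log-linear model, which is exactly why the paper observes negative bone estimates and residual bone contrast when scatter is uncorrected.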
Using a limited field of view to improve training for pulmonary nodule detection on radiographs.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-04-25 | DOI: 10.1117/1.JMI.12.5.051804
Samual K Zenger, Rishabh Agarwal, William F Auffermann
Purpose: Perceptual error is a significant cause of medical errors in radiology. Given the amount of information in a medical image, an image interpreter may become distracted by information unrelated to their search pattern. This may be especially challenging for novices. We aim to examine teaching medical trainees to evaluate chest radiographs (CXRs) for pulmonary nodules on limited field-of-view (LFOV) images, with the field of view restricted to the lungs and mediastinum.
Approach: Healthcare trainees with limited exposure to interpreting images were asked to identify pulmonary nodules on CXRs, half of which contained nodules. The control and experimental groups evaluated two sets of CXRs. After the first set, the experimental group was trained to evaluate LFOV images, and both groups were again asked to assess CXRs for pulmonary nodules. Participants were surveyed after this educational session about their thoughts on the training and their symptoms of computer vision syndrome (CVS).
Results: Both the experimental and control groups improved significantly in pulmonary nodule identification, but the improvement was larger in the experimental group (p-value = 0.022). Survey responses were uniformly positive, and each question was statistically significant (all p-values < 0.001).
Conclusions: Our results show that LFOV images may be helpful when teaching trainees specific high-yield perceptual tasks, such as nodule identification. The use of LFOV images was associated with reduced symptoms of CVS.
Citations: 0
DMM-UNet: dual-path multi-scale Mamba UNet for medical image segmentation.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 | Epub Date: 2025-09-29 | DOI: 10.1117/1.JMI.12.5.054003
Liquan Zhao, Mingxia Cao, Yanfei Jia
Purpose: State space models have shown promise in medical image segmentation by modeling long-range dependencies with linear complexity. However, they are limited in their ability to capture local features, which hinders their capacity to extract multiscale details and integrate global and local contextual information effectively. To address these shortcomings, we propose the dual-path multi-scale Mamba UNet (DMM-UNet) model.
Approach: This architecture facilitates deep fusion of local and global features through multi-scale modules within a U-shaped encoder-decoder framework. First, we introduce the multi-scale channel attention selective scanning block in the encoder, which combines global selective scanning with multi-scale channel attention to model both long-range and local dependencies simultaneously. Second, we design the spatial attention selective scanning block for the decoder. This block integrates global scanning with spatial attention mechanisms, enabling precise aggregation of semantic features through gated weighting. Finally, we develop the multi-dimensional collaborative attention layer to extract complementary attention weights across height, width, and channel dimensions, facilitating cross-space-channel feature interactions.
Results: Experiments were conducted on the ISIC17, ISIC18, Synapse, and ACDC datasets. The Dice similarity coefficient reached 89.88% on ISIC17, 90.52% on ISIC18, 83.07% on Synapse, and 92.60% on ACDC; other evaluation metrics showed similarly strong performance.
Conclusions: The DMM-UNet model effectively addresses the shortcomings of state space models by integrating local and global features, improving segmentation performance, and offering enhanced multiscale feature fusion for medical image segmentation tasks.
Citations: 0
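The Dice similarity coefficient reported in the Results is the standard overlap metric for segmentation masks, 2|A∩B| / (|A| + |B|). A minimal numpy sketch for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks:
    2*|A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# Toy predicted and ground-truth masks
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```

A perfect segmentation scores 1.0; the multi-class scores reported in the paper are typically Dice averaged over the foreground classes.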
Deep-learning-based washout classification for decision support in contrast-enhanced ultrasound examinations of the liver.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-07-01 | Epub Date: 2025-07-22 | DOI: 10.1117/1.JMI.12.4.044502
Hannah Strohm, Sven Rothlübbers, Jürgen Jenne, Dirk-André Clevert, Thomas Fischer, Niklas Hitschrich, Bernhard Mumm, Paul Spiesecke, Matthias Günther
Purpose: Contrast-enhanced ultrasound (CEUS) is a reliable tool for diagnosing focal liver lesions, which appear ambiguous in normal B-mode ultrasound. However, interpretation of the dynamic contrast sequences can be challenging, hindering the widespread application of CEUS. We investigate the use of a deep-learning-based image classifier for determining the diagnosis-relevant washout feature from CEUS acquisitions.
Approach: We introduce a data representation that is agnostic to data heterogeneity regarding lesion size, subtype, and sequence length. An image-based classifier is then exploited for washout classification. Strategies to cope with sparse annotations and motion are systematically evaluated, as are the potential benefits of using a perfusion model to cover missing time points.
Results: Results indicate decent performance, comparable to studies found in the literature, with a maximum balanced accuracy of 84.0% on the validation set and 82.0% on the test set. Correlation-based frame selection yielded improvements in classification performance, whereas further motion compensation showed no benefit in the conducted experiments.
Conclusions: Deep-learning-based washout classification is shown to be feasible in principle. It offers a simple form of interpretability compared with benign-versus-malignant classification. The concept of classifying individual features instead of the diagnosis itself could be extended to other features, such as arterial inflow behavior. The main factors distinguishing this work from existing approaches are the data representation and task formulation, as well as a large dataset of 500 liver lesions from two centers for algorithmic development and testing.
Citations: 0