Journal of Medical Imaging: Latest Articles

Automatic hepatic tumor segmentation in intra-operative ultrasound: a supervised deep-learning approach.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-03-12 | DOI: 10.1117/1.JMI.11.2.024501
Tiziano Natali, Andrey Zhylka, Karin Olthof, Jasper N Smit, Tarik R Baetens, Niels F M Kok, Koert F D Kuhlmann, Oleksandra Ivashchenko, Theo J M Ruers, Matteo Fusaglia
Purpose: To train and evaluate a supervised deep-learning model for the segmentation of hepatic tumors from intraoperative ultrasound (iUS) images, with the aim of improving the accuracy of tumor margin assessment during liver surgeries and the detection of lesions during colorectal surgeries.

Approach: In this retrospective study, a U-Net was trained with the nnU-Net framework in different configurations to segment colorectal liver metastases (CRLM) from iUS. The model was trained on B-mode intraoperative hepatic US images hand-labeled by an expert clinician and tested on an independent set of similar images. The average age of the study population was 61.9 ± 9.9 years. Ground truth for the test set was provided by a radiologist, and three additional delineation sets were used to compute inter-observer variability.

Results: The presented model achieved a DSC of 0.84 (p = 0.0037), comparable to the scores of the expert human raters. The model segmented hypoechoic and mixed lesions more accurately (DSC of 0.89 and 0.88, respectively) than hyperechoic and isoechoic ones (DSC of 0.70 and 0.60, respectively), missing only lesions that were isoechoic or larger than 20 mm in diameter (8% of the tumors). Including extra margins of probable tumor tissue around the lesions in the training ground truth resulted in a lower DSC of 0.75 (p = 0.0022).

Conclusion: The model can accurately segment hepatic tumors from iUS images and has the potential to speed up resection margin definition during surgery and lesion detection in screening by automating iUS assessment.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10929734/pdf/
Citations: 0
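Several of the segmentation papers in this listing report the Dice similarity coefficient (DSC). For readers unfamiliar with the metric, here is a minimal sketch of how a DSC such as the 0.84 above is computed from binary masks; the toy arrays are illustrative, not study data:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 2D masks: a 2x2 predicted square vs. ground truth shifted by one column
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 2:4] = 1
print(dice_coefficient(pred, truth))  # 2 overlapping px -> 2*2/(4+4) = 0.5
```

A DSC of 1.0 means perfect overlap with the reference delineation; 0.0 means none.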
Detecting bone lesions in X-ray under diverse acquisition conditions.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-03-19 | DOI: 10.1117/1.JMI.11.2.024502
Tal Zimbalist, Ronnie Rosen, Keren Peri-Hanania, Yaron Caspi, Bar Rinott, Carmel Zeltser-Dekel, Eyal Bercovich, Yonina C Eldar, Shai Bagon
Purpose: The diagnosis of primary bone tumors is challenging because the initial complaints are often non-specific, yet early detection of bone cancer is crucial for a favorable prognosis. Lesions may be found incidentally on radiographs obtained for other reasons, but these early indications are often missed. We propose an automatic algorithm that detects bone lesions in conventional radiographs to facilitate early diagnosis. Detecting lesions in such radiographs is challenging for two reasons. First, the prevalence of bone cancer is very low, so any method must show high precision to avoid a prohibitive number of false alarms. Second, radiographs taken in health maintenance organizations (HMOs) or emergency departments (EDs) are inherently diverse due to different X-ray machines, technicians, and imaging protocols, which poses a major challenge to any automatic analysis method.

Approach: We propose training an off-the-shelf object detection algorithm to detect lesions in radiographs. The novelty of our approach stems from a dedicated preprocessing stage that directly addresses the diversity of the data: self-supervised region-of-interest detection using a vision transformer (ViT), followed by foreground-based histogram equalization that enhances contrast in the relevant regions only.

Results: We evaluate our method in a retrospective study of bone tumors on radiographs acquired from January 2003 to December 2018 under diverse acquisition protocols. Our method obtains 82.43% sensitivity at a 1.5% false-positive rate and surpasses existing preprocessing methods. For lesion detection, it achieves 82.5% accuracy and an IoU of 0.69.

Conclusions: The proposed preprocessing method enables effective coping with the inherent diversity of radiographs acquired in HMOs and EDs.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10950029/pdf/
Citations: 0
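The foreground-based histogram equalization described in the approach can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the ViT-based region-of-interest detection is replaced by a given mask, and an 8-bit image is assumed.

```python
import numpy as np

def foreground_histogram_equalization(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Equalize contrast using only pixels inside a foreground mask.

    The intensity CDF is built from foreground pixels only, so background
    (collimation borders, padding) does not distort the mapping; background
    pixels are left unchanged.
    """
    out = img.copy()
    fg = img[mask > 0]
    hist, _ = np.histogram(fg, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()          # normalized CDF over foreground
    lut = np.round(cdf * 255).astype(np.uint8)
    out[mask > 0] = lut[img[mask > 0]]
    return out

rng = np.random.default_rng(0)
img = rng.integers(100, 130, size=(64, 64)).astype(np.uint8)  # low-contrast image
mask = np.zeros_like(img)
mask[16:48, 16:48] = 1
eq = foreground_histogram_equalization(img, mask)
print(eq[mask > 0].max())  # foreground intensities stretched up to 255
```

Restricting the CDF to the foreground is what makes the enhancement robust to the diverse framing and collimation of HMO/ED radiographs.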
Machine learning based prediction of image quality in prostate MRI using rapid localizer images.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | DOI: 10.1117/1.JMI.11.2.026001
Abdullah Al-Hayali, Amin Komeili, Azar Azad, Paul Sathiadoss, Nicola Schieda, Eranga Ukwatta
Purpose: The diagnostic performance of prostate MRI depends on high-quality imaging, and prostate MRI quality is inversely proportional to the amount of rectal gas and distention. Early detection of a poor-quality exam may enable intervention to remove gas, or exam rescheduling, saving time. We developed a machine learning based method that predicts the quality of yet-to-be-acquired MRI images solely from the rapid localizer sequence, which can be acquired in a few seconds.

Approach: The dataset consists of 213 prostate sagittal T2-weighted (T2W) MRI localizer images (147 for training and 64 for testing) with rectal content manually labeled by an expert radiologist. Each localizer contains seven two-dimensional (2D) slices of the patient, accompanied by manual segmentations of the rectum for each slice. Cascaded and end-to-end deep learning models were used to predict the quality of the yet-to-be-acquired T2W, DWI, and apparent diffusion coefficient (ADC) images. Predictions were compared to quality scores determined by the experts using the area under the receiver operating characteristic curve and the intra-class correlation coefficient.

Results: In the test set of 64 patients, optimal versus suboptimal exams occurred in 95.3% (61/64) versus 4.7% (3/64) for T2W, 90.6% (58/64) versus 9.4% (6/64) for DWI, and 89.1% (57/64) versus 10.9% (7/64) for ADC. The best performing segmentation model was a 2D U-Net with a ResNet-34 encoder and ImageNet weights, and the best performing classifier was radiomics based.

Conclusions: A radiomics based classifier applied to localizer images achieves accurate diagnosis of subsequent image quality for T2W, DWI, and ADC prostate MRI sequences.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10905647/pdf/
Citations: 0
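The quality predictions above are compared to expert scores via the area under the ROC curve. A minimal sketch using the rank-based (Mann-Whitney) identity, with made-up scores and labels rather than study data:

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier scores; label 1 = suboptimal exam correctly flagged
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.3])
labels = np.array([1, 1, 0, 1, 0, 0])
print(roc_auc(scores, labels))  # 8 of 9 positive-negative pairs ranked correctly
```

The pairwise formulation is O(n^2) but exact, which is fine at this cohort size (64 test patients).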
Simulation of acquisition shifts in T2 weighted fluid-attenuated inversion recovery magnetic resonance images to stress test artificial intelligence segmentation networks.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-04-24 | DOI: 10.1117/1.JMI.11.2.024013
Christiane Posselt, Mehmet Yigit Avci, Mehmet Yigitsoy, Patrick Schuenke, Christoph Kolbitsch, Tobias Schaeffter, Stefanie Remmele
Purpose: To provide a simulation framework for routine neuroimaging test data that allows "stress testing" of deep segmentation networks against acquisition shifts that commonly occur in clinical practice for T2-weighted (T2w) fluid-attenuated inversion recovery magnetic resonance imaging protocols.

Approach: The approach simulates "acquisition shift derivatives" of MR images based on MR signal equations. Experiments comprise validation of the simulated images against real MR scans and example stress tests on state-of-the-art multiple sclerosis lesion segmentation networks, exploring a generic model function that describes the F1 score as a function of the contrast-affecting sequence parameters echo time (TE) and inversion time (TI).

Results: The differences between real and simulated images range up to 19% in gray and white matter for extreme parameter settings. For the segmentation networks under test, the F1 score dependency on TE and TI is well described by quadratic model functions (R² > 0.9). The coefficients of the model functions indicate that changes in TE have more influence on model performance than changes in TI.

Conclusions: We show that these deviations are in the range of values that may be caused by erroneous or individual differences in relaxation times, as described in the literature. The coefficients of the F1 model function allow a quantitative comparison of the influences of TE and TI. Limitations arise mainly from tissues with a low baseline signal (such as cerebrospinal fluid) and from protocols containing contrast-affecting measures that cannot be modeled due to missing information in the DICOM header.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11042016/pdf/
Citations: 0
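The quadratic model function with R² > 0.9 can be illustrated in one dimension with an ordinary least-squares fit. The (TE, F1) pairs below are hypothetical stand-ins for the paper's stress-test measurements, chosen only to show the fitting and R² computation:

```python
import numpy as np

# Hypothetical (TE in ms, F1) pairs standing in for stress-test results
te = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
f1 = np.array([0.78, 0.82, 0.84, 0.83, 0.79])

# Quadratic model F1(TE) = a*TE^2 + b*TE + c, fit by least squares
coeffs = np.polyfit(te, f1, deg=2)
pred = np.polyval(coeffs, te)

# Coefficient of determination R^2
ss_res = np.sum((f1 - pred) ** 2)
ss_tot = np.sum((f1 - f1.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(coeffs[0] < 0, r2 > 0.9)  # concave curve with a good fit
```

In the paper the fit is over both TE and TI, and the relative magnitude of the fitted coefficients is what supports the claim that TE matters more than TI.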
MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-04-03 | DOI: 10.1117/1.JMI.11.2.024504
Karen Drukker, Berkman Sahiner, Tingting Hu, Grace Hyun Kim, Heather M Whitney, Natalie Baughan, Kyle J Myers, Maryellen L Giger, Michael McNitt-Gray
Purpose: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource that assists researchers in evaluating the performance of their medical image analysis ML algorithms.

Approach: An interactive decision tree, MIDRC-MetricTree, has been developed, organized by the type of task the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of algorithm output; and (2) based on the user input, recommendations are provided for appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos.

Results: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output, and for reference standards with negligible or non-negligible variability and unreliability.

Conclusions: The publicly available decision tree is a resource that assists researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10990563/pdf/
Citations: 0
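The core idea of mapping task type to recommended metrics can be caricatured as a lookup table. The metric lists below are generic textbook choices for each task family, not MIDRC-MetricTree's actual recommendations, and the real tool also branches on reference-standard quality and output type:

```python
# Illustrative task -> metric lookup, loosely mirroring the five task
# branches named in the abstract. Lists are generic, not from the tool.
RECOMMENDATIONS = {
    "classification": ["AUC (ROC)", "sensitivity/specificity", "calibration"],
    "detection/localization": ["FROC", "average precision"],
    "segmentation": ["Dice similarity coefficient", "Hausdorff distance"],
    "time-to-event": ["concordance index (c-index)"],
    "estimation": ["bias", "root mean squared error", "limits of agreement"],
}

def recommend(task: str) -> list[str]:
    """Return candidate performance metrics for a given task type."""
    try:
        return RECOMMENDATIONS[task]
    except KeyError:
        raise ValueError(f"unknown task {task!r}; choose from {sorted(RECOMMENDATIONS)}")

print(recommend("segmentation"))
```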
Empirical assessment of the assumptions of ComBat with diffusion tensor imaging.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-04-17 | DOI: 10.1117/1.JMI.11.2.024011
Michael E Kim, Chenyu Gao, Leon Y Cai, Qi Yang, Nancy R Newlin, Karthik Ramadass, Angela Jefferson, Derek Archer, Niranjana Shashikumar, Kimberly R Pechman, Katherine A Gifford, Timothy J Hohman, Lori L Beason-Held, Susan M Resnick, Stefan Winzeck, Kurt G Schilling, Panpan Zhang, Daniel Moyer, Bennett A Landman
Purpose: Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that provides unique information about white matter microstructure in the brain but is susceptible to confounding effects introduced by scanner or acquisition differences. ComBat is a leading approach for addressing these site biases. However, despite its frequent use for harmonization, ComBat's robustness toward site dissimilarities and overall cohort size has not yet been evaluated for DTI.

Approach: As a baseline, we match N = 358 participants from two sites to create a "silver standard" that simulates a cohort for multi-site harmonization. Across sites, we harmonize mean fractional anisotropy (FA) and mean diffusivity, calculated from participant DTI data, for the regions of interest defined by the JHU EVE-Type III atlas. We bootstrap 10 iterations at 19 levels of total sample size, 10 levels of sample-size imbalance between sites, and 6 levels of mean age difference between sites to quantify (i) β_AGE, the linear regression coefficient of the relationship between FA and age; (ii) γ̂*_sf, the ComBat-estimated site shift; and (iii) δ̂*_sf, the ComBat-estimated site scaling. We characterize the reliability of ComBat by evaluating the root mean squared error in these three metrics and examine whether the reliability of ComBat correlates with violations of its assumptions.

Results: ComBat remains well behaved for β_AGE when N > 162 and when the mean age difference is less than 4 years. The assumptions of the ComBat model regarding the normality of residual distributions are not violated as the model becomes unstable.

Conclusion: Prior to harmonizing DTI data with ComBat, the input cohort should be examined for size and for the covariate distribution of each site. Direct assessment of residual distributions is less informative about stability than bootstrap analysis. We caution against using ComBat in situations that do not conform to the above thresholds.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11034156/pdf/
Citations: 0
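ComBat estimates an additive site shift (γ̂*_sf) and a multiplicative site scaling (δ̂*_sf) per feature. The sketch below is a stripped-down location/scale analogue of those two steps, without ComBat's empirical-Bayes shrinkage and without preserving covariates such as age; the FA-like values are synthetic:

```python
import numpy as np

def naive_site_harmonize(values: np.ndarray, sites: np.ndarray) -> np.ndarray:
    """Remove per-site location (shift) and scale effects by mapping each
    site's values onto the pooled mean and standard deviation. A crude
    analogue of ComBat's gamma (shift) and delta (scale) adjustment,
    without empirical-Bayes shrinkage or covariate preservation."""
    grand_mean = values.mean()
    pooled_sd = values.std()
    out = np.empty_like(values, dtype=float)
    for s in np.unique(sites):
        idx = sites == s
        centered = (values[idx] - values[idx].mean()) / values[idx].std()
        out[idx] = centered * pooled_sd + grand_mean
    return out

# Synthetic FA-like values from two sites with different means and variances
rng = np.random.default_rng(1)
fa = np.concatenate([rng.normal(0.45, 0.02, 200), rng.normal(0.50, 0.04, 200)])
site = np.array([0] * 200 + [1] * 200)
h = naive_site_harmonize(fa, site)
print(abs(h[site == 0].mean() - h[site == 1].mean()) < 1e-9)  # site means aligned
```

Because this toy version discards covariate structure entirely, it also illustrates why the paper's caution matters: with small or covariate-imbalanced sites, the per-site mean and variance estimates (and hence the correction) become unreliable.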
Cascaded cross-attention transformers and convolutional neural networks for multi-organ segmentation in male pelvic computed tomography.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-04-08 | DOI: 10.1117/1.JMI.11.2.024009
Rahul Pemmaraju, Gayoung Kim, Lina Mekki, Daniel Y Song, Junghoon Lee
Purpose: Segmentation of the prostate and surrounding organs at risk from computed tomography is required for radiation therapy treatment planning. We propose an automatic two-step deep learning-based segmentation pipeline consisting of an initial multi-organ segmentation network for organ localization followed by organ-specific fine segmentation.

Approach: Initial segmentation of all target organs is performed with a hybrid convolutional-transformer model, an axial cross-attention U-Net. Its output allows region-of-interest computation and is used to crop tightly around individual organs for organ-specific fine segmentation. Information from this network is also propagated to the fine segmentation stage through an image enhancement module that highlights regions of the original image that might be difficult to segment. Organ-specific fine segmentation is performed on these cropped and enhanced images to produce the final output segmentation.

Results: We apply the proposed approach to segment the prostate, bladder, rectum, seminal vesicles, and femoral heads from male pelvic computed tomography (CT). On a held-out test set of 30 images, our two-step pipeline outperformed other deep learning-based multi-organ segmentation algorithms, achieving average Dice similarity coefficients (DSC) of 0.836 ± 0.071 (prostate), 0.947 ± 0.038 (bladder), 0.828 ± 0.057 (rectum), 0.724 ± 0.101 (seminal vesicles), and 0.933 ± 0.020 (femoral heads).

Conclusions: Our results demonstrate that a two-step pipeline with initial multi-organ segmentation and additional fine segmentation can delineate male pelvic CT organs well. The utility of the additional fine segmentation layer is most noticeable in challenging cases, where our pipeline produces noticeably more accurate and less erroneous results than other state-of-the-art methods.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11001270/pdf/
Citations: 0
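The "crop tightly around individual organs" step of such a two-step pipeline can be sketched as a bounding-box crop around the coarse mask. The margin value here is an assumption for illustration, not a parameter from the paper:

```python
import numpy as np

def crop_around_label(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 2):
    """Tight bounding-box crop around a coarse organ mask, padded by a safety
    margin and clipped to the volume bounds, for an organ-specific
    fine-segmentation stage. Returns the crop and the slices used."""
    coords = np.argwhere(coarse_mask)  # voxel indices of the coarse organ mask
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[slices], slices

vol = np.zeros((32, 32, 32))
mask = np.zeros_like(vol, dtype=bool)
mask[10:15, 12:20, 8:11] = True  # toy coarse prediction for one organ
crop, sl = crop_around_label(vol, mask, margin=2)
print(crop.shape)  # (9, 12, 7): mask extent plus a 2-voxel margin per side
```

Cropping lets the fine network spend its full input resolution on one organ, which is why the gain is largest for small structures such as the seminal vesicles.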
Systematic evaluation of MRI-based characterization of tumor-associated vascular morphology and hemodynamics via a dynamic digital phantom.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-03-08 | DOI: 10.1117/1.JMI.11.2.024002
Chengyue Wu, David A Hormuth, Ty Easley, Federico Pineda, Gregory S Karczmar, Thomas E Yankeelov
Purpose: Validation of quantitative imaging biomarkers is challenging due to the difficulty of measuring the ground truth of the target biological process. A digital phantom-based framework is established to systematically validate the quantitative characterization of tumor-associated vascular morphology and hemodynamics based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).

Approach: A digital phantom provides a ground-truth vascular system within which 45 synthetic tumors are simulated. Morphological analysis is performed on high-spatial-resolution DCE-MRI data (spatial/temporal resolution = 30 to 300 μm / 60 s) to determine the accuracy of locating the arterial inputs of tumor-associated vessels (TAVs). Hemodynamic analysis is then performed on the combination of high-spatial-resolution and high-temporal-resolution DCE-MRI data (spatial/temporal resolution = 60 to 300 μm / 1 to 10 s), determining the accuracy of estimating tumor-associated blood pressure, vascular extraction rate, interstitial pressure, and interstitial flow velocity.

Results: The observed effects of acquisition settings demonstrate that, when optimizing the DCE-MRI protocol for morphological analysis, increasing the spatial resolution is helpful but not necessary: the location and arterial input of TAVs can be recovered with high accuracy even at the lowest investigated spatial resolution. When optimizing for hemodynamic analysis, increasing the spatial resolution of the images used for vessel segmentation is essential, and the spatial and temporal resolutions of the images used for kinetic parameter fitting require simultaneous optimization.

Conclusion: An in silico validation framework was generated to systematically quantify the effects of image acquisition settings on the ability to accurately estimate tumor-associated characteristics.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10921778/pdf/
Citations: 0
AMS-U-Net: automatic mass segmentation in digital breast tomosynthesis via U-Net.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-03-23 | DOI: 10.1117/1.JMI.11.2.024005
Ahmad Qasem, Genggeng Qin, Zhiguo Zhou
Purpose: The objective of this study was to develop a fully automatic mass segmentation method, AMS-U-Net, for digital breast tomosynthesis (DBT), a popular breast cancer screening imaging modality. The aim was to address the challenges posed by the increasing number of slices in DBT, which leads to a higher mass-contouring workload and decreased treatment efficiency.

Approach: The study used 50 slices from different DBT volumes for evaluation. The AMS-U-Net approach consists of four stages: image pre-processing, AMS-U-Net training, image segmentation, and post-processing. Model performance was evaluated with the true positive ratio (TPR), false positive ratio (FPR), F-score, intersection over union (IoU), and 95% Hausdorff distance (in pixels), as these metrics are appropriate for datasets with class imbalance.

Results: The model achieved a TPR of 0.911, an FPR of 0.003, an F-score of 0.911, an IoU of 0.900, and a 95% Hausdorff distance of 5.82 pixels.

Conclusions: The AMS-U-Net model demonstrated impressive visual and quantitative results, achieving high mass-segmentation accuracy without human interaction. This capability has the potential to significantly increase clinical efficiency and workflow in DBT for breast cancer screening.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960181/pdf/
Citations: 0
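The 95% Hausdorff distance reported above is the 95th percentile of symmetric point-to-set distances between two masks, which makes it robust to a few outlier pixels. A brute-force sketch on toy masks (real implementations typically restrict to surface voxels and use KD-trees for speed):

```python
import numpy as np

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance (in pixels) between two
    binary masks, via brute-force point-to-set distances over all mask
    points (surface extraction omitted for brevity)."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # distance of each A point to its nearest B point
    b_to_a = d.min(axis=0)  # and vice versa
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

a = np.zeros((16, 16), dtype=bool)
a[4:8, 4:8] = True
b = np.zeros((16, 16), dtype=bool)
b[5:9, 4:8] = True  # same square shifted down one row
print(hd95(a, b))  # 1.0: worst mismatches are one pixel apart
```

Unlike DSC, this metric is in physical units (pixels here), so the paper's 5.82 px directly bounds how far the predicted contour strays from the reference.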
SpecReFlow: an algorithm for specular reflection restoration using flow-guided video completion.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-03-01 | Epub Date: 2024-04-24 | DOI: 10.1117/1.JMI.11.2.024012
Haoli Yin, Rachel Eimen, Daniel Moyer, Audrey K Bowden
Purpose: Specular reflections (SRs) are highlight artifacts commonly found in endoscopy videos that can severely disrupt a surgeon's observation and judgment. Despite numerous attempts to restore SR, existing methods are inefficient and time consuming and can lead to false clinical interpretations. We propose the first complete deep-learning solution, SpecReFlow, to detect and restore SR regions in endoscopy video with spatial and temporal coherence.

Approach: SpecReFlow consists of three stages: (1) an image preprocessing stage to enhance contrast, (2) a detection stage that indicates where SR regions are present, and (3) a restoration stage that replaces SR pixels with the accurate underlying tissue structure. Our restoration approach uses optical flow to seamlessly propagate color and structure from other frames of the endoscopy video.

Results: Comprehensive quantitative and qualitative tests of each stage show that SpecReFlow performs better than previous detection and restoration methods. Our detection stage achieves a Dice score of 82.8% and a sensitivity of 94.6%, and our restoration stage successfully combines temporal with spatial information for more accurate restorations than existing techniques.

Conclusions: SpecReFlow is a first-of-its-kind solution that combines temporal and spatial information for effective detection and restoration of SR regions, surpassing previous methods that rely on single-frame spatial information. Future work will optimize SpecReFlow for real-time applications. SpecReFlow is a software-only solution for restoring image content lost to SR, making it readily deployable in existing clinical settings to improve endoscopy video quality for accurate diagnosis and treatment.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11042492/pdf/
Citations: 0
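The flow-guided propagation idea, sampling the underlying tissue from a neighboring frame at flow-displaced positions, can be sketched as below. The flow-field convention (channel 0 = x displacement, channel 1 = y) and the nearest-neighbor sampling are assumptions of this toy example, not SpecReFlow's actual implementation:

```python
import numpy as np

def propagate_with_flow(prev_frame, cur_frame, flow, sr_mask):
    """Fill specular-reflection (SR) pixels in the current frame by sampling
    the previous frame at flow-displaced positions (nearest neighbor)."""
    h, w = cur_frame.shape[:2]
    out = cur_frame.copy()
    ys, xs = np.nonzero(sr_mask)
    # Follow the flow back into the previous frame, clipped to image bounds
    src_y = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    out[ys, xs] = prev_frame[src_y, src_x]
    return out

prev = np.tile(np.arange(8, dtype=float), (8, 1))  # horizontal tissue gradient
cur = prev.copy()
cur[3, 3] = 255.0                                  # saturated SR pixel
mask = np.zeros((8, 8), dtype=bool)
mask[3, 3] = True
flow = np.zeros((8, 8, 2))                         # static scene: zero flow
restored = propagate_with_flow(prev, cur, flow, mask)
print(restored[3, 3])  # 3.0, recovered from the previous frame
```

The point of using flow rather than single-frame inpainting is exactly what this toy shows: the filled value comes from real observed tissue in another frame, not from a spatial guess.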