Journal of Medical Imaging: Latest Articles

Applications of mixed reality with medical imaging for training and clinical practice.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-12-26 | DOI: 10.1117/1.JMI.11.6.062608
Authors: Alexa R Lauinger, Meagan McNicholas, Matthew Bramlet, Maria Bederson, Bradley P Sutton, Caroline G L Cao, Irfan S Ahmad, Carlos Brown, Shandra Jamison, Sarita Adve, John Vozenilek, Jim Rehg, Mark S Cohen

Purpose: This review summarizes the current use of extended reality (XR), including virtual reality (VR), mixed reality, and augmented reality (AR), in medicine, from medical imaging to training to preoperative planning. It covers the integration of these technologies into clinical practice and medical training while discussing the challenges and future opportunities in this sphere, with the aim of encouraging more physicians to collaborate on integrating medicine and technology.
Approach: The review was written by experts in the field based on their knowledge and on recent publications on extended reality in medicine.
Results: XR, including VR, mixed reality, and AR, is increasingly used in surgery, both for preoperative planning and for intraoperative procedures. These technologies are also promising means of improving education at every level of physician training. However, barriers to widespread adoption remain, including human factors, technological challenges, and regulatory issues.
Conclusions: Given current use, adoption of these technologies is likely to continue growing over the next decade. To support the development and integration of XR into medicine, it is important for academic groups to collaborate with industry and regulatory agencies; such joint projects will help address current limitations and mutually benefit both fields.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11669596/pdf/
Citations: 0
Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-10-08 | DOI: 10.1117/1.JMI.11.6.062606
Authors: Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene

Purpose: Virtual reality (VR) and augmented reality (AR) have led to significant advances in cardiac preoperative planning. However, a comprehensive multi-user, multi-device mixed reality application suitable for multidisciplinary team meetings is still lacking.
Approach: A multi-user, multi-device mixed reality application supporting both AR and VR implementations was developed. Technical validation involved a standardized testing protocol and comparison of AR and VR measurements for absolute error and time. Preclinical validation engaged experts in interventional cardiology to evaluate clinical applicability prior to clinical validation. Clinical validation compared patient-specific VR measurements for five patients against standard computed tomography (CT) for preoperative planning. Questionnaires were used at all stages for subjective evaluation.
Results: Technical validation, comprising 106 size measurements, demonstrated an absolute median error of 0.69 mm (0.25 to 1.18 mm) compared with ground truth. The entire task took 892 ± 407 s on average, with VR measurements faster than AR (804 ± 483 s versus 957 ± 257 s, P = 0.045). In clinical validation on five preoperative patients, there was no statistically significant difference between paired CT and VR measurements (0.58 [95% CI, −1.58 to 2.74], P = 0.586). Questionnaires showed unanimous agreement on user-friendliness, effectiveness, and clinical value.
Conclusions: The mixed reality application, validated through technical, preclinical, and clinical assessments, demonstrates precision and user-friendliness. Further research is needed to validate its generalizability and impact on patient outcomes.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11460359/pdf/
Citations: 0
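The clinical validation above compares paired CT and VR measurements of the same targets and reports a mean difference with a 95% CI and a P-value. A minimal sketch of that paired comparison with SciPy, using entirely hypothetical measurement values (the paper's raw data are not given):

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (mm) of the same anatomical targets,
# one value from CT and one from the VR application.
ct = np.array([24.1, 18.7, 30.2, 22.5, 27.9, 19.4])
vr = np.array([24.8, 18.2, 30.9, 23.1, 27.5, 20.0])

# Paired t-test: is the mean VR-minus-CT difference distinguishable from zero?
t_stat, p_value = stats.ttest_rel(vr, ct)

# 95% confidence interval for the mean paired difference.
diff = vr - ct
ci = stats.t.interval(0.95, len(diff) - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
print(f"mean difference = {diff.mean():.2f} mm, P = {p_value:.3f}")
```

A non-significant P-value here, as in the study, indicates no detectable systematic bias between the two modalities, not that they are proven equivalent.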
Polarimetry terahertz imaging of human breast cancer surgical specimens.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-12-05 | DOI: 10.1117/1.JMI.11.6.065503
Authors: Nikita Gurjar, Keith Bailey, Magda El-Shenawee

Purpose: We investigate terahertz (THz) polarimetry imaging of seven human breast cancer surgical specimens. The goal is to enhance image contrast between adjacent tissue types, cancer, healthy collagen, and fat, in excised breast tumors. Based on the random growth of cancer and its invasion of surrounding healthy breast tissue, we hypothesize that cancerous cells interact with the THz electric field differently than healthy cells, a difference best captured using multiple polarizations rather than a single polarization.
Approach: Time-domain pulsed signals are experimentally collected from each pixel of the specimen in horizontal-horizontal, vertical-horizontal, vertical-vertical, and horizontal-vertical polarizations. The time-domain pulses are transformed to the frequency domain to obtain the power spectra and 16 Mueller matrix images. Whole-slide pathology imaging was used to interpret and label all images.
Results: The cross- and co-polarization power spectrum images demonstrated a strong dependency on tissue orientation with respect to the emitted and detected electric fields. At a 130-deg rotation angle of the scanned samples, the detector recorded the strongest reflected signal in cross-polarization. The Mueller matrix images consistently showed patterns in fresh and block tissues, confirming differentiation between tissue types in breast tumor specimens.
Conclusions: THz polarimetry imaging shows potential for improving image contrast in excised tumor tissues compared with single-polarization imaging. Cross-polarization signals had smaller amplitudes than co-polarized signals, but averaging the signal during measurements substantially improved the images, and averaging the frequency-domain images and Mueller matrix elements over frequency in post-processing further improved contrast. Some patterns in the Mueller matrix images were difficult to interpret, so further investigation of the Mueller matrix and its physiological interpretation in breast tumor tissue is needed.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11619717/pdf/
Citations: 0
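The approach above transforms per-pixel time-domain pulses into frequency-domain power spectra for each polarization channel. A minimal NumPy sketch of that step for one pixel, with synthetic pulses (sampling rate, pulse shape, and the weaker cross-polarized amplitude are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e12                      # assumed 1 THz sampling rate
t = np.arange(2048) / fs       # time axis, ~2 ns record

def power_spectrum(pulse, fs):
    """One-sided power spectrum of a real time-domain pulse."""
    spectrum = np.fft.rfft(pulse)
    freqs = np.fft.rfftfreq(len(pulse), d=1 / fs)
    return freqs, np.abs(spectrum) ** 2

# Synthetic co- (HH) and cross- (HV) polarized pulses for one pixel;
# the cross-polarized return is modeled weaker, as reported in the study.
envelope = np.exp(-((t - 1e-9) / 5e-11) ** 2)
pulse_hh = envelope + 0.01 * rng.standard_normal(t.size)
pulse_hv = 0.2 * envelope + 0.01 * rng.standard_normal(t.size)

f, p_hh = power_spectrum(pulse_hh, fs)
_, p_hv = power_spectrum(pulse_hv, fs)
```

Repeating this per pixel and per polarization pair yields the co- and cross-polarization power spectrum images; assembling the 16 Mueller matrix elements from such measurements is a further step not shown here.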
Impact of retraining and data partitions on the generalizability of a deep learning model in the task of COVID-19 classification on chest radiographs.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-12-26 | DOI: 10.1117/1.JMI.11.6.064503
Authors: Mena Shenouda, Heather M Whitney, Maryellen L Giger, Samuel G Armato

Purpose: To investigate the impact of different model retraining schemes and data partitioning on model performance in the task of COVID-19 classification on standard chest radiographs (CXRs), in the context of model generalizability.
Approach: Two datasets from the same institution were used: Set A (9860 patients, collected 02/20/2020 to 02/03/2021) and Set B (5893 patients, collected 03/15/2020 to 01/01/2022). An original deep learning (DL) model trained and tested on the initial partition of Set A achieved an area under the curve (AUC) of 0.76, whereas Set B yielded a significantly lower value of 0.67. To explore this discrepancy, four strategies were applied to the original model: (1) retraining on Set B, (2) fine-tuning on Set B, (3) L2 regularization, and (4) repartitioning the training set from Set A 200 times and reporting the AUC values.
Results: The model achieved the following AUC values (95% confidence interval) for the four methods: (1) 0.61 [0.56, 0.66] and (2) 0.70 [0.66, 0.73], both on Set B; (3) 0.76 [0.72, 0.79] on the initial test partition of Set A and 0.68 [0.66, 0.70] on Set B; and (4) 0.71 ± 0.013 on repartitions of Set A. The lowest AUC of the Set A repartitions (0.66 [0.62, 0.69]) was no longer significantly different from the initial 0.67 achieved on Set B.
Conclusions: Different repartitions of the same dataset used to train a DL model produced significantly different performance values, helping to explain the discrepancy between Set A and Set B and further demonstrating the limits of model generalizability.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11670362/pdf/
Citations: 0
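Strategy (4) above, repeatedly repartitioning one dataset and recording the spread of AUC values, can be sketched with scikit-learn. This is a toy stand-in: synthetic data and a logistic regression replace the paper's CXR data and DL model, and 20 repartitions stand in for the study's 200:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the imaging dataset.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

aucs = []
for seed in range(20):  # the study used 200 repartitions
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

aucs = np.array(aucs)
print(f"AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```

The spread of `aucs` across seeds is the quantity of interest: if it is wide, a single train/test split can over- or under-state how well the model generalizes.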
Centerline-guided reinforcement learning model for pancreatic duct identifications.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-11-08 | DOI: 10.1117/1.JMI.11.6.064002
Authors: Sepideh Amiri, Reza Karimzadeh, Tomaž Vrtovec, Erik Gudmann Steuble Brandt, Henrik S Thomsen, Michael Brun Andersen, Christoph Felix Müller, Anders Bertil Rodell, Bulat Ibragimov

Purpose: Pancreatic ductal adenocarcinoma is forecast to become the second leading cause of cancer mortality as the number of patients with cancer in the main pancreatic duct grows, and measurement of the pancreatic duct diameter from medical images has been identified as relevant to its early diagnosis.
Approach: We propose an automated pancreatic duct centerline tracing method for computed tomography (CT) images based on deep reinforcement learning: an artificial agent interacts with the environment, with rewards calculated by combining the distances to the target and to the centerline. A deep neural network forecasts step-wise values for each potential action, allowing the agent to probe along the pancreatic duct centerline via the best possible navigational path. To enhance tracing accuracy, we employ landmark-based registration to generate a probability map of the pancreatic duct, then apply a gradient-based method to the registered data to extract a probability map specifically indicating the duct centerline.
Results: Three datasets totaling 115 CT images were used to evaluate the proposed method. Using image hold-out from the first two datasets, the method achieved 2.0, 4.0, and 2.1 mm in mean detection error, Hausdorff distance (HD), and root mean squared error (RMSE), respectively. Using the first two datasets for training and the third for testing, it achieved 2.2, 4.9, and 2.6 mm in mean detection error, HD, and RMSE, respectively.
Conclusions: We present an algorithm for automated pancreatic duct centerline tracing using deep reinforcement learning. Validation on an external dataset confirms the method's potential for practical utilization.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543826/pdf/
Citations: 0
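The reward described above combines the agent's distance to the target with its distance to the centerline. One plausible shape for such a reward, as an illustrative sketch only (the function name, the linear weighting `alpha`, and the coordinates are assumptions, not the paper's exact formulation):

```python
import numpy as np

def step_reward(pos, target, centerline, alpha=0.5):
    """Illustrative reward: the negated weighted sum of the distance to the
    final target and the distance to the nearest centerline point, so the
    agent is rewarded for staying on the duct while progressing toward it."""
    d_target = np.linalg.norm(target - pos)
    d_center = np.min(np.linalg.norm(centerline - pos, axis=1))
    return -(alpha * d_target + (1 - alpha) * d_center)

# A straight toy centerline in 3D voxel coordinates.
centerline = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
target = centerline[-1]

# A position on the centerline near the target scores higher than one far off it.
r_near = step_reward(np.array([1.9, 0.0, 0.0]), target, centerline)
r_far = step_reward(np.array([0.0, 2.0, 0.0]), target, centerline)
```

In the full method, a deep network estimates such step-wise values for every candidate action so the agent can choose the best navigational step.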
Evaluation of monocular and binocular contrast perception on virtual reality head-mounted displays.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-09-14 | DOI: 10.1117/1.JMI.11.6.062605
Authors: Khushi Bhansali, Miguel A Lago, Ryan Beams, Chumin Zhao

Purpose: Visualization of medical images on a virtual reality (VR) head-mounted display (HMD) requires binocular fusion of a stereoscopic pair of graphical views. However, image quality assessment of VR HMDs for medical applications has so far been largely limited to time-consuming monocular optical bench measurements on a single eyepiece.
Approach: As an alternative to optical bench measurement, we developed a WebXR test platform to perform contrast perceptual experiments that can be used for binocular image quality assessment. We obtained monocular and binocular contrast sensitivity responses (CSRs) from participants on a Meta Quest 2 VR HMD using varied interpupillary distance (IPD) configurations.
Results: Contrast perception on VR HMDs is primarily affected by the HMD's optical aberration. As a result, monocular CSR degrades at spatial frequencies above 4 cycles per degree when gazing at the periphery of the display field of view, especially for mismatched IPD settings, consistent with optical bench measurements. Binocular contrast perception, by contrast, is dominated by the monocular view with the superior contrast-measured image quality.
Conclusions: We developed a test platform to investigate monocular and binocular contrast perception through perceptual experiments. The method can be used to evaluate monocular and/or binocular image quality on VR HMDs for potential medical applications without extensive optical bench measurements.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11401613/pdf/
Citations: 0
Multimodality model investigating the impact of brain atlases, connectivity measures, and dimensionality reduction techniques on Attention Deficit Hyperactivity Disorder diagnosis using resting state functional connectivity.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-11-01 | Epub: 2024-12-20 | DOI: 10.1117/1.JMI.11.6.064502
Authors: Deepika, Meghna Sharma, Shaveta Arora

Purpose: Various brain atlases are available to parcellate and analyze brain connections. Most traditional machine learning and deep learning studies of Attention Deficit Hyperactivity Disorder (ADHD) have used only one or two brain atlases, and comprehensive research evaluating the impact of different atlases and associated factors, such as connectivity measures and dimension reduction techniques, on ADHD diagnosis is lacking.
Approach: This paper proposes an efficient and robust multimodality model that investigates brain atlases employing different parcellation strategies and scales. Thirty combinations of six brain atlases and five machine learning classifiers with optimized hyperparameters are implemented to identify the most promising atlas for ADHD diagnosis, with outcomes validated using the Friedman test. For comprehensiveness, the impact of three connectivity measures, each representing a distinct facet of brain connectivity, is also analyzed, as is the effect of various dimension reduction techniques on classification performance and execution time, given the extensive complexity of brain interconnections. The final model is integrated with phenotypic data to create an efficient multimodal ADHD classification model.
Results: Experiments on the ADHD-200 dataset demonstrate significant variation in classification performance introduced by each factor. The proposed model outperforms many state-of-the-art ADHD approaches, achieving 77.59% accuracy, an area under the curve (AUC) of 77.25%, and an F1-score of 75.43%.
Conclusions: The proposed model offers clear guidance for researchers, helping to standardize atlas selection and associated factors and to improve the consistency and accuracy of ADHD studies for more reliable clinical application.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11661636/pdf/
Citations: 0
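The study above validates differences between classifiers across atlases with the Friedman test, a non-parametric test for consistent rank differences across repeated measurements. A minimal SciPy sketch with hypothetical accuracies (six atlases as rows, five classifiers as columns; the real study evaluates 30 atlas/classifier combinations):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical accuracy of five classifiers (columns) on six atlases (rows).
acc = np.array([
    [0.71, 0.74, 0.69, 0.72, 0.70],
    [0.73, 0.77, 0.70, 0.74, 0.72],
    [0.70, 0.75, 0.68, 0.71, 0.69],
    [0.72, 0.76, 0.71, 0.73, 0.70],
    [0.69, 0.73, 0.67, 0.70, 0.68],
    [0.74, 0.78, 0.72, 0.75, 0.73],
])

# Each classifier's accuracies across atlases form one group; the test asks
# whether the classifiers rank consistently differently across atlases.
stat, p_value = friedmanchisquare(*acc.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4g}")
```

Because it operates on ranks within each atlas, the Friedman test does not assume normally distributed accuracies, which suits small numbers of evaluation conditions.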
Characterization of arteriosclerosis based on computer-aided measurements of intra-arterial thickness.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-09-01 | Epub: 2024-10-10 | DOI: 10.1117/1.JMI.11.5.057501
Authors: Jin Zhou, Xiang Li, Dawit Demeke, Timothy A Dinh, Yingbao Yang, Andrew R Janowczyk, Jarcy Zee, Lawrence Holzman, Laura Mariani, Krishnendu Chakrabarty, Laura Barisoni, Jeffrey B Hodgin, Kyle J Lafata

Purpose: To develop a computer vision approach that quantifies intra-arterial thickness on digital pathology images of kidney biopsies as a computational biomarker of arteriosclerosis.
Approach: Arteriosclerosis severity was scored (0 to 3) in 753 arteries from 33 trichrome-stained whole slide images (WSIs) of kidney biopsies, and the outer contours of the media, intima, and lumen were manually delineated by a renal pathologist. We then developed a multi-class deep learning (DL) framework for segmenting the intra-arterial compartments (training: 648 arteries from 24 WSIs; testing: 105 arteries from 9 WSIs). Using radial sampling, we measured media and intima thickness as a function of spatially encoded polar coordinates throughout the artery and extracted pathomic features that collectively describe the arterial wall. The technique was first validated through numerical analysis of simulated arteries, with systematic deformations applied to study their effect on thickness measurements; the computationally derived measurements were then compared with the pathologists' arteriosclerosis grading.
Results: Numerical validation showed that the measurement technique adeptly captured the decreasing smoothness of intima and media thickness as deformation increased in the simulated arteries. Intra-arterial DL segmentations of media, intima, and lumen achieved Dice scores of 0.84, 0.78, and 0.86, respectively. Kendall's tau analysis identified several significant associations between arteriosclerosis grade and pathomic features (e.g., average intima-media ratio: τ = 0.52, p < 0.0001).
Conclusions: We developed a computer vision approach to computationally characterize intra-arterial morphology on digital pathology images and demonstrated its feasibility as a potential computational biomarker of arteriosclerosis.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466048/pdf/
Citations: 0
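The association between the ordinal arteriosclerosis grade (0 to 3) and a continuous pathomic feature is measured above with Kendall's tau, which handles ordinal data and ties directly. A minimal SciPy sketch with hypothetical grades and feature values (the study's τ = 0.52 comes from its own data, not from these numbers):

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical data: pathologist grade (0-3) and an intima-media-ratio-style
# feature for ten arteries.
grade = np.array([0, 0, 1, 1, 2, 2, 3, 3, 1, 2])
feature = np.array([0.30, 0.35, 0.48, 0.44, 0.61, 0.58, 0.82, 0.77, 0.41, 0.66])

# Kendall's tau-b: rank correlation with a tie correction for repeated grades.
tau, p_value = kendalltau(grade, feature)
print(f"tau = {tau:.2f}, p = {p_value:.4g}")
```

A rank-based measure is a sensible choice here because the grade is ordinal: only the ordering of severities matters, not their numeric spacing.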
Investigating the use of signal detection information in supervised learning-based image denoising with consideration of task-shift.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-09-01 | Epub: 2024-09-05 | DOI: 10.1117/1.JMI.11.5.055501
Authors: Kaiyan Li, Hua Li, Mark A Anastasio

Purpose: Learning-based denoising methods that incorporate task-relevant information into training have recently been developed to enhance the utility of denoised images, but this line of research is relatively new and some fundamental issues remain unexplored. Our purpose is to yield insights into general issues with these task-informed methods, including the impact of denoising on objective measures of image quality (IQ) when the task specified at inference time differs from the one used for model training, a phenomenon we refer to as "task-shift."
Approach: A virtual imaging test bed comprising a stylized computational model of a chest X-ray computed tomography imaging system enabled a controlled and tractable study design. A canonical, fully supervised, convolutional neural network-based denoising method was purposely adopted to surface issues relevant to a variety of applications and to more advanced denoising or image reconstruction methods. Signal detection and signal detection-localization tasks were considered under signal-known-statistically, background-known-statistically conditions, and several distinct numerical observers were employed to estimate task performance. Studies were designed to reveal how a task-informed transfer-learning approach influences the tradeoff between conventional and task-based measures of image quality within the context of the considered tasks, and the impact of task-shift on these measures was assessed.
Results: Certain tradeoffs could be achieved such that the resulting AUC improved significantly while the degradation of physical IQ measures was statistically insignificant. Introducing task-shift degraded task performance, as expected; the degradation was significant when a relatively simple task was used for network training and observer performance on a more complex task was assessed at inference time.
Conclusions: Task-informed training can improve observer performance while providing control over the tradeoff between traditional and task-based measures of image quality. The behavior of a task-informed model fine-tuning procedure was demonstrated, and the impact of task-shift on task-based image quality measures was investigated.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11376226/pdf/
Citations: 0
Optimizing mammography interpretation education: leveraging deep learning for cohort-specific error detection to enhance radiologist training.
Journal of Medical Imaging (IF 1.9) | Pub Date: 2024-09-01 | Epub: 2024-10-03 | DOI: 10.1117/1.JMI.11.5.055502
Authors: Xuetong Tao, Warren M Reed, Tong Li, Patrick C Brennan, Ziba Gandomkar

Purpose: Accurate interpretation of mammograms is challenging, and tailoring mammography training to reader profiles promises to be an effective strategy for reducing errors. This proof-of-concept study investigated the feasibility of using convolutional neural networks (CNNs) with transfer learning to categorize regions associated with false-positive (FP) errors in screening mammograms as having a "low" or "high" likelihood of being a false-positive detection for radiologists sharing similar geographic characteristics.
Approach: Mammography test sets assessed by two geographically distant cohorts of radiologists (cohorts A and B) were collected. FP patches within these mammograms were segmented and categorized as "difficult" or "easy" based on the number of readers committing FP errors: patches beyond 1.5 times the interquartile range above the upper quartile were labeled difficult, and the remaining patches easy. Using transfer learning, a patch-wise CNN for binary patch classification was developed with ResNet as the feature extractor and fully connected layers modified for the target task. Model performance was assessed using 10-fold cross-validation.
Results: Compared with other architectures, the transferred ResNet-50 achieved the highest performance, with receiver operating characteristic area under the curve values of 0.933 (±0.012) and 0.975 (±0.011) on the validation sets for cohorts A and B, respectively.
Conclusions: The findings highlight the feasibility of CNN-based transfer learning for predicting the difficulty of local FP patches in screening mammograms for a specific radiologist cohort with similar geographic characteristics.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11447382/pdf/
Citations: 0
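The difficult/easy labeling rule above is a standard upper-outlier criterion: a patch is "difficult" when its reader FP count exceeds the upper quartile plus 1.5 times the interquartile range. A minimal NumPy sketch, with hypothetical per-patch counts:

```python
import numpy as np

def label_patches(fp_counts):
    """Label patches 'difficult' when the number of readers making a
    false-positive call exceeds Q3 + 1.5*IQR, else 'easy'."""
    q1, q3 = np.percentile(fp_counts, [25, 75])
    threshold = q3 + 1.5 * (q3 - q1)
    return np.where(fp_counts > threshold, "difficult", "easy")

# Hypothetical reader FP counts for nine patches; one clear outlier.
counts = np.array([0, 1, 1, 2, 2, 2, 3, 3, 9])
labels = label_patches(counts)
```

Because the threshold adapts to the count distribution, only patches that many readers mis-call relative to the rest of the set end up labeled difficult, which is what makes the labels cohort-specific.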