Breast cancer survivors' perceptual map of breast reconstruction appearance outcomes.
Authors: Haoqi Wang, Xiomara T Gonzalez, Gabriela A Renta-López, Mary Catherine Bordes, Michael C Hout, Seung W Choi, Gregory P Reece, Mia K Markey
Journal of Medical Imaging 12(5): 051802, published 2025-09-01. DOI: 10.1117/1.JMI.12.5.051802. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11921042/pdf/

Purpose: It is often hard for patients to articulate their expectations about breast reconstruction appearance outcomes to their providers. Our overarching goal is to develop a tool to help patients visually express what they expect to look like after reconstruction. We aim to comprehensively understand how breast cancer survivors perceive diverse breast appearance states by mapping them onto a low-dimensional Euclidean space, which simplifies the complex information about perceptual similarity relationships into a more interpretable form.

Approach: We recruited breast cancer survivors and conducted observer experiments to assess the visual similarities among clinical photographs depicting a range of appearances of the torso relevant to breast reconstruction. Then, we developed a perceptual map to illuminate how breast cancer survivors perceive and distinguish among these appearance states.

Results: We sampled 100 photographs as stimuli and recruited 34 breast cancer survivors locally. The resulting perceptual map, constructed in two dimensions, offers valuable insights into factors influencing breast cancer survivors' perceptions of breast reconstruction outcomes. Our findings highlight specific aspects, such as the number of nipples, symmetry, ptosis, scars, and breast shape, that emerge as particularly noteworthy for breast cancer survivors.

Conclusions: Analysis of the perceptual map identified factors associated with breast cancer survivors' perceptions of breast appearance states that should be emphasized in the appearance consultation process. The perceptual map could be used to assist patients in visually expressing what they expect to look like. Our study lays the groundwork for evaluating interventions intended to help patients form realistic expectations.
Using a limited field of view to improve training for pulmonary nodule detection on radiographs.
Authors: Samual K Zenger, Rishabh Agarwal, William F Auffermann
Journal of Medical Imaging 12(5): 051804, published 2025-09-01. DOI: 10.1117/1.JMI.12.5.051804. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12023444/pdf/

Purpose: Perceptual error is a significant cause of medical errors in radiology. Given the amount of information in a medical image, an image interpreter may become distracted by information unrelated to their search pattern. This may be especially challenging for novices. We examine teaching medical trainees to evaluate chest radiographs (CXRs) for pulmonary nodules on limited field-of-view (LFOV) images, with the field of view restricted to the lungs and mediastinum.

Approach: Healthcare trainees with limited experience interpreting images were asked to identify pulmonary nodules on CXRs, half of which contained nodules. The control and experimental groups evaluated two sets of CXRs. After the first set, the experimental group was trained to evaluate LFOV images, and both groups were again asked to assess CXRs for pulmonary nodules. Participants were given surveys after this educational session to determine their thoughts about the training and their symptoms of computer vision syndrome (CVS).

Results: There was a significant improvement in pulmonary nodule identification for both the experimental and control groups, but the improvement was greater in the experimental group (p = 0.022). Survey responses were uniformly positive, and each question was statistically significant (all p-values < 0.001).

Conclusions: Our results show that using LFOV images may be helpful when teaching trainees specific high-yield perceptual tasks, such as nodule identification. The use of LFOV images was associated with reduced symptoms of CVS.
Improving annotation efficiency for fully labeling a breast mass segmentation dataset.
Authors: Vaibhav Sharma, Alina Jade Barnett, Julia Yang, Sangwook Cheon, Giyoung Kim, Fides Regina Schwartz, Avivah Wang, Neal Hall, Lars Grimm, Chaofan Chen, Joseph Y Lo, Cynthia Rudin
Journal of Medical Imaging 12(3): 035501, published 2025-05-01. DOI: 10.1117/1.JMI.12.3.035501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094908/pdf/

Purpose: Breast cancer remains a leading cause of death for women. Screening programs are deployed to detect cancer at early stages. One current barrier identified by breast imaging researchers is a shortage of labeled image datasets. Addressing this problem is crucial to improving early detection models. We present an active learning (AL) framework for segmenting breast masses from 2D digital mammography, and we publish the labeled data. Our method aims to reduce the input needed from expert annotators to reach a fully labeled dataset.

Approach: We create a dataset of 1136 mammographic masses with pixel-wise binary segmentation labels, with the test subset labeled independently by two different teams. With this dataset, we simulate a human annotator within an AL framework to develop and compare AI-assisted labeling methods, using a discriminator model and a simulated oracle to collect acceptable segmentation labels. A UNet model is retrained on these labels, generating new segmentations. We evaluate various oracle heuristics using the percentage of segmentations that the oracle relabels, and we measure the quality of the proposed labels by evaluating the intersection over union (IoU) on a validation dataset.

Results: Our method reduces expert annotator input by 44%. We present a dataset of 1136 binary segmentation labels approved by board-certified radiologists and make the 143-image validation set public for comparison with other researchers' methods.

Conclusions: We demonstrate that AL can significantly improve the efficiency and time-effectiveness of creating labeled mammogram datasets. Our framework facilitates the development of high-quality datasets while minimizing manual effort in the domain of digital mammography.
DECE-Net: a dual-path encoder network with contour enhancement for pneumonia lesion segmentation.
Authors: Tianyang Wang, Xiumei Li, Ruyu Liu, Meixi Wang, Junmei Sun
Journal of Medical Imaging 12(3): 034503, published 2025-05-01. DOI: 10.1117/1.JMI.12.3.034503. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12101900/pdf/

Purpose: Early-stage pneumonia is not easily detected, leading many patients to miss the optimal treatment window. Segmenting lesion areas from CT images presents several challenges, including low intensity contrast between lesion and normal areas as well as variations in the shape and size of lesion areas. To overcome these challenges, we propose a segmentation network called DECE-Net to segment pneumonia lesions from CT images automatically.

Approach: DECE-Net adds an extra encoder path to the U-Net: one encoder path extracts features of the original CT image with the attention multi-scale feature fusion module, and the other extracts contour features from the CT contour image with the contour feature extraction module to compensate for and enhance the boundary information lost during downsampling. The network further fuses the low-level features from both encoder paths through the feature fusion attention connection module and connects them to the upsampled high-level features, replacing the skip connections of the U-Net. Finally, multi-point deep supervision is applied to the segmentation results at each scale to improve segmentation accuracy.

Results: We evaluate DECE-Net on four public COVID-19 segmentation datasets. The mIoU results for the four datasets are 80.76%, 84.59%, 84.41%, and 78.55%, respectively.

Conclusions: The experimental results indicate that the proposed DECE-Net achieves state-of-the-art performance, especially in the precise segmentation of small lesion areas.
Convolutional variational auto-encoder and vision transformer hybrid approach for enhanced early Alzheimer's detection.
Authors: Harshani Fonseka, Soheil Varastehpour, Masoud Shakiba, Ehsan Golkar, David Tien
Journal of Medical Imaging 12(3): 034501, published 2025-05-01. DOI: 10.1117/1.JMI.12.3.034501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094909/pdf/

Purpose: Alzheimer's disease (AD) is becoming more prevalent among the elderly, with projections indicating that it will affect a significantly larger population in the future. Despite substantial research efforts and investments focused on exploring the underlying biological factors, a definitive cure has yet to be discovered. Currently available treatments are only effective in slowing disease progression if the disease is identified in its early stages. Therefore, early diagnosis has become critical in treating AD.

Approach: Recently, deep learning techniques have demonstrated remarkable improvements in the precision and speed of automatic AD diagnosis through medical image analysis. We propose a hybrid model that integrates a convolutional variational auto-encoder (CVAE) with a vision transformer (ViT). During the encoding phase, the CVAE captures key features from the MRI scans, whereas the decoding phase reduces irrelevant details in the MRIs. These refined inputs enhance the ViT's ability to analyze complex patterns through its multihead attention mechanism.

Results: The model was trained and evaluated using 14,000 structural MRI samples from the ADNI and SCAN databases. Compared with three benchmark methods and previous studies of Alzheimer's classification techniques, our approach achieved a significant improvement, with a test accuracy of 93.3%.

Conclusions: Through this research, we identified the potential of the CVAE-ViT hybrid approach in detecting minor structural abnormalities related to AD. Integrating unsupervised feature extraction via the CVAE can significantly enhance transformer-based models in distinguishing between stages of cognitive impairment, thereby identifying early indicators of AD.
Classifying chronic obstructive pulmonary disease status using computed tomography imaging and convolutional neural networks: comparison of model input image types and training data severity.
Authors: Sara Rezvanjou, Amir Moslemi, Samuel Peterson, Wan-Cheng Tan, James C Hogg, Jean Bourbeau, Joseph M Reinhardt, Miranda Kirby
Journal of Medical Imaging 12(3): 034502, published 2025-05-01. DOI: 10.1117/1.JMI.12.3.034502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12097752/pdf/

Purpose: Convolutional neural network (CNN)-based models using computed tomography images can classify chronic obstructive pulmonary disease (COPD) with high performance, but various input image types have been investigated, and it is unclear which image types are optimal. We propose a 2D airway-optimized topological multiplanar reformat (tMPR) input image and compare its performance with established 2D/3D input image types for COPD classification. As a secondary aim, we examined the impact of training on a dataset with predominantly mild COPD cases and testing on a more severe dataset, to assess whether training-data severity affects generalizability.

Approach: CanCOLD study participants were used for training/internal testing; SPIROMICS participants were used for external testing. Several 2D/3D input image types were adapted from the literature. In the proposed models, 2D airway-optimized tMPR images (to convey shape and interior/contextual information) and 3D output fusion of axial/sagittal/coronal images were investigated. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance, and Brier scores were used to evaluate model calibration. To further examine how training dataset severity impacts generalization, we compared model performance when trained on the milder CanCOLD dataset versus the more severe SPIROMICS dataset, and vice versa.

Results: A total of n = 742 CanCOLD participants were used for training/validation and n = 309 for testing; n = 448 SPIROMICS participants were used for external testing. For the CanCOLD and SPIROMICS test sets, the proposed 2D tMPR on its own (CanCOLD: AUC = 0.79; SPIROMICS: AUC = 0.94) and combined with the 3D axial/coronal/sagittal lung view (CanCOLD: AUC = 0.82; SPIROMICS: AUC = 0.93) had the highest performance. The combined 2D tMPR and 3D axial/coronal/sagittal lung view had the lowest Brier score (CanCOLD: score = 0.16; SPIROMICS: score = 0.24). Conversely, using SPIROMICS for training/testing and CanCOLD for external testing resulted in lower performance when tested on CanCOLD, both for 2D tMPR on its own (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.74) and combined with the 3D axial/coronal/sagittal lung view (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.75).

Conclusions: The CNN-based model with the combined 2D tMPR images and 3D lung view as input image types had the highest performance for COPD classification, highlighting the imp…
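The two evaluation metrics used above, AUC for discrimination and the Brier score for calibration, can be computed as in the sketch below; the labels and probabilities are synthetic placeholders, and the per-view averaging at the end is only meant to indicate what output fusion of axial/coronal/sagittal predictions looks like.

```python
# Minimal sketch with synthetic labels/probabilities, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=309)                               # 1 = COPD, 0 = no COPD
y_prob = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 309), 0, 1)    # placeholder model output

auc = roc_auc_score(y_true, y_prob)        # area under the ROC curve (higher is better)
brier = brier_score_loss(y_true, y_prob)   # mean squared error of probabilities (lower is better)
print(f"AUC = {auc:.2f}, Brier score = {brier:.2f}")

# Output fusion across views: average the per-view probabilities before scoring.
p_views = np.stack([y_prob, np.clip(y_prob + 0.05, 0, 1), np.clip(y_prob - 0.05, 0, 1)])
p_fused = p_views.mean(axis=0)
print(f"fused AUC = {roc_auc_score(y_true, p_fused):.2f}")
```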
Distributed and networked analysis of volumetric image data for remote collaboration of microscopy image analysis.
Authors: Alain Chen, Shuo Han, Soonam Lee, Chichen Fu, Changye Yang, Liming Wu, Seth Winfree, Kenneth W Dunn, Paul Salama, Edward J Delp
Journal of Medical Imaging 12(2): 024001, published 2025-03-01. DOI: 10.1117/1.JMI.12.2.024001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11895998/pdf/

Purpose: The advancement of high-content optical microscopy has enabled the acquisition of very large three-dimensional (3D) image datasets. The analysis of these image volumes requires more computational resources than a biologist may have access to on a typical desktop or laptop computer. This is especially true if machine learning tools are being used for image analysis. With the increased amount of data analysis and computational complexity, there is a need for a more accessible, easy-to-use, and efficient network-based 3D image processing system. The distributed and networked analysis of volumetric image data (DINAVID) system was developed to enable remote analysis of 3D microscopy images for biologists.

Approach: We present an overview of the DINAVID system and compare it to other tools currently available for microscopy image analysis. DINAVID is designed using open-source tools and has two main sub-systems: a computational system for 3D microscopy image processing and analysis, and a 3D visualization system.

Results: DINAVID is a network-based system with a simple web interface that allows biologists to upload 3D volumes for analysis and visualization. DINAVID enables an image access model in which a center hosts image volumes and remote users analyze those volumes, without the need for remote users to manage any computational resources.

Conclusions: The DINAVID system, designed and developed using open-source tools, enables biologists to analyze and visualize 3D microscopy volumes remotely without the need to manage computational resources. DINAVID also provides several image analysis tools, including pre-processing and several segmentation models.
{"title":"SAM-MedUS: a foundational model for universal ultrasound image segmentation.","authors":"Feng Tian, Jintao Zhai, Jinru Gong, Weirui Lei, Shuai Chang, Fangfang Ju, Shengyou Qian, Xiao Zou","doi":"10.1117/1.JMI.12.2.027001","DOIUrl":"https://doi.org/10.1117/1.JMI.12.2.027001","url":null,"abstract":"<p><strong>Purpose: </strong>Segmentation of ultrasound images for medical diagnosis, monitoring, and research is crucial, and although existing methods perform well, they are limited by specific organs, tumors, and image devices. Applications of the Segment Anything Model (SAM), such as SAM-med2d, use a large number of medical datasets that contain only a small fraction of the ultrasound medical images.</p><p><strong>Approach: </strong>In this work, we proposed a SAM-MedUS model for generic ultrasound image segmentation that utilizes the latest publicly available ultrasound image dataset to create a diverse dataset containing eight site categories for training and testing. We integrated ConvNext V2 and CM blocks in the encoder for better global context extraction. In addition, a boundary loss function is used to improve the segmentation of fuzzy boundaries and low-contrast ultrasound images.</p><p><strong>Results: </strong>Experimental results show that SAM-MedUS outperforms recent methods on multiple ultrasound datasets. For the more easily datasets such as the adult kidney, it achieves 87.93% IoU and 93.58% dice, whereas for more complex ones such as the infant vein, IoU and dice reach 62.31% and 78.93%, respectively.</p><p><strong>Conclusions: </strong>We collected and collated an ultrasound dataset of multiple different site types to achieve uniform segmentation of ultrasound images. In addition, the use of additional auxiliary branches ConvNext V2 and CM block enhances the ability of the model to extract global information and the use of boundary loss allows the model to exhibit robust performance and excellent generalization ability.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 2","pages":"027001"},"PeriodicalIF":1.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11865838/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143543463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of brain metastasis progression after stereotactic radiosurgery: sensitivity to changing the definition of progression.
Authors: Robert Policelli, David DeVries, Joanna Laba, Andrew Leung, Terence Tang, Ali Albweady, Ghada Alqaidy, Aaron D Ward
Journal of Medical Imaging 12(2): 024504, published 2025-03-01. DOI: 10.1117/1.JMI.12.2.024504. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11978467/pdf/

Purpose: Machine learning (ML) has been used to predict tumor progression post-stereotactic radiosurgery (SRS) based on pre-treatment MRI for brain metastasis (BM) patients, but there is variability in the definition of what constitutes progression. We aim to measure the magnitude of the change in performance of an ML model predicting post-SRS progression when various definitions of progression are used.

Approach: We collected pre- and post-SRS contrast-enhanced T1-weighted MRI scans from 62 BM patients (n = 115 BMs). We trained a random decision forest model using radiomic features extracted from pre-SRS scans to predict progression versus non-progression for each BM. We varied the definition of progression by changing (1) the follow-up period (<9, <12, <15, <18, or <24 months); (2) the size change metric denoting progression (≥10%, ≥15%, ≥20%, or ≥25% increase in volume, or response assessment in neuro-oncology BM diameter ≥20%); and (3) whether BMs with treatment-related size changes (TRSCs), i.e., pseudo-progression and/or radiation necrosis, were labeled as progression. We measured performance using the area under the receiver operating characteristic curve (AUC).

Results: When we varied the follow-up period, size change metric, and TRSC labeling, the AUCs had ranges of 0.06 (0.69 to 0.75), 0.06 (0.69 to 0.75), and 0.08 (0.69 to 0.77), respectively. Radiomic feature importance remained similar.

Conclusions: Variability in the definition of BM progression has a measurable impact on the performance of an MRI radiomic-based ML model predicting post-SRS progression. A consistent, clinically relevant definition of post-SRS progression across studies would enable robust comparison of proposed ML systems, thereby accelerating progress in this field.
Using a fully automated, quantitative fissure integrity score extracted from chest CT scans of emphysema patients to predict endobronchial valve response.
Authors: Dallas K Tada, Grace H Kim, Jonathan G Goldin, Pangyu Teng, Kalyani Vyapari, Ashley Banola, Fereidoun Abtin, Michael McNitt-Gray, Matthew S Brown
Journal of Medical Imaging 12(2): 024501, published 2025-03-01. DOI: 10.1117/1.JMI.12.2.024501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906092/pdf/

Purpose: We aim to develop and validate a prediction model using a previously developed, fully automated quantitative fissure integrity score (FIS) extracted from pre-treatment CT images to identify suitable candidates for endobronchial valve (EBV) treatment.

Approach: We retrospectively collected 96 anonymized pre- and post-treatment chest computed tomography (CT) exams from patients with moderate to severe emphysema who underwent EBV treatment. We used a previously developed, fully automated, deep learning-based approach to quantitatively assess the completeness of each fissure by obtaining the FIS for each fissure from each patient's pre-treatment CT exam. The response to EBV treatment was recorded as the amount of targeted lobe volume reduction (TLVR) relative to the target lobe volume prior to treatment, as assessed on the pre- and post-treatment CT scans. EBV placement was considered successful with a TLVR of ≥350 cc. The dataset was split into a training set (N = 58) and a test set (N = 38) to train and validate a logistic regression model using fivefold cross-validation; the extracted FIS of each patient's targeted treatment lobe was the primary CT predictor. Using the training set, receiver operating characteristic (ROC) curve analysis and predictive values were quantified over a range of FIS thresholds to determine an optimal cutoff value distinguishing complete from incomplete fissures, which was then used to evaluate predictive values on the test set.

Results: ROC analysis of the training set yielded an AUC of 0.83, and the determined FIS threshold was 89.5%. Using this threshold on the test set achieved an accuracy of 81.6%, specificity of 90.9%, sensitivity of 77.8%, positive predictive value of 62.5%, and negative predictive value of 95.5%.

Conclusions: A model using the quantified FIS shows potential as a predictive biomarker for whether a targeted lobe will achieve successful volume reduction from EBV treatment.