{"title":"Spectral optimization using fast kV switching and filtration for photon counting CT with realistic detector responses: a simulation study.","authors":"Sen Wang, Yirong Yang, Debashish Pal, Zhye Yin, Jonathan S Maltz, Norbert J Pelc, Adam S Wang","doi":"10.1117/1.JMI.11.S1.S12805","DOIUrl":"10.1117/1.JMI.11.S1.S12805","url":null,"abstract":"<p><strong>Purpose: </strong>Photon counting CT (PCCT) provides spectral measurements for material decomposition. However, the image noise (at a fixed dose) depends on the source spectrum. Our study investigates the potential benefits from spectral optimization using fast kV switching and filtration to reduce noise in material decomposition.</p><p><strong>Approach: </strong>The effect of the input spectra on noise performance in both two-basis material decomposition and three-basis material decomposition was compared using Cramer-Rao lower bound analysis in the projection domain and in a digital phantom study in the image domain. The fluences of different spectra were normalized using the CT dose index to maintain constant dose levels. Four detector response models based on Si or CdTe were included in the analysis.</p><p><strong>Results: </strong>For single kV scans, kV selection can be optimized based on the imaging task and object size. Furthermore, our results suggest that noise in material decomposition can be substantially reduced with fast kV switching. For two-material decomposition, fast kV switching reduces the standard deviation (SD) by <math><mrow><mo>∼</mo> <mn>10</mn> <mo>%</mo></mrow> </math> . For three-material decomposition, greater noise reduction in material images was found with fast kV switching (26.2% for calcium and 25.8% for iodine, in terms of SD), which suggests that challenging tasks benefit more from the richer spectral information provided by fast kV switching.</p><p><strong>Conclusions: </strong>The performance of PCCT in material decomposition can be improved by optimizing source spectrum settings. Task-specific tube voltages can be selected for single kV scans. Also, our results demonstrate that utilizing fast kV switching can substantially reduce the noise in material decomposition for both two- and three-material decompositions, and a fixed Gd filter can further enhance such improvements for two-material decomposition.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12805"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11272100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141789484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Number of energy windows for photon counting detectors: is more actually more?","authors":"Katsuyuki Taguchi","doi":"10.1117/1.JMI.11.S1.S12807","DOIUrl":"10.1117/1.JMI.11.S1.S12807","url":null,"abstract":"<p><strong>Purpose: </strong>It has been debated whether photon counting detectors (PCDs) with moderate numbers of energy windows ( <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> ) perform better than PCDs with higher <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> . A higher <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> results in fewer photons in each energy window, which degrades the signal-to-noise ratio of each datum. Unlike energy-integrating detectors, PCDs add very little electronic noise to measured counts; however, there exists electronic noise on the pulse train, to which multiple energy thresholds are applied to count photons. The noise may increase the uncertainty of counts within energy windows; however, this effect has not been studied in the context of spectral imaging tasks. We aim to investigate the effect of <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> on the quality of the spectral information in the presence of electronic noise.</p><p><strong>Approach: </strong>We obtained the following three types of PCD data with various <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> (= 2 to 24) and noise levels using a Monte Carlo simulation: (A) A PCD with no electronic noise; (B) realistic PCDs with electronic noise added to the pulse train; and (C) hypothetical PCDs with electronic noise added to each energy window's output, similar to energy-integrating detectors. We evaluated the Cramér-Rao lower bound (CRLB) of estimation for the following two spectral imaging tasks: (a) water-bone material decomposition and (b) K-edge imaging.</p><p><strong>Results: </strong>For both the e-noise-free and realistic PCDs, the CRLB improved monotonically with increasing <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> for both tasks. In contrast, a moderate <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> provided the best CRLB for the hypothetical PCDs, and the optimal <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> was smaller when electronic noise was larger. Adding one energy window to the minimum necessary <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> for a given task gained 66.2% to 68.7% of the improvement <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> <mo>=</mo> <mn>24</mn></mrow> </math> provided.</p><p><strong>Conclusion: </strong>For realistic PCDs, the quality of the spectral information monotonically improves with increasing <math> <mrow><msub><mi>N</mi> <mi>E</mi></msub> </mrow> </math> .</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12807"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11413649/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning estimation of proton stopping power with photon-counting computed tomography: a virtual study.","authors":"Karin Larsson, Dennis Hein, Ruihan Huang, Daniel Collin, Andrea Scotti, Erik Fredenberg, Jonas Andersson, Mats Persson","doi":"10.1117/1.JMI.11.S1.S12809","DOIUrl":"10.1117/1.JMI.11.S1.S12809","url":null,"abstract":"<p><strong>Purpose: </strong>Proton radiation therapy may achieve precise dose delivery to the tumor while sparing non-cancerous surrounding tissue, owing to the distinct Bragg peaks of protons. Aligning the high-dose region with the tumor requires accurate estimates of the proton stopping power ratio (SPR) of patient tissues, commonly derived from computed tomography (CT) image data. Photon-counting detectors for CT have demonstrated advantages over their energy-integrating counterparts, such as improved quantitative imaging, higher spatial resolution, and filtering of electronic noise. We assessed the potential of photon-counting computed tomography (PCCT) for improving SPR estimation by training a deep neural network on a domain transform from PCCT images to SPR maps.</p><p><strong>Approach: </strong>The XCAT phantom was used to simulate PCCT images of the head with CatSim, as well as to compute corresponding ground truth SPR maps. The tube current was set to 260 mA, tube voltage to 120 kV, and number of view angles to 4000. The CT images and SPR maps were used as input and labels for training a U-Net.</p><p><strong>Results: </strong>Prediction of SPR with the network yielded average root mean square errors (RMSE) of 0.26% to 0.41%, which was an improvement on the RMSE for methods based on physical modeling developed for single-energy CT at 0.40% to 1.30% and dual-energy CT at 0.41% to 3.00%, performed on the simulated PCCT data.</p><p><strong>Conclusions: </strong>These early results show promise for using a combination of PCCT and deep learning for estimating SPR, which in extension demonstrates potential for reducing the beam range uncertainty in proton therapy.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12809"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576576/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142689215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrast-to-noise ratio comparison between X-ray fluorescence emission tomography and computed tomography.","authors":"Hadley DeBrosse, Giavanna Jadick, Ling Jian Meng, Patrick La Rivière","doi":"10.1117/1.JMI.11.S1.S12808","DOIUrl":"https://doi.org/10.1117/1.JMI.11.S1.S12808","url":null,"abstract":"<p><strong>Purpose: </strong>We provide a comparison of X-ray fluorescence emission tomography (XFET) and computed tomography (CT) for detecting low concentrations of gold nanoparticles (GNPs) in soft tissue and characterize the conditions under which XFET outperforms energy-integrating CT (EICT) and photon-counting CT (PCCT).</p><p><strong>Approach: </strong>We compared dose-matched Monte Carlo XFET simulations and analytical fan-beam EICT and PCCT simulations. Each modality was used to image a numerical mouse phantom and contrast-depth phantom containing GNPs ranging from 0.05% to 4% by weight in soft tissue. Contrast-to-noise ratios (CNRs) of gold regions were compared among the three modalities, and XFET's detection limit was quantified based on the Rose criterion. A partial field-of-view (FOV) image was acquired for the phantom region containing 0.05% GNPs.</p><p><strong>Results: </strong>For the mouse phantom, XFET produced superior CNR values ( <math><mrow><mi>CNRs</mi> <mo>=</mo> <mn>24.5</mn></mrow> </math> , 21.6, and 3.4) compared with CT images obtained with both energy-integrating ( <math><mrow><mi>CNR</mi> <mo>=</mo> <mn>4.4</mn></mrow> </math> , 4.6, and 1.5) and photon-counting ( <math><mrow><mi>CNR</mi> <mo>=</mo> <mn>6.5</mn></mrow> </math> , 7.7, and 2.0) detection systems. More generally, XFET outperformed CT for superficial imaging depths ( <math><mrow><mo><</mo> <mn>28.75</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> ) for gold concentrations at and above 0.5%. XFET's surface detection limit was quantified as 0.44% for an average phantom dose of 16 mGy compatible with <i>in vivo</i> imaging. XFET's ability to image partial FOVs was demonstrated, and 0.05% gold was easily detected with an estimated dose of <math><mrow><mo>∼</mo> <mn>81.6</mn> <mtext> </mtext> <mi>cGy</mi></mrow> </math> to a localized region of interest.</p><p><strong>Conclusions: </strong>We demonstrate a proof of XFET's benefit for imaging low concentrations of gold at superficial depths and the feasibility of XFET for <i>in vivo</i> metal mapping in preclinical imaging tasks.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12808"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11478016/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.","authors":"Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee","doi":"10.1117/1.JMI.11.6.064003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064003","url":null,"abstract":"<p><strong>Purpose: </strong>Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.</p><p><strong>Approach: </strong>We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.</p><p><strong>Results: </strong>The use of pre-training weights especially masked region prediction-based weights, improved performance, and reduced computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as <math><mrow><mo>∼</mo> <mn>380</mn> <mtext> </mtext> <mi>K</mi></mrow> </math> with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.</p><p><strong>Conclusion: </strong>We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064003"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11550486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.","authors":"Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman","doi":"10.1117/1.JMI.11.6.067501","DOIUrl":"10.1117/1.JMI.11.6.067501","url":null,"abstract":"<p><strong>Purpose: </strong>Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.</p><p><strong>Approach: </strong>We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.</p><p><strong>Results: </strong>On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of <math><mrow><mn>0.34</mn> <mo>±</mo> <mn>0.15</mn></mrow> </math> (prevalence <math><mrow><mn>0.03</mn> <mo>±</mo> <mn>0.01</mn></mrow> </math> ) and <math><mrow><mn>0.47</mn> <mo>±</mo> <mn>0.1</mn></mrow> </math> (prevalence <math><mrow><mn>0.07</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> ), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. 
For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of <math><mrow><mn>0.43</mn> <mo>±</mo> <mn>0.03</mn></mrow> </math> (parent class prevalence 0.21) and <math><mrow><mn>0.94</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> (parent class prevalence 0.49) when using ground truth centroid information.</p><p><strong>Conclusions: </strong>This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537205/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
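Positive predictive value reported alongside class prevalence is the key metric here. Below is a small sketch of those two quantities for one subclass under assumed definitions (PPV from centroid-matched predictions, prevalence as the true class fraction); the class list and labels are synthetic and cover only a subset of the paper's 14 subclasses.

```python
# Minimal sketch (assumed metric definitions, not the authors' evaluation code): positive
# predictive value and class prevalence for one nucleus subclass, given predicted and true
# labels matched at ground-truth centroids. Labels below are synthetic.
import numpy as np

def ppv_and_prevalence(y_true, y_pred, positive_class):
    """PPV = TP / (TP + FP); prevalence = fraction of true labels in the positive class."""
    pred_pos = y_pred == positive_class
    tp = np.sum(pred_pos & (y_true == positive_class))
    ppv = tp / max(pred_pos.sum(), 1)
    prevalence = np.mean(y_true == positive_class)
    return ppv, prevalence

rng = np.random.default_rng(2)
classes = ["helper_T", "cytotoxic_T", "B", "goblet", "progenitor"]   # toy subset of subclasses
y_true = rng.choice(classes, size=5000, p=[0.03, 0.05, 0.12, 0.3, 0.5])
y_pred = np.where(rng.random(5000) < 0.7, y_true, rng.choice(classes, size=5000))  # noisy predictions

ppv, prev = ppv_and_prevalence(y_true, y_pred, "helper_T")
print(f"helper T PPV = {ppv:.2f} (prevalence {prev:.2f})")
```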
{"title":"Vector field attention for deformable image registration.","authors":"Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince","doi":"10.1117/1.JMI.11.6.064001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064001","url":null,"abstract":"<p><strong>Purpose: </strong>Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.</p><p><strong>Approach: </strong>VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.</p><p><strong>Results: </strong>We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.</p><p><strong>Conclusions: </strong>VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064001"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of sequential multi-detector CT and cone-beam CT perfusion maps in 39 subjects with anterior circulation acute ischemic stroke due to a large vessel occlusion.","authors":"John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz","doi":"10.1117/1.JMI.11.6.065502","DOIUrl":"10.1117/1.JMI.11.6.065502","url":null,"abstract":"<p><strong>Purpose: </strong>The critical time between stroke onset and treatment was targeted for reduction by integrating physiological imaging into the angiography suite, potentially improving clinical outcomes. The evaluation was conducted to compare C-Arm cone beam CT perfusion (CBCTP) with multi-detector CT perfusion (MDCTP) in patients with acute ischemic stroke (AIS).</p><p><strong>Approach: </strong>Thirty-nine patients with anterior circulation AIS underwent both MDCTP and CBCTP. Imaging results were compared using an in-house algorithm for CBCTP map generation and RAPID for post-processing. Blinded neuroradiologists assessed images for quality, diagnostic utility, and treatment decision support, with non-inferiority analysis (two one-sided tests for equivalence) and inter-reviewer consistency (Cohen's kappa).</p><p><strong>Results: </strong>The mean time from MDCTP to angiography suite arrival was <math><mrow><mn>50</mn> <mo>±</mo> <mn>34</mn> <mtext> </mtext> <mi>min</mi></mrow> </math> , and that from arrival to the first CBCTP image was <math><mrow><mn>21</mn> <mo>±</mo> <mn>8</mn> <mtext> </mtext> <mi>min</mi></mrow> </math> . Stroke diagnosis accuracies were 96% [93%, 97%] with MDCTP and 91% [90%, 93%] with CBCTP. Cohen's kappa between observers was 0.86 for MDCTP and 0.90 for CBCTP, showing excellent inter-reader consistency. CBCTP's scores for diagnostic utility, mismatch pattern detection, and treatment decisions were noninferior to MDCTP scores (alpha = 0.05) within 20% of the range. MDCTP was slightly superior for image quality and artifact score (1.8 versus 2.3, <math><mrow><mi>p</mi> <mo><</mo> <mn>0.01</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>In this small paper, CBCTP was noninferior to MDCTP, potentially saving nearly an hour per patient if they went directly to the angiography suite upon hospital arrival.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"065502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Role of eXtended Reality use in medical imaging interpretation for pre-surgical planning and intraoperative augmentation.","authors":"Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy","doi":"10.1117/1.JMI.11.6.062607","DOIUrl":"10.1117/1.JMI.11.6.062607","url":null,"abstract":"<p><strong>Purpose: </strong>eXtended Reality (XR) technology, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), is a growing field in healthcare. Each modality offers unique benefits and drawbacks for medical education, simulation, and clinical care. We review current studies to understand how XR technology uses medical imaging to enhance surgical diagnostics, planning, and performance. We also highlight current limitations and future directions.</p><p><strong>Approach: </strong>We reviewed the literature on immersive XR technologies for surgical planning and intraoperative augmentation, excluding studies on telemedicine and 2D video-based training. We cited publications highlighting XR's advantages and limitations in these categories.</p><p><strong>Results: </strong>A review of 556 papers on XR for medical imaging in surgery yielded 155 relevant papers reviewed utilizing the aid of chatGPT. XR technology may improve procedural times, reduce errors, and enhance surgical workflows. It aids in preoperative planning, surgical navigation, and real-time data integration, improving surgeon ergonomics and enabling remote collaboration. However, adoption faces challenges such as high costs, infrastructure needs, and regulatory hurdles. Despite these, XR shows significant potential in advancing surgical care.</p><p><strong>Conclusions: </strong>Immersive technologies in healthcare enhance visualization and understanding of medical conditions, promising better patient outcomes and innovative treatments but face adoption challenges such as cost, technological constraints, and regulatory hurdles. Addressing these requires strategic collaborations and improvements in image quality, hardware, integration, and training.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062607"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11618384/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142802703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiomics for differentiation of somatic <i>BAP1</i> mutation on CT scans of patients with pleural mesothelioma.","authors":"Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato","doi":"10.1117/1.JMI.11.6.064501","DOIUrl":"10.1117/1.JMI.11.6.064501","url":null,"abstract":"<p><strong>Purpose: </strong>The BRCA1-associated protein 1 (<i>BAP1</i>) gene is of great interest because somatic (<i>BAP1</i>) mutations are the most common alteration associated with pleural mesothelioma (PM). Further, germline mutation of the <i>BAP1</i> gene has been linked to the development of PM. This study aimed to explore the potential of radiomics on computed tomography scans to identify somatic <i>BAP1</i> gene mutations and assess the feasibility of radiomics in future research in identifying germline mutations.</p><p><strong>Approach: </strong>A cohort of 149 patients with PM and known somatic <i>BAP1</i> mutation status was collected, and a previously published deep learning model was used to first automatically segment the tumor, followed by radiologist modifications. Image preprocessing was performed, and texture features were extracted from the segmented tumor regions. The top features were selected and used to train 18 separate machine learning models using leave-one-out cross-validation (LOOCV). The performance of the models in distinguishing between <i>BAP1</i>-mutated (<i>BAP1+</i>) and <i>BAP1</i> wild-type (<i>BAP1-</i>) tumors was evaluated using the receiver operating characteristic area under the curve (ROC AUC).</p><p><strong>Results: </strong>A decision tree classifier achieved the highest overall AUC value of 0.69 (95% confidence interval: 0.60 and 0.77). The features selected most frequently through the LOOCV were all second-order (gray-level co-occurrence or gray-level size zone matrices) and were extracted from images with an applied transformation.</p><p><strong>Conclusions: </strong>This proof-of-concept work demonstrated the potential of radiomics to differentiate among <i>BAP1+/-</i> in patients with PM. Future work will extend these methods to the assessment of germline <i>BAP1</i> mutation status through image analysis for improved patient prognostication.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}