Journal of Medical Imaging: Latest Articles

Contrast-to-noise ratio comparison between X-ray fluorescence emission tomography and computed tomography.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-12-01. Epub Date: 2024-10-15. DOI: 10.1117/1.JMI.11.S1.S12808
Hadley DeBrosse, Giavanna Jadick, Ling Jian Meng, Patrick La Rivière
{"title":"Contrast-to-noise ratio comparison between X-ray fluorescence emission tomography and computed tomography.","authors":"Hadley DeBrosse, Giavanna Jadick, Ling Jian Meng, Patrick La Rivière","doi":"10.1117/1.JMI.11.S1.S12808","DOIUrl":"https://doi.org/10.1117/1.JMI.11.S1.S12808","url":null,"abstract":"<p><strong>Purpose: </strong>We provide a comparison of X-ray fluorescence emission tomography (XFET) and computed tomography (CT) for detecting low concentrations of gold nanoparticles (GNPs) in soft tissue and characterize the conditions under which XFET outperforms energy-integrating CT (EICT) and photon-counting CT (PCCT).</p><p><strong>Approach: </strong>We compared dose-matched Monte Carlo XFET simulations and analytical fan-beam EICT and PCCT simulations. Each modality was used to image a numerical mouse phantom and contrast-depth phantom containing GNPs ranging from 0.05% to 4% by weight in soft tissue. Contrast-to-noise ratios (CNRs) of gold regions were compared among the three modalities, and XFET's detection limit was quantified based on the Rose criterion. A partial field-of-view (FOV) image was acquired for the phantom region containing 0.05% GNPs.</p><p><strong>Results: </strong>For the mouse phantom, XFET produced superior CNR values ( <math><mrow><mi>CNRs</mi> <mo>=</mo> <mn>24.5</mn></mrow> </math> , 21.6, and 3.4) compared with CT images obtained with both energy-integrating ( <math><mrow><mi>CNR</mi> <mo>=</mo> <mn>4.4</mn></mrow> </math> , 4.6, and 1.5) and photon-counting ( <math><mrow><mi>CNR</mi> <mo>=</mo> <mn>6.5</mn></mrow> </math> , 7.7, and 2.0) detection systems. More generally, XFET outperformed CT for superficial imaging depths ( <math><mrow><mo><</mo> <mn>28.75</mn> <mtext>  </mtext> <mi>mm</mi></mrow> </math> ) for gold concentrations at and above 0.5%. XFET's surface detection limit was quantified as 0.44% for an average phantom dose of 16 mGy compatible with <i>in vivo</i> imaging. XFET's ability to image partial FOVs was demonstrated, and 0.05% gold was easily detected with an estimated dose of <math><mrow><mo>∼</mo> <mn>81.6</mn> <mtext>  </mtext> <mi>cGy</mi></mrow> </math> to a localized region of interest.</p><p><strong>Conclusions: </strong>We demonstrate a proof of XFET's benefit for imaging low concentrations of gold at superficial depths and the feasibility of XFET for <i>in vivo</i> metal mapping in preclinical imaging tasks.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12808"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11478016/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
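A minimal illustration of the CNR and Rose-criterion check described in the abstract above. The image array, region masks, and the detectability threshold (CNR >= 5, a common reading of the Rose criterion) are assumptions for illustration, not the authors' simulation pipeline.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
    signal = image[signal_mask]
    background = image[background_mask]
    return np.abs(signal.mean() - background.mean()) / background.std()

def detectable(image, signal_mask, background_mask, rose_threshold=5.0):
    """Rose criterion: treat a region as reliably detectable when CNR exceeds ~5."""
    return cnr(image, signal_mask, background_mask) >= rose_threshold

# Toy example: a noisy 64x64 "reconstruction" with a bright circular gold insert.
rng = np.random.default_rng(0)
img = rng.normal(loc=1.0, scale=0.05, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
insert = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
img[insert] += 0.4  # simulated contrast uptake

background = ~insert
print(f"CNR = {cnr(img, insert, background):.1f}, "
      f"detectable = {detectable(img, insert, background)}")
```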
Iterative clustering material decomposition aided by empirical spectral correction for photon counting detectors in micro-CT.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-12-01. Epub Date: 2024-12-27. DOI: 10.1117/1.JMI.11.S1.S12810
J Carlos Rodriguez Luna, Mini Das
{"title":"Iterative clustering material decomposition aided by empirical spectral correction for photon counting detectors in micro-CT.","authors":"J Carlos Rodriguez Luna, Mini Das","doi":"10.1117/1.JMI.11.S1.S12810","DOIUrl":"10.1117/1.JMI.11.S1.S12810","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Purpose: &lt;/strong&gt;Photon counting detectors offer promising advancements in computed tomography (CT) imaging by enabling the quantification and three-dimensional imaging of contrast agents and tissue types through simultaneous multi-energy projections from broad X-ray spectra. However, the accuracy of these decomposition methods hinges on precise composite spectral attenuation values that one must reconstruct from spectral micro-CT. Errors in such estimations could be due to effects such as beam hardening, object scatter, or detector sensor-related spectral distortions such as fluorescence. Even if accurate spectral correction is done, multi-material separation within a volume remains a challenge. Increasing the number of energy bins in material decomposition problems often comes with a significant noise penalty but with minimal decomposition benefits.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Approach: &lt;/strong&gt;We begin with an empirical spectral correction method executed in the tomographic domain that accounts for distortions in estimated spectral attenuation for each voxel. This is followed by our proposed iterative clustering material decomposition (ICMD) where clustering of voxels is used to reduce the number of basis materials to be resolved for each cluster. Using a larger number of energy bins for the clustering step shows distinct advantages in excellent classification to a larger number of clusters with accurate cluster centers when compared with the National Institute of Standards and Technology attenuation values. The decomposition step is applied to each cluster separately where each cluster has fewer basis materials compared with the entire volume. This is shown to reduce the need for the number of energy bins required in each decomposition step for the clusters. This approach significantly increases the total number of materials that can be decomposed within the volume with high accuracy and with excellent noise properties.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;Utilizing a (cadmium telluride 1-mm-thick sensor) Medipix detector with a &lt;math&gt;&lt;mrow&gt;&lt;mn&gt;55&lt;/mn&gt; &lt;mtext&gt;-&lt;/mtext&gt; &lt;mi&gt;μ&lt;/mi&gt; &lt;mi&gt;m&lt;/mi&gt;&lt;/mrow&gt; &lt;/math&gt; pitch, we demonstrate the quantitatively accurate decomposition of several materials in a phantom study, where the sample includes mixtures of soft materials such as water and poly-methyl methacrylate along with contrast-enhancing materials. We show improved accuracy and lower noise when all five energy bins were used to yield effective classification of voxels into multiple accurate fundamental clusters which was followed by the decomposition step applied to each cluster using just two energy bins. We also show an example of biological sample imaging and separating three distinct types of tissue in mice: muscle, fat, and bone. 
Our experimental results show that the combination of effective and practical spectral correction and high-dimensional data clustering enhances decomposition accuracy and reduces noise in micro-CT.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusions","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12810"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
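A minimal sketch of the cluster-then-decompose idea described in the abstract above: voxels are first grouped by their multi-energy attenuation vectors, then each cluster is decomposed against a small, cluster-specific set of basis materials by least squares. The basis spectra, cluster count, and data are synthetic placeholders, not the authors' ICMD implementation or spectral correction.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic multi-energy data: 2000 voxels x 5 energy bins.
# Rows of `basis` stand in for per-bin attenuation of three materials
# (roughly water-like, PMMA-like, contrast-like); values are illustrative only.
basis = np.array([
    [0.20, 0.18, 0.16, 0.15, 0.14],
    [0.22, 0.19, 0.17, 0.16, 0.15],
    [0.90, 0.70, 0.55, 0.45, 0.40],
])
true_fractions = rng.dirichlet(alpha=[1, 1, 1], size=2000)
voxels = true_fractions @ basis + rng.normal(scale=0.005, size=(2000, 5))

# Step 1: cluster voxels using all energy bins.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

# Step 2: per-cluster decomposition against a reduced basis
# (here: the two materials whose spectra lie closest to the cluster centre).
for k in range(3):
    members = voxels[labels == k]
    centre = members.mean(axis=0)
    nearest = np.argsort(np.linalg.norm(basis - centre, axis=1))[:2]
    sub_basis = basis[nearest]
    fractions, *_ = np.linalg.lstsq(sub_basis.T, members.T, rcond=None)
    print(f"cluster {k}: {len(members)} voxels, basis materials {nearest.tolist()}, "
          f"mean fractions {fractions.mean(axis=1).round(2)}")
```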
Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-11-09. DOI: 10.1117/1.JMI.11.6.064003
Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee
{"title":"Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.","authors":"Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee","doi":"10.1117/1.JMI.11.6.064003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064003","url":null,"abstract":"<p><strong>Purpose: </strong>Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.</p><p><strong>Approach: </strong>We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.</p><p><strong>Results: </strong>The use of pre-training weights especially masked region prediction-based weights, improved performance, and reduced computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as <math><mrow><mo>∼</mo> <mn>380</mn> <mtext>  </mtext> <mi>K</mi></mrow> </math> with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.</p><p><strong>Conclusion: </strong>We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064003"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11550486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
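Of the five pretext tasks benchmarked above, masked image region prediction is the one reported to help most; the PyTorch-style sketch below shows how that pretext loss can be set up. The tiny autoencoder, patch size, and masking ratio are illustrative assumptions and do not reproduce the authors' models or training setup.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Stand-in encoder/decoder for a CT slice; a real backbone would be far larger."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                     nn.ConvTranspose2d(16, 1, 4, 2, 1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def mask_random_patches(x, patch=16, ratio=0.4):
    """Zero out a random subset of patches; return the masked input and the keep-mask."""
    b, _, h, w = x.shape
    mask = torch.ones_like(x)
    for bi in range(b):
        for py in range(0, h, patch):
            for px in range(0, w, patch):
                if torch.rand(1).item() < ratio:
                    mask[bi, :, py:py + patch, px:px + patch] = 0
    return x * mask, mask

model = TinyAutoencoder()
slices = torch.randn(4, 1, 128, 128)          # toy stand-in for CT slices
masked, mask = mask_random_patches(slices)
recon = model(masked)
# Reconstruction loss is computed only over the masked (hidden) regions.
loss = ((recon - slices) ** 2 * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
loss.backward()
print(f"masked-region MSE: {loss.item():.3f}")
```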
Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-11-05. DOI: 10.1117/1.JMI.11.6.067501
Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman
{"title":"Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.","authors":"Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman","doi":"10.1117/1.JMI.11.6.067501","DOIUrl":"10.1117/1.JMI.11.6.067501","url":null,"abstract":"<p><strong>Purpose: </strong>Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.</p><p><strong>Approach: </strong>We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.</p><p><strong>Results: </strong>On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of <math><mrow><mn>0.34</mn> <mo>±</mo> <mn>0.15</mn></mrow> </math> (prevalence <math><mrow><mn>0.03</mn> <mo>±</mo> <mn>0.01</mn></mrow> </math> ) and <math><mrow><mn>0.47</mn> <mo>±</mo> <mn>0.1</mn></mrow> </math> (prevalence <math><mrow><mn>0.07</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> ), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. 
For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of <math><mrow><mn>0.43</mn> <mo>±</mo> <mn>0.03</mn></mrow> </math> (parent class prevalence 0.21) and <math><mrow><mn>0.94</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> (parent class prevalence 0.49) when using ground truth centroid information.</p><p><strong>Conclusions: </strong>This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537205/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
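The bounded-metric idea described above, where fine-grained predicted classes are mapped to the closest available parent class in the coarser ground truth before computing positive predictive value, can be sketched as follows. The class names, the fine-to-parent mapping, and the example labels are hypothetical; the actual matching rules are those defined in the paper.

```python
import numpy as np

# Hypothetical fine-to-parent mapping for illustration (the real mapping follows
# the parent classes available in the coarser real-H&E labels).
FINE_TO_PARENT = {
    "helper_T": "lymphocyte",
    "cytotoxic_T": "lymphocyte",
    "B_cell": "lymphocyte",
    "epithelial_progenitor": "epithelial",
    "goblet": "epithelial",
}

def bounded_ppv(pred_fine, true_parent, fine_class):
    """Upper-bound PPV: a prediction of `fine_class` counts as correct whenever the
    ground-truth parent class matches, since finer ground-truth labels are unavailable."""
    pred_fine = np.asarray(pred_fine)
    true_parent = np.asarray(true_parent)
    hits = pred_fine == fine_class
    if hits.sum() == 0:
        return float("nan")
    return float((true_parent[hits] == FINE_TO_PARENT[fine_class]).mean())

pred = ["helper_T", "helper_T", "goblet", "B_cell", "helper_T"]
truth = ["lymphocyte", "epithelial", "epithelial", "lymphocyte", "lymphocyte"]
print(f"bounded PPV (helper_T): {bounded_ppv(pred, truth, 'helper_T'):.2f}")
```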
Vector field attention for deformable image registration.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-11-06. DOI: 10.1117/1.JMI.11.6.064001
Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince
{"title":"Vector field attention for deformable image registration.","authors":"Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince","doi":"10.1117/1.JMI.11.6.064001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064001","url":null,"abstract":"<p><strong>Purpose: </strong>Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.</p><p><strong>Approach: </strong>VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.</p><p><strong>Results: </strong>We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.</p><p><strong>Conclusions: </strong>VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064001"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
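A rough sketch of the parameter-free attention step described in the abstract above: each fixed-image feature attends over moving-image features at a set of candidate displacements, and the displacement field is the similarity-weighted average of those candidates. The feature shapes, candidate-offset radius, and similarity measure here are assumptions; refer to the paper for the actual VFA module.

```python
import torch
import torch.nn.functional as F

def vector_field_attention(fixed_feat, moving_feat, radius=2):
    """
    For each location, score candidate integer displacements by feature dot-product
    similarity and return the softmax-weighted mean offset.
    fixed_feat, moving_feat: (B, C, H, W) feature maps from a shared encoder.
    Returns a (B, 2, H, W) displacement field (dy, dx).
    """
    b, c, h, w = fixed_feat.shape
    offsets, scores = [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = torch.roll(moving_feat, shifts=(dy, dx), dims=(2, 3))
            scores.append((fixed_feat * shifted).sum(dim=1))   # (B, H, W) similarity
            offsets.append(torch.tensor([dy, dx], dtype=torch.float32))
    scores = torch.stack(scores, dim=1)                        # (B, K, H, W)
    weights = F.softmax(scores / c ** 0.5, dim=1)
    offsets = torch.stack(offsets).to(fixed_feat)              # (K, 2)
    # Weighted average of candidate displacements -> dense vector field.
    return torch.einsum("bkhw,kd->bdhw", weights, offsets)

fixed = torch.randn(1, 8, 32, 32)
moving = torch.roll(fixed, shifts=(0, 1), dims=(2, 3))         # moving shifted by 1 pixel in x
flow = vector_field_attention(fixed, moving)
print(flow.shape, flow[0, :, 16, 16])                          # expect roughly one pixel along x
```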
Role of eXtended Reality use in medical imaging interpretation for pre-surgical planning and intraoperative augmentation.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-12-05. DOI: 10.1117/1.JMI.11.6.062607
Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy
{"title":"Role of eXtended Reality use in medical imaging interpretation for pre-surgical planning and intraoperative augmentation.","authors":"Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy","doi":"10.1117/1.JMI.11.6.062607","DOIUrl":"10.1117/1.JMI.11.6.062607","url":null,"abstract":"<p><strong>Purpose: </strong>eXtended Reality (XR) technology, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), is a growing field in healthcare. Each modality offers unique benefits and drawbacks for medical education, simulation, and clinical care. We review current studies to understand how XR technology uses medical imaging to enhance surgical diagnostics, planning, and performance. We also highlight current limitations and future directions.</p><p><strong>Approach: </strong>We reviewed the literature on immersive XR technologies for surgical planning and intraoperative augmentation, excluding studies on telemedicine and 2D video-based training. We cited publications highlighting XR's advantages and limitations in these categories.</p><p><strong>Results: </strong>A review of 556 papers on XR for medical imaging in surgery yielded 155 relevant papers reviewed utilizing the aid of chatGPT. XR technology may improve procedural times, reduce errors, and enhance surgical workflows. It aids in preoperative planning, surgical navigation, and real-time data integration, improving surgeon ergonomics and enabling remote collaboration. However, adoption faces challenges such as high costs, infrastructure needs, and regulatory hurdles. Despite these, XR shows significant potential in advancing surgical care.</p><p><strong>Conclusions: </strong>Immersive technologies in healthcare enhance visualization and understanding of medical conditions, promising better patient outcomes and innovative treatments but face adoption challenges such as cost, technological constraints, and regulatory hurdles. Addressing these requires strategic collaborations and improvements in image quality, hardware, integration, and training.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062607"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11618384/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142802703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of sequential multi-detector CT and cone-beam CT perfusion maps in 39 subjects with anterior circulation acute ischemic stroke due to a large vessel occlusion.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-12-03. DOI: 10.1117/1.JMI.11.6.065502
John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz
{"title":"Comparison of sequential multi-detector CT and cone-beam CT perfusion maps in 39 subjects with anterior circulation acute ischemic stroke due to a large vessel occlusion.","authors":"John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz","doi":"10.1117/1.JMI.11.6.065502","DOIUrl":"10.1117/1.JMI.11.6.065502","url":null,"abstract":"<p><strong>Purpose: </strong>The critical time between stroke onset and treatment was targeted for reduction by integrating physiological imaging into the angiography suite, potentially improving clinical outcomes. The evaluation was conducted to compare C-Arm cone beam CT perfusion (CBCTP) with multi-detector CT perfusion (MDCTP) in patients with acute ischemic stroke (AIS).</p><p><strong>Approach: </strong>Thirty-nine patients with anterior circulation AIS underwent both MDCTP and CBCTP. Imaging results were compared using an in-house algorithm for CBCTP map generation and RAPID for post-processing. Blinded neuroradiologists assessed images for quality, diagnostic utility, and treatment decision support, with non-inferiority analysis (two one-sided tests for equivalence) and inter-reviewer consistency (Cohen's kappa).</p><p><strong>Results: </strong>The mean time from MDCTP to angiography suite arrival was <math><mrow><mn>50</mn> <mo>±</mo> <mn>34</mn> <mtext>  </mtext> <mi>min</mi></mrow> </math> , and that from arrival to the first CBCTP image was <math><mrow><mn>21</mn> <mo>±</mo> <mn>8</mn> <mtext>  </mtext> <mi>min</mi></mrow> </math> . Stroke diagnosis accuracies were 96% [93%, 97%] with MDCTP and 91% [90%, 93%] with CBCTP. Cohen's kappa between observers was 0.86 for MDCTP and 0.90 for CBCTP, showing excellent inter-reader consistency. CBCTP's scores for diagnostic utility, mismatch pattern detection, and treatment decisions were noninferior to MDCTP scores (alpha = 0.05) within 20% of the range. MDCTP was slightly superior for image quality and artifact score (1.8 versus 2.3, <math><mrow><mi>p</mi> <mo><</mo> <mn>0.01</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>In this small paper, CBCTP was noninferior to MDCTP, potentially saving nearly an hour per patient if they went directly to the angiography suite upon hospital arrival.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"065502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
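The two analyses named in the approach above, inter-reader agreement via Cohen's kappa and equivalence testing via two one-sided tests (TOST), can be illustrated with standard library calls. The reader scores and equivalence margin below are made up for the example; they are not the study data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy import stats

rng = np.random.default_rng(2)

# --- Cohen's kappa between two blinded readers (made-up 5-point quality scores) ---
reader_a = rng.integers(1, 6, size=39)
reader_b = np.clip(reader_a + rng.integers(-1, 2, size=39), 1, 5)  # mostly agreeing
print(f"Cohen's kappa: {cohen_kappa_score(reader_a, reader_b):.2f}")

# --- Two one-sided tests (TOST) for equivalence of paired scores within +/- margin ---
def tost_paired(x, y, margin):
    """Equivalence of paired means: both one-sided p-values must fall below alpha."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)
    t_lower = (diff.mean() + margin) / se   # tests H0: mean diff <= -margin
    t_upper = (diff.mean() - margin) / se   # tests H0: mean diff >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return p_lower, p_upper

mdctp_scores = rng.normal(4.0, 0.5, size=39)
cbctp_scores = mdctp_scores + rng.normal(0.05, 0.3, size=39)
p1, p2 = tost_paired(cbctp_scores, mdctp_scores, margin=0.8)  # 20% of a 4-point range
print(f"TOST p-values: {p1:.3f}, {p2:.3f} -> equivalent at alpha=0.05: {max(p1, p2) < 0.05}")
```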
Radiomics for differentiation of somatic BAP1 mutation on CT scans of patients with pleural mesothelioma.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-12-11. DOI: 10.1117/1.JMI.11.6.064501
Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato
{"title":"Radiomics for differentiation of somatic <i>BAP1</i> mutation on CT scans of patients with pleural mesothelioma.","authors":"Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato","doi":"10.1117/1.JMI.11.6.064501","DOIUrl":"10.1117/1.JMI.11.6.064501","url":null,"abstract":"<p><strong>Purpose: </strong>The BRCA1-associated protein 1 (<i>BAP1</i>) gene is of great interest because somatic (<i>BAP1</i>) mutations are the most common alteration associated with pleural mesothelioma (PM). Further, germline mutation of the <i>BAP1</i> gene has been linked to the development of PM. This study aimed to explore the potential of radiomics on computed tomography scans to identify somatic <i>BAP1</i> gene mutations and assess the feasibility of radiomics in future research in identifying germline mutations.</p><p><strong>Approach: </strong>A cohort of 149 patients with PM and known somatic <i>BAP1</i> mutation status was collected, and a previously published deep learning model was used to first automatically segment the tumor, followed by radiologist modifications. Image preprocessing was performed, and texture features were extracted from the segmented tumor regions. The top features were selected and used to train 18 separate machine learning models using leave-one-out cross-validation (LOOCV). The performance of the models in distinguishing between <i>BAP1</i>-mutated (<i>BAP1+</i>) and <i>BAP1</i> wild-type (<i>BAP1-</i>) tumors was evaluated using the receiver operating characteristic area under the curve (ROC AUC).</p><p><strong>Results: </strong>A decision tree classifier achieved the highest overall AUC value of 0.69 (95% confidence interval: 0.60 and 0.77). The features selected most frequently through the LOOCV were all second-order (gray-level co-occurrence or gray-level size zone matrices) and were extracted from images with an applied transformation.</p><p><strong>Conclusions: </strong>This proof-of-concept work demonstrated the potential of radiomics to differentiate among <i>BAP1+/-</i> in patients with PM. Future work will extend these methods to the assessment of germline <i>BAP1</i> mutation status through image analysis for improved patient prognostication.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
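The evaluation pipeline described above (texture features, feature selection, a classifier scored with leave-one-out cross-validation and ROC AUC) can be sketched with scikit-learn. The random feature matrix stands in for the radiomic feature table, and the feature count, selector, and classifier settings are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Stand-in radiomics table: 149 tumors x 100 texture features, binary BAP1 status.
X = rng.normal(size=(149, 100))
y = rng.integers(0, 2, size=149)
X[y == 1, :5] += 0.6          # give a few features weak signal so AUC rises above chance

pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),          # feature selection refit inside each training fold
    DecisionTreeClassifier(max_depth=3, random_state=0),
)

scores = np.zeros(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    pipeline.fit(X[train_idx], y[train_idx])
    scores[test_idx] = pipeline.predict_proba(X[test_idx])[:, 1]

print(f"LOOCV ROC AUC: {roc_auc_score(y, scores):.2f}")
```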
Erratum: Publisher's Note: Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-12-13. DOI: 10.1117/1.JMI.11.6.069801
Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene
{"title":"Erratum: Publisher's Note: Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.","authors":"Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene","doi":"10.1117/1.JMI.11.6.069801","DOIUrl":"10.1117/1.JMI.11.6.069801","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.1117/1.JMI.11.6.062606.].</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"069801"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638976/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142830514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pseudo-spectral angle mapping for pixel and cell classification in highly multiplexed immunofluorescence images.
IF 1.9
Journal of Medical Imaging. Pub Date: 2024-11-01. Epub Date: 2024-12-10. DOI: 10.1117/1.JMI.11.6.067502
Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger
{"title":"Pseudo-spectral angle mapping for pixel and cell classification in highly multiplexed immunofluorescence images.","authors":"Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger","doi":"10.1117/1.JMI.11.6.067502","DOIUrl":"10.1117/1.JMI.11.6.067502","url":null,"abstract":"<p><strong>Purpose: </strong>The rapid development of highly multiplexed microscopy has enabled the study of cells embedded within their native tissue. The rich spatial data provided by these techniques have yielded exciting insights into the spatial features of human disease. However, computational methods for analyzing these high-content images are still emerging; there is a need for more robust and generalizable tools for evaluating the cellular constituents and stroma captured by high-plex imaging. To address this need, we have adapted spectral angle mapping-an algorithm developed for hyperspectral image analysis-to compress the channel dimension of high-plex immunofluorescence (IF) images.</p><p><strong>Approach: </strong>Here, we present pseudo-spectral angle mapping (pSAM), a robust and flexible method for determining the most likely class of each pixel in a high-plex image. The class maps calculated through pSAM yield pixel classifications which can be combined with instance segmentation algorithms to classify individual cells.</p><p><strong>Results: </strong>In a dataset of colon biopsies imaged with a 13-plex staining panel, 16 pSAM class maps were computed to generate pixel classifications. Instance segmentations of cells with Cellpose2.0 ( <math><mrow><mi>F</mi> <mn>1</mn></mrow> </math> -score of <math><mrow><mn>0.83</mn> <mo>±</mo> <mn>0.13</mn></mrow> </math> ) were combined with these class maps to provide cell class predictions for 13 cell classes. In addition, in a separate unseen dataset of kidney biopsies imaged with a 44-plex staining panel, pSAM plus Cellpose2.0 ( <math><mrow><mi>F</mi> <mn>1</mn></mrow> </math> -score of <math><mrow><mn>0.86</mn> <mo>±</mo> <mn>0.11</mn></mrow> </math> ) detected a diverse set of 38 classes of structural and immune cells.</p><p><strong>Conclusions: </strong>In summary, pSAM is a powerful and generalizable tool for evaluating high-plex IF image data and classifying cells in these high-dimensional images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
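The core of the mapping step described above reduces to the classical spectral-angle formula: each pixel's channel vector is compared with each class reference vector by the angle between them, angle = arccos(<pixel, ref> / (|pixel| |ref|)), and the pixel takes the class with the smallest angle. The sketch below uses made-up reference vectors and a random image; it is not the authors' pSAM code.

```python
import numpy as np

def spectral_angle_map(image, references):
    """
    image:      (H, W, C) multiplexed intensity stack.
    references: (K, C) one reference "pseudo-spectrum" per class.
    Returns the (H, W) argmin class map and the (H, W, K) angles in radians.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    pixels /= np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12
    refs = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
    cosines = np.clip(pixels @ refs.T, -1.0, 1.0)
    angles = np.arccos(cosines)
    class_map = angles.argmin(axis=1).reshape(image.shape[:2])
    return class_map, angles.reshape(*image.shape[:2], -1)

# Toy 13-channel image and 3 hypothetical class references.
rng = np.random.default_rng(4)
img = rng.random((64, 64, 13))
refs = rng.random((3, 13))
class_map, angles = spectral_angle_map(img, refs)
print(class_map.shape, np.bincount(class_map.ravel()))
```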