MIDRC mRALE Mastermind Grand Challenge: AI to predict COVID severity on chest radiographs.
Authors: Samuel G Armato, Karen Drukker, Lubomir Hadjiiski, Carol C Wu, Jayashree Kalpathy-Cramer, George Shih, Maryellen L Giger, Natalie Baughan, Benjamin Bearce, Adam E Flanders, Robyn L Ball, Kyle J Myers, Heather M Whitney, and the MIDRC Grand Challenge Working Group
DOI: 10.1117/1.JMI.12.2.024505 | Journal of Medical Imaging 12(2):024505, March 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12014941/pdf/

Purpose: The Medical Imaging and Data Resource Center (MIDRC) mRALE Mastermind Grand Challenge fostered the development of artificial intelligence (AI) techniques for the automated assignment of mRALE (modified radiographic assessment of lung edema) scores to portable chest radiographs from patients known to have COVID-19.

Approach: The challenge used 2079 training cases from the publicly available MIDRC data commons, with validation and test cases sampled from not-yet-public MIDRC cases that were inaccessible to challenge participants. The reference-standard mRALE scores for the challenge cases were established by a pool of 22 radiologist annotators. Using the MedICI challenge platform, participants submitted their trained algorithms encapsulated in Docker containers. The challenge organizers evaluated the algorithms on 814 test cases with two performance assessment metrics: quadratic-weighted kappa and prediction probability concordance.

Results: Nine AI algorithms were submitted to the challenge for assessment against the test set. The algorithm that demonstrated the highest agreement with the reference standard had a quadratic-weighted kappa of 0.885 and a prediction probability concordance of 0.875. Substantial variability was observed both in the mRALE scores assigned by the annotators and in those output by the AI algorithms.

Conclusions: The MIDRC mRALE Mastermind Grand Challenge revealed the potential of AI to assess COVID-19 severity from portable chest radiographs, demonstrating promising performance against the reference standard. The observed variability in mRALE scores highlights the challenges of standardizing severity assessment. These findings contribute to ongoing efforts to develop AI technologies for potential use in clinical practice and offer insights for the enhancement of COVID-19 severity assessment.

Acoustic reflector-enabled forward-viewing ultrasound image-guided access.
Authors: Yichuan Tang, Ashiqur Rahaman, Araceli B Gonzalez, Issac Abouaf, Aditya Malik, Igor Sorokin, Haichong Zhang
DOI: 10.1117/1.JMI.12.2.025002 | Journal of Medical Imaging 12(2):025002, March 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11981581/pdf/

Purpose: Existing ultrasound (US) image-guided needle access methods used in various surgical procedures (such as percutaneous nephrolithotomy) face the challenge of keeping the needle tip visible during insertion because alignment between the US image and the needle is not guaranteed. We propose a needle insertion mechanism with reflector-integrated US imaging, in which the US image plane and the needle are mechanically aligned and the needle is inserted in a forward-viewing style to provide more intuitive access.

Approach: An acoustic reflector redirects the US image plane while the needle passes through a slit in the middle of the reflector, so that the needle path aligns with the US image plane. Both the bracket holding the needle and the acoustic reflector are rotatable, giving clinicians the flexibility to search for the optimal needle insertion orientation. The effect of the slit in the reflector on the quality of post-reflection ultrasound images was evaluated. Needle tip visibility was evaluated in water and in ex vivo beef tissue. Needle access accuracy was evaluated using point targets embedded in gelatin, with errors between the needle tip and the point targets estimated from X-ray images.

Results: The slit in the reflector has limited effect on post-reflection image quality. The needle tip was visible in water and in ex vivo tissue, and its visibility was quantified using a signal-to-noise ratio. Needle access experiments showed an average insertion error of less than 3 mm.

Conclusions: Our results demonstrate the clinical potential of the reflector-enabled forward-viewing US image-guided access mechanism.

Influence of early through late fusion on pancreas segmentation from imperfectly registered multimodal magnetic resonance imaging.
Authors: Lucas W Remedios, Han Liu, Samuel W Remedios, Lianrui Zuo, Adam M Saunders, Shunxing Bao, Yuankai Huo, Alvin C Powers, John Virostko, Bennett A Landman
DOI: 10.1117/1.JMI.12.2.024008 | Journal of Medical Imaging 12(2):024008, March 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12032765/pdf/

Purpose: Combining different types of medical imaging data through multimodal fusion promises better segmentation of anatomical structures such as the pancreas. Strategic implementation of multimodal fusion could improve our ability to study diseases such as diabetes. However, where to perform fusion in deep learning models remains an open question. It is unclear whether there is a single best location to fuse information when analyzing pairs of imperfectly aligned images, or whether the optimal fusion location depends on the specific model being used. Two main challenges in using multiple imaging modalities to study the pancreas are that (1) the pancreas and surrounding abdominal anatomy are deformable, making it difficult to consistently align the images, and (2) breathing during image acquisition further complicates the alignment between multimodal images. Even after using state-of-the-art deformable image registration techniques specifically designed to align abdominal images, multimodal images of the abdomen are often not perfectly aligned. We examine how the choice of fusion point, ranging from early in the image processing pipeline to later stages, affects segmentation of the pancreas on imperfectly registered multimodal magnetic resonance (MR) images.

Approach: Our dataset consists of 353 pairs of T2-weighted (T2w) and T1-weighted (T1w) abdominal MR images from 163 subjects, with accompanying pancreas segmentation labels drawn mainly from the T2w images. Because the T2w images were acquired in an interleaved manner across two breath holds and the T1w images on one breath hold, three different breath holds affected the alignment of each image pair. We used deeds, a state-of-the-art deformable abdominal image registration method, to align the image pairs. We then trained a collection of basic UNets with different fusion points, spanning early to late layers in the model, to assess how early through late fusion influenced segmentation performance on imperfectly aligned images. To investigate whether performance differences at key fusion points generalize to other architectures, we expanded our experiments to nnUNet.

Results: The single-modality T2w baseline using a basic UNet had a median Dice score of 0.766, whereas the same baseline with nnUNet achieved 0.824. For each fusion approach, we analyzed differences in performance with Dice residuals, computed by subtracting the baseline score from the fusion score for each datapoint. For the basic UNet, the best fusion approach was early/mid fusion occurring in the middle of the encoder, with a median Dice residual of +0.012 relative to the baseline. For the nnUNet, the best fusion approach was early fusion through naïve image concatenation before the model, with a median Dice residual of …

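The study summarizes fusion performance with "Dice residuals", i.e., the per-case difference between a fusion model's Dice score and the baseline's, reported as a median. A minimal sketch of that computation follows; array names and shapes are illustrative assumptions, not the authors' code.

```python
# Sketch: per-case Dice scores and the median "Dice residual" of a fusion model
# relative to a single-modality baseline (fusion Dice minus baseline Dice per case).
import numpy as np

def dice(pred: np.ndarray, label: np.ndarray) -> float:
    pred, label = pred.astype(bool), label.astype(bool)
    denom = pred.sum() + label.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, label).sum() / denom

def median_dice_residual(fusion_preds, baseline_preds, labels) -> float:
    # Each argument is an iterable of binary segmentation volumes, one per case.
    residuals = [dice(f, y) - dice(b, y)
                 for f, b, y in zip(fusion_preds, baseline_preds, labels)]
    return float(np.median(residuals))
```
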
Two-phase radial endobronchial ultrasound bronchoscopy registration.
Authors: Wennan Zhao, Trevor Kuhlengel, Qi Chang, Vahid Daneshpajooh, Yuxuan He, Austin Kao, Rebecca Bascom, Danish Ahmad, Yu Maw Htwe, Jennifer Toth, Thomas Schaer, Leslie Brewer, Rachel Hilliard, William E Higgins
DOI: 10.1117/1.JMI.12.2.025001 | Journal of Medical Imaging 12(2):025001, March 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11889395/pdf/

Purpose: Lung cancer remains the leading cause of cancer death. This has created a critical need for managing peripheral regions of interest (ROIs) in the lungs, whether for cancer diagnosis, staging, or treatment. The state-of-the-art approach for assessing peripheral ROIs is bronchoscopy. To perform the procedure, the physician first navigates the bronchoscope to a preplanned airway, aided by an assisted bronchoscopy system, and then confirms the ROI's specific location and performs the requisite clinical task. Many ROIs, however, are extraluminal and invisible in the bronchoscope's field of view. For such ROIs, current practice dictates using a supplemental imaging method, such as fluoroscopy, cone-beam computed tomography (CT), or radial endobronchial ultrasound (R-EBUS), to gather additional ROI location information. Unfortunately, fluoroscopy and cone-beam CT require substantial radiation and lengthen procedure time. As an alternative, R-EBUS is a safer real-time option involving no radiation. Regrettably, existing assisted bronchoscopy systems offer no guidance for R-EBUS confirmation, forcing the physician to resort to an unguided guess-and-check approach for R-EBUS probe placement, an approach that can produce placement errors exceeding 30 deg and result in missing many ROIs. Thus, because of variations in physician skill, biopsy success rates using R-EBUS for ROI confirmation have varied greatly, from 31% to 80%. This situation obliges the physician to turn to a radiation-based modality to gather sufficient information for ROI confirmation. We propose a two-phase registration method that provides guidance for R-EBUS probe placement.

Approach: After the physician navigates the bronchoscope to the airway near a target ROI, the two-phase registration method begins by registering a virtual bronchoscope to the real bronchoscope. A virtual 3D R-EBUS probe model is then registered to the real R-EBUS probe shape depicted in the bronchoscopic video using an iterative region-based alignment method drawing on level-set-based optimization. This synchronizes the guidance system to the target ROI site, and the physician can then perform the R-EBUS scan to confirm the ROI.

Results: We validated the method's efficacy for localizing extraluminal ROIs with a series of three studies. First, in a controlled phantom study, the mean accumulated position and direction errors (accounting for both registration phases) were 1.94 mm and 3.74 deg (equivalent to a 1.30 mm position error for a 20 mm biopsy needle), respectively. Next, in a live animal study, these errors were 2.81 mm and 4.79 deg (2.41 mm biopsy needle error), respectively. For 100% of the ROIs considered in these two studies, the method enabled visualization of an ROI via R-EBUS in under 3 min per ROI. Finally, initial operating-room tests on lung cancer patients indicated the method's efficacy, …

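The abstract expresses a probe direction error as an equivalent tip position error for a 20 mm biopsy needle. One simple geometric reading of that conversion is a tip offset of L·sin(angle) over a needle of length L, which matches the phantom-study figure (3.74 deg over 20 mm gives roughly 1.30 mm); whether the authors used exactly this conversion for every reported value is an assumption.

```python
# Sketch: one geometric interpretation of converting a direction error into a
# needle-tip position error over a needle of length L (tip offset ~ L * sin(angle)).
# Illustrative assumption, not necessarily the paper's exact conversion.
import math

def needle_tip_offset_mm(direction_error_deg: float, needle_length_mm: float = 20.0) -> float:
    return needle_length_mm * math.sin(math.radians(direction_error_deg))

# A 3.74 deg direction error over a 20 mm needle gives ~1.30 mm of tip offset.
print(f"{needle_tip_offset_mm(3.74):.2f} mm")
```
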
Conditioning generative latent optimization for sparse-view computed tomography image reconstruction.
Authors: Thomas Braure, Delphine Lazaro, David Hateau, Vincent Brandon, Kévin Ginsburger
DOI: 10.1117/1.JMI.12.2.024004 | Journal of Medical Imaging 12(2):024004, March 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11961077/pdf/

Purpose: Concern over the dose delivered during computed tomography (CT) scans has encouraged the use of sparser sets of X-ray projections, which severely degrade reconstructions from conventional methods. Although most deep learning approaches benefit from large supervised datasets, they cannot generalize to new acquisition protocols (geometry, source/detector specifications). To address this issue, we developed a method that works without training data and independently of the experimental setup. In addition, our model may be initialized on small unsupervised datasets to enhance reconstructions.

Approach: We propose conditioned generative latent optimization (cGLO), in which a decoder reconstructs multiple slices simultaneously with a shared objective. It is tested on full-dose sparse-view CT for varying projection sets: (a) without training data, against Deep Image Prior (DIP), and (b) with training datasets of multiple sizes, against state-of-the-art score-based generative models (SGMs). Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics are used to quantify reconstruction quality.

Results: cGLO demonstrates better SSIM than SGMs (between +0.034 and +0.139) and has an increasing advantage for smaller datasets, reaching a +6.06 dB PSNR gain. Our strategy also outperforms DIP by at least +1.52 dB in PSNR and peaks at +3.15 dB with fewer angles. Moreover, cGLO does not create artifacts or structural deformations, in contrast to DIP and SGMs.

Conclusions: We propose a parsimonious and robust reconstruction technique offering similar or better performance compared with state-of-the-art methods on full-dose sparse-view CT. Our strategy could be readily applied to any imaging reconstruction task without any assumption about the acquisition protocol or the quantity of available data.

{"title":"MD-SA2: optimizing Segment Anything 2 for multimodal, depth-aware brain tumor segmentation in sub-Saharan populations.","authors":"Benjamin Li, Kai Ding, Dimah Dera","doi":"10.1117/1.JMI.12.2.024007","DOIUrl":"10.1117/1.JMI.12.2.024007","url":null,"abstract":"<p><strong>Purpose: </strong>Machine learning algorithms are emerging as valuable aides for radiologists in medical image segmentation due to their accuracy and speed. However, existing approaches, including both conventional machine learning and Segment Anything (SA)-based models, face challenges with the complex, multimodal, and varied quality of magnetic resonance imaging (MRI) scan images used for brain tumor segmentation. To address these challenges, we propose MD-SA2, adapting Segment Anything 2 (SA2) to medical image segmentation and introducing a lightweight U-Net \"aggregator\" model.</p><p><strong>Approach: </strong>Various modifications are incorporated to enhance segmentation accuracy and throughput. SA2 is first customized and fine-tuned for greater efficiency than the original Segment Anything. Slices from multiple image modalities are concatenated for input into the image encoder to improve the delineation of tumor subtypes. In addition, a lightweight U-Net aggregator model is integrated with SA2 to introduce depth awareness. The 2023 BraTS-Africa dataset, containing low-resolution MRI images from 60 sub-Saharan patients, is used to evaluate the algorithm's performance.</p><p><strong>Results: </strong>MD-SA2 attains notable improvements over existing approaches under challenging data circumstances. It achieves a tenfold cross-validated, statistically significant improvement over current methods with a 0.7893 Dice coefficient. It also reaches a higher Intersection over Union and lower 95% Hausdorff distance metrics. An ablation study verifies the impact of key components.</p><p><strong>Conclusions: </strong>MD-SA2 displays strong potential for supporting the diagnosis and treatment planning of brain tumors. It may contribute to narrowing health inequities, especially in medically underserved areas where data quantity and quality limitations reduce the efficacy of traditional automated approaches.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 2","pages":"024007"},"PeriodicalIF":1.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12014943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144003508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dimensionality reduction in 3D causal deep learning for neuroimage generation: an evaluation study.
Authors: Erik Y Ohara, Vibujithan Vigneshwaran, Raissa Souza, Finn G Vamosi, Matthias Wilms, Nils D Forkert
DOI: 10.1117/1.JMI.12.2.024506 | Journal of Medical Imaging 12(2):024506, March 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12014944/pdf/

Purpose: Causal deep learning (DL) using normalizing flows allows the generation of true counterfactual images, which is relevant for many medical applications such as explainability of decisions, image harmonization, and in-silico studies. However, such models are computationally expensive when applied directly to high-resolution 3D images and therefore require image dimensionality reduction (DR) to process the data efficiently. The goal of this work was to compare how different DR methods affect counterfactual neuroimage generation.

Approach: Five DR techniques [2D principal component analysis (PCA), 2.5D PCA, 3D PCA, autoencoder, and Vector Quantised-Variational AutoEncoder] were applied to 23,692 3D brain images to create low-dimensional representations for causal DL model training. Convolutional neural networks were used to quantitatively evaluate age and sex changes on the counterfactual neuroimages. Age alterations were measured using the mean absolute error (MAE), whereas sex changes were assessed via classification accuracy.

Results: The 2.5D PCA technique achieved the lowest MAE of 4.16 years when changing the age variable of an original image. When sex was changed, the autoencoder embedding led to the highest classification accuracy of 97.84%, while also significantly affecting the age predictions, increasing the MAE to 5.24 years. Overall, 3D PCA provided the best balance, with an age prediction MAE of 4.57 years while maintaining 94.01% sex classification accuracy when altering the age variable, and 94.73% sex classification accuracy with the lowest age prediction MAE (3.84 years) when altering the sex variable.

Conclusions: 3D PCA appears to be the best-suited DR method for causal neuroimage analysis.

{"title":"2024 List of Reviewers.","authors":"","doi":"10.1117/1.JMI.12.1.010102","DOIUrl":"https://doi.org/10.1117/1.JMI.12.1.010102","url":null,"abstract":"<p><p>Thanks to reviewers who served the Journal of Medical Imaging in 2024.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 1","pages":"010102"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11753298/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pericoronary adipose tissue feature analysis in computed tomography calcium score images in comparison to coronary computed tomography angiography.
Authors: Yingnan Song, Hao Wu, Juhwan Lee, Justin Kim, Ammar Hoori, Tao Hu, Vladislav Zimin, Mohamed Makhlouf, Sadeer Al-Kindi, Sanjay Rajagopalan, Chun-Ho Yun, Chung-Lieh Hung, David L Wilson
DOI: 10.1117/1.JMI.12.1.014503 | Journal of Medical Imaging 12(1):014503, January 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759132/pdf/

Purpose: We investigated the feasibility and advantages of using non-contrast CT calcium score (CTCS) images to assess pericoronary adipose tissue (PCAT) and its association with major adverse cardiovascular events (MACE). PCAT features from coronary computed tomography angiography (CCTA) have been shown to be associated with cardiovascular risk but are potentially confounded by iodine. If PCAT in CTCS images can be similarly analyzed, it would avoid this issue and enable its inclusion in formal risk assessment from readily available, low-cost CTCS images.

Approach: To identify coronary arteries in CTCS images, which show only subtle visual evidence of vessels, we registered CTCS with paired CCTA images having coronary labels. We developed an "axial-disk" method that defines regions for analyzing PCAT features in the three main coronary arteries. We analyzed hand-crafted and radiomic features using univariate and multivariate logistic regression prediction of MACE and compared results against those from CCTA.

Results: Registration accuracy was sufficient to enable the identification of PCAT regions in CTCS images. Motion and beam-hardening artifacts were often prevalent in "high-contrast" CCTA but not in CTCS. Mean HU and volume were increased in both CTCS and CCTA for the MACE group. There were significant positive correlations between some CTCS and CCTA features, suggesting that similar characteristics were obtained. Using hand-crafted/radiomic features from CTCS and CCTA, AUCs were 0.83/0.79 and 0.83/0.77, respectively, whereas the Agatston score gave an AUC of 0.73.

Conclusions: Preliminary results indicate that PCAT features can be assessed from the three main coronary arteries in non-contrast CTCS images with performance characteristics that are at least comparable to CCTA.

Evaluation of the flying focal spot technology in a wide-angle digital breast tomosynthesis system.
Authors: Katrien Houbrechts, Nicholas Marshall, Lesley Cockmartin, Hilde Bosmans
DOI: 10.1117/1.JMI.12.S1.S13009 | Journal of Medical Imaging 12(Suppl 1):S13009, January 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11616485/pdf/

Purpose: We characterize the flying focal spot (FFS) technology in digital breast tomosynthesis (DBT), designed to overcome source motion blurring.

Approach: A wide-angle DBT system with continuous gantry and focus motion ("uncompensated focus") and a system with FFS were compared for image sharpness and lesion detectability. The modulation transfer function (MTF) was assessed as a function of height in the projections and reconstructed images, along with lesion detectability using the contrast detail phantom for mammography (CDMAM) and the L1 phantom.

Results: For the uncompensated focus system, the spatial frequency at the 25% MTF value (f25%), measured at 2, 4, and 6 cm in DBT projections, fell by 35%, 49%, and 59%, respectively, in the tube-travel direction compared with the FFS system. There was no significant difference in f25% between the front-back and tube-travel directions for the FFS unit. The in-plane MTF in the tube-travel direction also improved with the FFS technology. The threshold gold thickness (Tt) for the 0.16-mm-diameter discs of the CDMAM phantom improved for the FFS system in DBT mode, especially at greater heights above the table; Tt at 45 and 65 mm improved by 16% and 24%, respectively, compared with the uncompensated focus system. In addition, improvements in calcification and mass detection in a structured background were observed for DBT and synthetic mammography. The FFS system demonstrated faster scan times (4.8 s versus 21.7 s), potentially reducing patient motion artifacts.

Conclusions: The FFS technology offers isotropic resolution, improved small-detail detectability, and faster scan times in DBT mode compared with the traditional continuous gantry and focus motion approach.
