Maximizing microcalcification detectability in low-dose dedicated cone-beam breast CT: parallel cascades-based theoretical analysis
Thomas Larsen, Hsin Wu Tseng, Rachawadee Trinate, Zhiyang Fu, Jing-Tzyh Alan Chiang, Andrew Karellas, Srinivasan Vedantham
Journal of Medical Imaging 11(3):033501, May 2024. DOI: 10.1117/1.JMI.11.3.033501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095120/pdf/

Purpose: We aim to determine the combination of X-ray spectrum and detector scintillator thickness that maximizes the detectability of microcalcification clusters in dedicated cone-beam breast CT.

Approach: A cascaded linear system analysis was implemented in the spatial frequency domain and used to determine the detectability index with numerical observers for the imaging task of detecting a microcalcification cluster with 0.17 mm diameter calcium carbonate spheres. The analysis considered a thallium-doped cesium iodide scintillator coupled to a complementary metal-oxide-semiconductor detector and an analytical filtered-back-projection reconstruction algorithm. The independent system parameters considered were scintillator thickness, applied X-ray tube voltage, and X-ray beam filtration. The combination of these parameters that maximized the detectability index was considered optimal.

Results: Prewhitening, nonprewhitening, and nonprewhitening-with-eye-filter numerical observers indicate that the combination of a 0.525 to 0.6 mm thick scintillator, 70 kV, and 0.25 to 0.4 mm of added copper filtration maximized the detectability index at a mean glandular dose (MGD) of 4.5 mGy.

Conclusion: Using parallel cascaded systems analysis, the combination of parameters that could maximize the detection of microcalcifications was identified. The analysis indicates that a harder beam than that used in current practice may be beneficial for the task of detecting microcalcifications at an MGD suitable for breast cancer screening.
WhARIO: whole-slide-image-based survival analysis for patients treated with immunotherapy
Paul Tourniaire, Marius Ilie, Julien Mazières, Anna Vigier, François Ghiringhelli, Nicolas Piton, Jean-Christophe Sabourin, Frédéric Bibeau, Paul Hofman, Nicholas Ayache, Hervé Delingette
Journal of Medical Imaging 11(3):037502, May 2024. DOI: 10.1117/1.JMI.11.3.037502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11088447/pdf/

Purpose: Immune checkpoint inhibitors (ICIs) are now one of the standards of care for patients with lung cancer and have greatly improved both progression-free and overall survival, although <20% of patients respond to the treatment, and some face acute adverse events. Although a few predictive biomarkers have been integrated into the clinical workflow, they require additional modalities on top of whole-slide images and lack efficiency or robustness. In this work, we propose a biomarker of immunotherapy outcome derived solely from the analysis of histology slides.

Approach: We develop a three-step framework, combining contrastive learning and nonparametric clustering to distinguish tissue patterns within the slides, before exploiting the adjacencies of the previously defined regions to derive features and train a proportional hazards model for survival analysis. We test our approach on an in-house dataset of 193 patients from 5 medical centers and compare it with the gold-standard tumor proportion score (TPS) biomarker.

Results: In a fivefold cross-validation (CV) of the entire dataset, the whole-slide-image-based survival analysis for patients treated with immunotherapy (WhARIO) features are able to separate a low- and a high-risk group of patients with a hazard ratio (HR) of 2.29 (95% CI: 1.48 to 3.56), whereas the TPS 1% reference threshold only reaches an HR of 1.81 (95% CI: 1.21 to 2.69). Combining the two yields a higher HR of 2.60 (95% CI: 1.72 to 3.94). Additional experiments on the same dataset, where one of the five centers is excluded from the CV and used as a test set, confirm these trends.

Conclusions: Our uniquely designed WhARIO features are an efficient predictor of survival for lung cancer patients who received ICI treatment. We achieve performance similar to the current gold-standard biomarker without needing access to other imaging modalities, and show that the two can be used together to reach even better results.
Interpretation time efficiency with radiographs: a comparison study between standard 6 and 12 MP high-resolution display monitors
Mostafa Abozeed, Kevin Junck, Seth Lirette, Tom Kimpe, Albert Xthona, Srini Tridandapani, Jordan Perchik
Journal of Medical Imaging 11(3):035502, May 2024. DOI: 10.1117/1.JMI.11.3.035502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11193489/pdf/

Purpose: The purpose of this study is to compare the interpretation efficiency of radiologists reading radiographs on 6 megapixel (MP) versus 12 MP monitors.

Approach: Two sets of monitors were compared in two phases: in phase I, radiologists interpreted on a 6 MP, 30.4 in. display (Barco Coronis Fusion), and in phase II, on a 12 MP, 30.9 in. display (Barco Nio Fusion). Nine chest and three musculoskeletal radiologists each batch-interpreted an average of 115 radiographs in phase I and 115 radiographs in phase II as part of routine clinical work. Radiologists were blinded to monitor resolution.

Results: Interpretation times per radiograph were obtained from dictation logs. Interpretation time was significantly reduced on the 12 MP monitor, by 6.88 s (p = 0.002) for chest radiographs alone and by 6.76 s (8.7%) (p < 0.001) for chest and musculoskeletal radiographs combined. For musculoskeletal radiographs alone, reading times improved by 6.76 s with the 12 MP monitor, but this difference was not statistically significant (p = 0.111). Overall, interpretation of radiographs on 12 MP monitors was 8.7% faster than on 6 MP monitors.

Conclusion: Higher-resolution diagnostic displays can enable radiologists to interpret radiographs more efficiently.
Phantom study of augmented reality framework to assist epicardial punctures
Kobe Bamps, Jeroen Bertels, Lennert Minten, Alexis Puvrez, Walter Coudyzer, Stijn De Buck, Joris Ector
Journal of Medical Imaging 11(3):035002, May 2024. DOI: 10.1117/1.JMI.11.3.035002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11135927/pdf/

Purpose: The objective of this study is to evaluate the accuracy of an augmented reality (AR) system in improving guidance, accuracy, and visualization during the subxiphoidal approach for epicardial ablation.

Approach: An AR application was developed to project real-time needle trajectories and patient-specific 3D organs using the HoloLens 2. Additionally, needle tracking was implemented to offer real-time feedback to the operator, facilitating needle navigation. The AR application was evaluated through three experiments: examining overlay accuracy, assessing puncture accuracy, and performing pre-clinical evaluations on a phantom.

Results: The overlay accuracy of the AR system was 2.36 ± 2.04 mm, and the puncture accuracy with the AR system was 1.02 ± 2.41 mm. In the pre-clinical evaluation on the phantom, needle punctures with AR guidance had an error of 7.43 ± 2.73 mm, whereas punctures without AR guidance had an error of 22.62 ± 9.37 mm.

Conclusions: Overall, the AR platform has the potential to enhance the accuracy of percutaneous epicardial access for mapping and ablation of cardiac arrhythmias, thereby reducing complications and improving patient outcomes. The significance of this study lies in the potential of AR guidance to enhance the accuracy and safety of percutaneous epicardial access.
Self-supervised learning for interventional image analytics: toward robust device trackers
Saahil Islam, Venkatesh N Murthy, Dominik Neumann, Badhan Kumar Das, Puneet Sharma, Andreas Maier, Dorin Comaniciu, Florin C Ghesu
Journal of Medical Imaging 11(3):035001, May 2024. DOI: 10.1117/1.JMI.11.3.035001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11094643/pdf/

Purpose: The accurate detection and tracking of devices, such as guiding catheters in live X-ray image acquisitions, are essential prerequisites for endovascular cardiac interventions. This information is leveraged for procedural guidance, e.g., directing stent placements. To ensure procedural safety and efficacy, tracking must be highly robust, with no failures. Achieving this requires efficiently tackling challenges such as device obscuration by the contrast agent or by other external devices or wires, changes in the field of view or acquisition angle, and continuous movement due to cardiac and respiratory motion.

Approach: To overcome these challenges, we propose an approach that learns spatio-temporal features from a very large data cohort of over 16 million interventional X-ray frames using self-supervision for image sequence data. Our approach is based on a masked image modeling technique that leverages frame-interpolation-based reconstruction to learn fine inter-frame temporal correspondences. The features encoded in the resulting model are fine-tuned downstream in a lightweight model.

Results: Our approach achieves state-of-the-art performance, in particular for robustness, compared with highly optimized reference solutions (that use multi-stage feature fusion or multi-task and flow regularization). The experiments show that our method achieves a 66.31% reduction in the maximum tracking error against the reference solutions (23.20% when flow regularization is used), achieving a success score of 97.95% at a 3× faster inference speed of 42 frames per second (on GPU). In addition, we achieve a 20% reduction in the standard deviation of errors, indicating much more stable tracking performance.

Conclusions: The proposed data-driven approach achieves superior performance, particularly in robustness and speed, compared with the frequently used multi-modular approaches for device tracking. The results encourage the use of our approach in other tasks within interventional image analytics that require effective understanding of spatio-temporal semantics.
Quantifying lung fissure integrity using a three-dimensional patch-based convolutional neural network on CT images for emphysema treatment planning
Dallas K Tada, Pangyu Teng, Kalyani Vyapari, Ashley Banola, George Foster, Esteban Diaz, Grace Hyun J Kim, Jonathan G Goldin, Fereidoun Abtin, Michael McNitt-Gray, Matthew S Brown
Journal of Medical Imaging 11(3):034502, May 2024. DOI: 10.1117/1.JMI.11.3.034502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11135203/pdf/

Purpose: Evaluation of lung fissure integrity is required to determine whether emphysema patients have complete fissures and are candidates for endobronchial valve (EBV) therapy. We propose a deep learning (DL) approach to segment fissures using a three-dimensional patch-based convolutional neural network (CNN) and to quantitatively assess fissure integrity on CT in subjects with severe emphysema.

Approach: From an anonymized image database of patients with severe emphysema, 129 CT scans were used. Lung lobe segmentations were performed to identify lobar regions, and the boundaries among these regions were used to construct approximate interlobar regions of interest (ROIs). The interlobar ROIs were annotated by expert image analysts to identify voxels where the fissure was present and to create a reference ROI that excluded non-fissure voxels (where the fissure is incomplete). A CNN configured by nnU-Net was trained using 86 CT scans and their corresponding reference ROIs to segment the ROIs of the left oblique fissure (LOF), right oblique fissure (ROF), and right horizontal fissure (RHF). For an independent test set of 43 cases, fissure integrity was quantified by mapping the segmented fissure ROI along the interlobar ROI. A fissure integrity score (FIS) was then calculated as the percentage of labeled fissure voxels divided by the total voxels in the interlobar ROI. The predicted FIS (p-FIS) was quantified from the CNN output, and statistical analyses were performed comparing the p-FIS and the reference FIS (r-FIS).

Results: The mean (±SD) absolute percent error between r-FIS and p-FIS for the test set was 4.0% (±4.1%), 6.0% (±9.3%), and 12.2% (±12.5%) for the LOF, ROF, and RHF, respectively.

Conclusions: A DL approach was developed to segment lung fissures on CT images and accurately quantify the FIS. It has the potential to assist in identifying emphysema patients who would benefit from EBV treatment.
Prognostic value of different discretization parameters in ¹⁸fluorodeoxyglucose positron emission tomography radiomics of oropharyngeal squamous cell carcinoma
Breylon A Riley, Jack B Stevens, Xiang Li, Zhenyu Yang, Chunhao Wang, Yvonne M Mowery, David M Brizel, Fang-Fang Yin, Kyle J Lafata
Journal of Medical Imaging 11(2):024007, March 2024. DOI: 10.1117/1.JMI.11.2.024007. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10966359/pdf/

Purpose: We aim to interrogate the role of positron emission tomography (PET) image discretization parameters on the prognostic value of radiomic features in patients with oropharyngeal cancer.

Approach: A prospective clinical trial (NCT01908504) enrolled patients with oropharyngeal squamous cell carcinoma (N = 69; mixed HPV status) undergoing definitive radiotherapy and evaluated intra-treatment ¹⁸fluorodeoxyglucose PET as a potential imaging biomarker of early metabolic response. The primary tumor volume was manually segmented by a radiation oncologist on PET/CT images acquired two weeks into treatment (20 Gy). From this, 54 radiomic texture features were extracted. Two image discretization techniques, fixed bin number (FBN) and fixed bin size (FBS), were considered to evaluate systematic changes in the bin number ({32, 64, 128, 256} gray levels) and bin size ({0.10, 0.15, 0.22, 0.25} bin widths). For each discretization-specific radiomic feature space, a LASSO-regularized logistic regression model was independently trained to predict residual and/or recurrent disease. Model training was based on Monte Carlo cross-validation with a 20% testing hold-out, 50 permutations, and minor-class up-sampling to account for imbalanced outcome data. Performance differences among the discretization-specific models were quantified via receiver operating characteristic curve analysis. A final parameter-optimized logistic regression model was developed by incorporating features computed under different discretization settings into the same model.

Results: FBN outperformed FBS in predicting residual and/or recurrent disease. The four FBN models achieved AUC values of 0.63, 0.61, 0.65, and 0.62 for 32, 64, 128, and 256 gray levels, respectively. By contrast, the average AUC of the four FBS models was 0.53. The parameter-optimized model, comprising the features joint entropy (FBN = 64) and information measure correlation 1 (FBN = 128), achieved an AUC of 0.70. Kaplan-Meier analyses identified these features as associated with disease-free survival (p = 0.0158 and p = 0.0180, respectively; log-rank test).

Conclusions: Our findings suggest that the prognostic value of individual radiomic features may depend on feature-specific discretization parameter settings.
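As background for the FBN-versus-FBS comparison above, a minimal sketch of the two discretization schemes is shown below: fixed bin number rescales the ROI intensity range into a fixed count of gray levels, while fixed bin size uses a constant bin width in intensity units. The SUV values, bin number, and bin width are illustrative assumptions consistent with the settings listed in the abstract.

```python
# Minimal sketch of FBN and FBS discretization of a PET tumor ROI.
import numpy as np

def discretize_fbn(roi_suv, n_bins=64):
    """Fixed bin number: map the ROI's intensity range onto n_bins gray levels."""
    lo, hi = roi_suv.min(), roi_suv.max()
    levels = np.floor(n_bins * (roi_suv - lo) / (hi - lo)) + 1
    return np.clip(levels, 1, n_bins).astype(int)

def discretize_fbs(roi_suv, bin_width=0.25, min_suv=0.0):
    """Fixed bin size: constant bin width (in SUV units) from a fixed minimum."""
    return (np.floor((roi_suv - min_suv) / bin_width) + 1).astype(int)

roi = np.random.default_rng(2).uniform(1.0, 12.0, size=(16, 16, 8))  # toy SUV values
gl_fbn = discretize_fbn(roi, n_bins=64)        # 64 gray levels, one of the FBN settings
gl_fbs = discretize_fbs(roi, bin_width=0.25)   # 0.25 bin width, one of the FBS settings
```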
Automated segmentation of the left ventricle from MRI with a fully convolutional network to investigate CTRCD in breast cancer patients
Julia Kar, Michael V Cohen, Samuel A McQuiston, Teja Poorsala, Christopher M Malozzi
Journal of Medical Imaging 11(2):024003, March 2024. DOI: 10.1117/1.JMI.11.2.024003. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10950093/pdf/

Purpose: The goal of this study was to develop a fully convolutional network (FCN) tool to automatically segment the left-ventricular (LV) myocardium in displacement encoding with stimulated echoes MRI. The segmentation results are used for LV chamber quantification and strain analyses in breast cancer patients susceptible to cancer therapy-related cardiac dysfunction (CTRCD).

Approach: A DeepLabV3+ FCN with a ResNet-101 backbone was custom-designed to conduct chamber quantification on 45 female breast cancer datasets (23 training, 11 validation, and 11 test sets). LV structural parameters and LV ejection fraction (LVEF) were measured, and myocardial strains were estimated with the radial point interpolation method. Myocardial classification was validated against quantization-based ground truth with computations of accuracy, Dice score, average perpendicular distance (APD), Hausdorff distance, and others. Additional validations were conducted with equivalence tests and Cronbach's alpha (C-α) intraclass correlation coefficients between the FCN and a vendor tool on chamber quantification and myocardial strain computations.

Results: Myocardial classification results against ground truth were Dice = 0.89, APD = 2.4 mm, and accuracy = 97% for the validation set and Dice = 0.90, APD = 2.5 mm, and accuracy = 97% for the test set. The confidence intervals (CI) and two one-sided t-test results of the equivalence tests between the FCN and the vendor tool were CI = -1.36% to 2.42%, p < 0.001 for LVEF (58 ± 5% versus 57 ± 6%), and CI = -0.71% to 0.63%, p < 0.001 for longitudinal strain (-15 ± 2% versus -15 ± 3%).

Conclusions: The validation results were equivalent to the vendor-tool-based parameter estimates, showing that accurate LV chamber quantification followed by strain analysis for CTRCD investigation can be achieved with the proposed FCN methodology.
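As a minimal illustration of one of the validation metrics reported above, the sketch below computes the Dice overlap between a predicted and a reference myocardial mask; the masks are placeholders, not study data.

```python
# Minimal sketch of the Dice overlap between predicted and reference masks.
import numpy as np

def dice_score(pred, truth):
    """pred, truth: boolean masks of the LV myocardium on the same image grid."""
    intersection = np.count_nonzero(pred & truth)
    return 2.0 * intersection / (np.count_nonzero(pred) + np.count_nonzero(truth))

truth = np.zeros((128, 128), dtype=bool); truth[40:90, 40:90] = True   # toy reference
pred = np.zeros_like(truth); pred[42:92, 40:90] = True                 # toy prediction
print(f"Dice = {dice_score(pred, truth):.2f}")
```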
Deep conditional generative model for longitudinal single-slice abdominal computed tomography harmonization
Xin Yu, Qi Yang, Yucheng Tang, Riqiang Gao, Shunxing Bao, Leon Y Cai, Ho Hin Lee, Yuankai Huo, Ann Zenobia Moore, Luigi Ferrucci, Bennett A Landman
Journal of Medical Imaging 11(2):024008, March 2024. DOI: 10.1117/1.JMI.11.2.024008. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10987005/pdf/

Purpose: Two-dimensional single-slice abdominal computed tomography (CT) provides a detailed tissue map with high resolution, allowing quantitative characterization of relationships between health conditions and aging. However, longitudinal analysis of body composition changes using these scans is difficult due to positional variation between slices acquired in different years, which leads to different organs/tissues being captured.

Approach: To address this issue, we propose C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a pre-defined vertebral-level slice by estimating structural changes in the latent space.

Results: Our experiments on 2608 volumetric CT scans from two in-house datasets and 50 subjects from the 2015 Multi-Atlas Abdomen Labeling Challenge Beyond the Cranial Vault (BTCV) dataset demonstrate that our model can generate high-quality images that are realistic and similar. We further evaluate our method's capability to harmonize longitudinal positional variation in 1033 subjects from the Baltimore Longitudinal Study of Aging dataset, which contains longitudinal single abdominal slices, and confirm that our method can harmonize slice positional variance in terms of visceral fat area.

Conclusion: This approach provides a promising direction for mapping slices from different vertebral levels to a target slice and reducing positional variance for single-slice longitudinal analysis. The source code is available at: https://github.com/MASILab/C-SliceGen.
{"title":"Welcome to the second issue of the <i>Journal of Medical Imaging</i> (JMI) for the 2024 year!","authors":"","doi":"10.1117/1.JMI.11.2.020101","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.020101","url":null,"abstract":"<p><p>Editor-in-Chief Bennett A. Landman (Vanderbilt University) provides opening remarks for the current issue of JMI, with specific commentary on medical imaging community \"challenges\" and their potential to coalesce creative energies.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"020101"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11057461/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140873053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}