Journal of Medical Imaging — Latest Articles

Validation of ultrasound velocimetry and computational fluid dynamics for flow assessment in femoral artery stenotic disease
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-16 DOI: 10.1117/1.jmi.11.3.037001
L. van de Velde, M. van Helvert, Stefan Engelhard, Ashkan Ghanbarzadeh-Dagheyan, Hadi Mirgolbabaee, J. Voorneveld, G. Lajoinie, M. Versluis, Michel M. P. J. Reijnen, E. Groot Jebbink
No abstract available.
Citations: 0

Open-source graphical user interface for the creation of synthetic skeletons for medical image analysis
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-14 DOI: 10.1117/1.JMI.11.3.036001
Christian Herz, Nicolas Vergnet, Sijie Tian, Abdullah H Aly, Matthew A Jolley, Nathanael Tran, Gabriel Arenas, Andras Lasso, Nadav Schwartz, Kathleen E O'Neill, Paul A Yushkevich, Alison M Pouch

Purpose: Deformable medial modeling is an inverse skeletonization approach to representing anatomy in medical images, which can be used for statistical shape analysis and assessment of patient-specific anatomical features such as locally varying thickness. It involves deforming a pre-defined synthetic skeleton, or template, to anatomical structures of the same class. The lack of software for creating such skeletons has been a limitation to more widespread use of deformable medial modeling. Therefore, the objective of this work is to present an open-source user interface (UI) for the creation of synthetic skeletons for a range of medial modeling applications in medical imaging.

Approach: A UI for interactive design of synthetic skeletons was implemented in 3D Slicer, an open-source medical image analysis application. The steps in synthetic skeleton design include importation and skeletonization of a 3D segmentation, followed by interactive 3D point placement and triangulation of the medial surface such that the desired branching configuration of the anatomical structure's medial axis is achieved. Synthetic skeleton design was evaluated in five clinical applications. Compatibility of the synthetic skeletons with open-source software for deformable medial modeling was tested, and representational accuracy of the deformed medial models was evaluated.

Results: Three users designed synthetic skeletons of anatomies with various topologies: the placenta, aortic root wall, mitral valve, cardiac ventricles, and the uterus. The skeletons were compatible with skeleton-first and boundary-first software for deformable medial modeling. The fitted medial models achieved good representational accuracy with respect to the 3D segmentations from which the synthetic skeletons were generated.

Conclusions: Synthetic skeleton design has been a practical challenge in leveraging deformable medial modeling for new clinical applications. This work demonstrates an open-source UI for user-friendly design of synthetic skeletons for anatomies with a wide range of topologies.

Citations: 0

Networking Science and Technology: Highlights from JMI Issue 3
IF 1.9
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-06-26 DOI: 10.1117/1.JMI.11.3.030101
Bennett Landman

The editorial introduces JMI Volume 11, Issue 3, looks ahead to SPIE Medical Imaging, and highlights the journal's policy on conference article submission.

Citations: 0

Fiberscopic pattern removal for optimal coverage in 3D bladder reconstructions of fiberscope cystoscopy videos
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-17 DOI: 10.1117/1.JMI.11.3.034002
Rachel Eimen, Halina Krzyzanowska, Kristen R Scarpato, Audrey K Bowden

Purpose: In the current clinical standard of care, cystoscopic video is not routinely saved because it is cumbersome to review. Instead, clinicians rely on brief procedure notes and still frames to manage bladder pathology. Preserving discarded data via 3D reconstructions, which are convenient to review, has the potential to improve patient care. However, many clinical videos are collected by fiberscopes, which are lower cost but induce a pattern on frames that inhibits 3D reconstruction. The aim of our study is to remove the honeycomb-like pattern present in fiberscope-based cystoscopy videos to improve the quality of 3D bladder reconstructions.

Approach: Our study introduces an algorithm that applies a notch filtering mask in the Fourier domain to remove the honeycomb-like pattern from clinical cystoscopy videos collected by fiberscope, as a preprocessing step to 3D reconstruction. We produce 3D reconstructions with the video before and after removing the pattern, which we compare with a metric termed the area of reconstruction coverage (A_RC), defined as the surface area (in pixels) of the reconstructed bladder. All statistical analyses use paired t-tests.

Results: Preprocessing using our method for pattern removal enabled reconstruction for all (n = 5) cystoscopy videos included in the study and produced a statistically significant increase in bladder coverage (p = 0.018).

Conclusions: This algorithm for pattern removal increases bladder coverage in 3D reconstructions and automates mask generation and application, which could aid implementation in time-starved clinical environments. The creation and use of 3D reconstructions can improve documentation of cystoscopic findings for future surgical navigation, thus improving patient treatment and outcomes.

Citations: 0

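The Approach above hinges on a notch-filter mask applied in the Fourier domain, where a periodic honeycomb-like pattern appears as off-center spectral peaks. A minimal sketch of that idea, not the authors' implementation; the peak-detection threshold and the fixed low-frequency exclusion radius are illustrative assumptions:

```python
import numpy as np

def remove_periodic_pattern(frame, notch_radius=3, peak_threshold=4.0):
    """Suppress a periodic (e.g., honeycomb-like) artifact by notch
    filtering: detect off-center peaks in the 2D Fourier magnitude
    spectrum and zero a small disk around each one."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    mag = np.abs(F)
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    yy, xx = np.mgrid[:F.shape[0], :F.shape[1]]
    dist_from_dc = np.hypot(yy - cy, xx - cx)
    # Candidate peaks: well above the median magnitude, but outside the
    # low-frequency region that carries the underlying image content.
    peaks = (mag > peak_threshold * np.median(mag)) & (dist_from_dc > 10)
    mask = np.ones_like(mag)
    for py, px in zip(*np.nonzero(peaks)):
        mask[np.hypot(yy - py, xx - px) <= notch_radius] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

In practice the threshold and exclusion radius would be tuned to the fiber-bundle spacing of the particular fiberscope rather than fixed as here.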
Computerized assessment of background parenchymal enhancement on breast dynamic contrast-enhanced MRI including electronic lesion removal
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-02 DOI: 10.1117/1.JMI.11.3.034501
Lindsay Douglas, Jordan Fuhrman, Qiyuan Hu, Alexandra Edwards, Deepa Sheth, Hiroyuki Abe, Maryellen Giger

Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE) MRI. Tumor enhancement may be included within the visual assessment of BPE, thus inflating BPE estimation due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment breasts, electronically remove lesions, and calculate scores to estimate BPE levels.

Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions; the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both breasts, the affected breast, and the unaffected breast, before and after lesion removal. BPE scores were calculated from various projection images, including MIPs or average intensity projections of first- or second-postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic analysis was performed to determine the predictive value of computed scores in BPE level classification tasks relative to radiologist ratings.

Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p < 0.001). Scores from all breast regions performed significantly better than guessing (p < 0.025 from the z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second-postcontrast subtraction MIP after lesion removal performed significantly better than random guessing across various viewing projections and DCE time points.

Conclusions: Results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MRI without the influence of lesion enhancement.

Citations: 0

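The Approach above computes BPE scores from maximum and average intensity projections of postcontrast-minus-precontrast subtraction volumes. A minimal sketch of those two projections plus a simple mean-enhancement score; the function names and the scoring rule are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def enhancement_projections(post, pre, axis=0):
    """Project a postcontrast-minus-precontrast subtraction volume to 2D:
    returns the maximum intensity projection (MIP) and the average
    intensity projection (AIP) along the chosen axis."""
    sub = post.astype(float) - pre.astype(float)
    return sub.max(axis=axis), sub.mean(axis=axis)

def bpe_score(projection, breast_mask):
    """A simple BPE surrogate: mean enhancement inside the segmented
    breast region (hypothetical scoring rule, for illustration only)."""
    return float(projection[breast_mask].mean())
```

The study compares several such scores (different projections, time points, and breast regions) against radiologist ratings; this sketch shows only the projection step common to all of them.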
Can processed images be used to determine the modulation transfer function and detective quantum efficiency?
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-31 DOI: 10.1117/1.JMI.11.3.033502
Lisa M Garland, Haechan J Yang, Paul A Picot, Jesse Tanguay, Ian A Cunningham

Purpose: The modulation transfer function (MTF) and detective quantum efficiency (DQE) of x-ray detectors are key Fourier metrics of performance, valid only for linear and shift-invariant (LSI) systems and generally measured following IEC guidelines requiring the use of raw (unprocessed) image data. However, many detectors incorporate processing in the imaging chain that is difficult or impossible to disable, raising questions about the practical relevance of MTF and DQE testing. We investigate the impact of convolution-based embedded processing on MTF and DQE measurements.

Approach: We use an impulse-sampled notation, consistent with a cascaded-systems analysis in spatial and spatial-frequency domains, to determine the impact of discrete convolution (DC) on measured MTF and DQE following IEC guidelines.

Results: We show that digital systems remain LSI if we acknowledge that both image pixel values and convolution kernels represent scaled Dirac δ-functions with an implied sinc convolution of image data. This enables use of the Fourier transform (FT) to determine the impact on presampling MTF and DQE measurements.

Conclusions: It is concluded that: (i) the MTF of DC is always an unbounded cosine series; (ii) the slanted-edge method yields the true presampling MTF, even when using processed images, with processing appearing as an analytic filter with cosine-series MTF applied to raw presampling image data; (iii) the DQE is unaffected by discrete-convolution-based processing, with a possible exception near zero-points in the presampling MTF; and (iv) the FT of the impulse-sampled notation is equivalent to the Z-transform of image data.

Citations: 0

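Conclusion (i) above states that the MTF of a discrete convolution is an unbounded cosine series. A small sketch, assuming a symmetric odd-length kernel, that evaluates this series directly (illustrative, not code from the paper):

```python
import numpy as np

def dc_mtf(kernel, pixel_pitch, freqs):
    """MTF of a symmetric discrete convolution kernel, evaluated as a
    cosine series over signed tap offsets k:
        T(u) = |sum_k h_k cos(2*pi*k*pitch*u)| / |sum_k h_k|.
    Unlike a presampling MTF, this series is periodic in frequency,
    i.e., it never decays to zero ("unbounded")."""
    taps = np.asarray(kernel, dtype=float)
    offsets = np.arange(len(taps)) - len(taps) // 2  # signed tap positions
    u = np.asarray(freqs, dtype=float)
    series = sum(h * np.cos(2 * np.pi * k * pixel_pitch * u)
                 for h, k in zip(taps, offsets))
    return np.abs(series) / abs(taps.sum())
```

For a [1, 2, 1]/4 smoothing kernel at 0.1 mm pitch the series reduces to cos²(π·pitch·u): zero at the 5 cycles/mm Nyquist frequency but back to unity at 10 cycles/mm, which is exactly the periodic, non-decaying behavior the conclusion describes.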
Automatic lesion detection for narrow-band imaging bronchoscopy
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-30 DOI: 10.1117/1.JMI.11.3.036002
Vahid Daneshpajooh, Danish Ahmad, Jennifer Toth, Rebecca Bascom, William E Higgins

Purpose: Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway exam video collected at our institution.

Approach: Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions.

Results: Tests drawing on 23 patient NBI airway exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed a 93% sensitivity and 86% specificity for lesion detection; this compares favorably to a sensitivity and specificity of 80% and 84% achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach.

Conclusion: The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.

Citations: 0

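The Results above quote 93% sensitivity and 86% specificity. For reference, these detection metrics come from confusion-matrix counts over lesion-present and lesion-free cases; a minimal sketch (hypothetical helper, not the study's evaluation code):

```python
def detection_metrics(y_true, y_pred):
    """Sensitivity and specificity from parallel binary labels
    (1 = lesion present). Sensitivity = TP/(TP+FN), computed on
    lesion cases; specificity = TN/(TN+FP), on lesion-free cases."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```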
Multiresolution semantic segmentation of biological structures in digital histopathology
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-09 DOI: 10.1117/1.JMI.11.3.037501
Sina Salsabili, Adrian D C Chan, Eranga Ukwatta

Purpose: Semantic segmentation in high-resolution histopathology whole slide images (WSIs) is an important fundamental task in various pathology applications. Convolutional neural networks (CNNs) are the state-of-the-art approach for image segmentation. A patch-based CNN approach is often employed because of the large size of WSIs; however, segmentation performance is sensitive to the field-of-view and resolution of the input patches, and balancing the trade-offs is challenging when there are drastic size variations in the segmented structures. We propose a multiresolution semantic segmentation approach, which is capable of addressing the threefold trade-off between field-of-view, computational efficiency, and spatial resolution in histopathology WSIs.

Approach: We propose a two-stage multiresolution approach for semantic segmentation of histopathology WSIs of mouse lung tissue and human placenta. In the first stage, we use four different CNNs to extract the contextual information from input patches at four different resolutions. In the second stage, we use another CNN to aggregate the information extracted in the first stage and generate the final segmentation masks.

Results: The proposed method reported 95.6%, 92.5%, and 97.1% in our single-class placenta dataset and 97.1%, 87.3%, and 83.3% in our multiclass lung dataset for pixel-wise accuracy, mean Dice similarity coefficient, and mean positive predictive value, respectively.

Conclusions: The proposed multiresolution approach demonstrated high accuracy and consistency in the semantic segmentation of biological structures of different sizes in our single-class placenta and multiclass lung histopathology WSI datasets. Our approach can potentially be used in automated analysis of biological structures, facilitating clinical research in histopathology applications.

Citations: 0

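The Results above report a mean Dice similarity coefficient among their metrics. As a reference for that metric, a minimal sketch of Dice overlap between binary masks (generic definition, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient for binary masks:
    2*|A intersect B| / (|A| + |B|). Ranges from 0 (no overlap)
    to 1 (identical masks)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

For multiclass segmentation, the per-class Dice values are typically averaged to give the mean coefficient quoted in the Results.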
Graph neural networks for automatic extraction and labeling of the coronary artery tree in CT angiography
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-15 DOI: 10.1117/1.JMI.11.3.034001
Nils Hampe, Sanne G M van Velzen, Jelmer M Wolterink, Carlos Collet, José P S Henriques, Nils Planken, Ivana Išgum

Purpose: Automatic comprehensive reporting of coronary artery disease (CAD) requires anatomical localization of the coronary artery pathologies. To address this, we propose a fully automatic method for extraction and anatomical labeling of the coronary artery tree using deep learning.

Approach: We include coronary CT angiography (CCTA) scans of 104 patients from two hospitals. Reference annotations of coronary artery tree centerlines and labels of coronary artery segments were assigned to 10 segment classes following the American Heart Association guidelines. Our automatic method first extracts the coronary artery tree from CCTA, automatically placing a large number of seed points and simultaneously tracking vessel-like structures from these points. Thereafter, the extracted tree is refined to retain coronary arteries only, which are subsequently labeled with a multi-resolution ensemble of graph convolutional neural networks that combine geometrical and image intensity information from adjacent segments.

Results: The method is evaluated on its ability to extract the coronary tree and to label its segments, by comparing the automatically derived and the reference labels. A separate assessment of tree extraction yielded an F1 score of 0.85. Evaluation of our combined method leads to an average F1 score of 0.74.

Conclusions: The results demonstrate that our method enables fully automatic extraction and anatomical labeling of coronary artery trees from CCTA scans. Therefore, it has the potential to facilitate detailed automatic reporting of CAD.

Citations: 0
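The Results above report average F1 scores over coronary segment labels. A sketch of a per-class F1 averaged across classes; the exact averaging used in the paper may differ, and the segment names below are illustrative AHA-style labels:

```python
def macro_f1(y_true, y_pred, classes):
    """Per-class F1 between reference and predicted labels, averaged
    over classes: F1_c = 2*TP / (2*TP + FP + FN) for each class c."""
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        if 2 * tp + fp + fn == 0:
            continue  # class absent from both labelings; skip it
        scores.append(2 * tp / (2 * tp + fp + fn))
    return sum(scores) / len(scores)
```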