Journal of Medical Imaging: Latest Articles

Improving radiological quantification of levator hiatus features with measures informed by statistical shape modeling.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-07-01 Epub Date: 2024-08-10 DOI: 10.1117/1.JMI.11.4.045001
Vincenzia S Vargo, Megan R Routzong, Pamela A Moalli, Ghazaleh Rostaminia, Steven D Abramowitch
Purpose: The measures that traditionally describe the levator hiatus (LH) are straightforward and reliable; however, they were not specifically designed to capture significant differences. Statistical shape modeling (SSM) was used to quantify LH shape variation across reproductive-age women and identify novel variables associated with LH size and shape.

Approach: A retrospective study of pelvic MRIs from 19 nulliparous, 32 parous, and 12 pregnant women was performed. The LH was segmented in the plane of minimal LH dimensions, and SSM was implemented. LH size was defined by the cross-sectional area, maximal transverse diameter, and anterior-posterior (A-P) diameter. Novel SSM-guided variables were defined by regions of greatest variation. Multivariate analysis of variance (MANOVA) evaluated group differences, and correlations determined relationships between size and shape variables.

Results: Overall shape (p < 0.001), SSM mode 2 (oval to T-shape, p = 0.002), mode 3 (rounder to broader anterior shape, p = 0.004), and maximal transverse diameter (p = 0.003) significantly differed between groups. Novel anterior and posterior transverse diameters were identified at 14% and 79% of the A-P length. Anterior transverse diameter and maximal transverse diameter were strongly correlated (r = 0.780, p < 0.001), while posterior transverse diameter and maximal transverse diameter were weakly correlated (r = 0.398, p = 0.001).

Conclusions: The traditional maximal transverse diameter generally corresponded with SSM findings but cannot describe anterior and posterior variation independently. The novel anterior and posterior transverse diameters represent both size and shape variation, can be easily calculated alongside traditional measures, and are more sensitive to subtle and local LH variation. Thus, they have a greater ability to serve as predictive and diagnostic parameters.

J Med Imaging. 2024;11(4):045001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11316399/pdf/
Citations: 0
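The anterior and posterior transverse diameters above are taken at fixed fractions (14% and 79%) of the A-P length. As a rough illustration of how such a fraction-based width could be computed from a segmented 2D contour (a hypothetical sketch, not the authors' code; the point format, axis convention, and function name are all assumptions):

```python
import numpy as np

def transverse_diameter(contour, fraction):
    """Width of a closed 2D contour at a given fraction of its A-P extent.

    contour: (N, 2) array of (x, y) points; y is assumed to run along the
             anterior-posterior axis.
    fraction: position along the A-P extent, 0 at one end, 1 at the other.
    """
    y_min, y_max = contour[:, 1].min(), contour[:, 1].max()
    y_cut = y_min + fraction * (y_max - y_min)
    xs = []
    n = len(contour)
    for i in range(n):
        (x0, y0), (x1, y1) = contour[i], contour[(i + 1) % n]
        # keep edges that cross the horizontal line y = y_cut
        if (y0 - y_cut) * (y1 - y_cut) <= 0 and y0 != y1:
            t = (y_cut - y0) / (y1 - y0)
            xs.append(x0 + t * (x1 - x0))
    return max(xs) - min(xs) if len(xs) >= 2 else 0.0
```

Under these assumptions, `transverse_diameter(contour, 0.14)` and `transverse_diameter(contour, 0.79)` would give the anterior and posterior diameters, respectively.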
Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-07-01 Epub Date: 2024-07-09 DOI: 10.1117/1.JMI.11.4.045501
Devi S Klein, Srijita Karmakar, Aditya Jonnalagadda, Craig K Abbey, Miguel P Eckstein
Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors.

Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC).

Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133).

Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.

J Med Imaging. 2024;11(4):045501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232702/pdf/
Citations: 0
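The ΔAUC values above compare detection performance with and without the CADe aid. A minimal, generic way to compute an AUC from observer rating scores (a standard Mann-Whitney formulation; this is an illustration, not the study's analysis code):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (signal, no-signal) score pairs ranked correctly,
    counting ties as half-correct."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size)
```

A ΔAUC is then simply `auc(...)` with the aid minus `auc(...)` without it.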
Learning three-dimensional aortic root assessment based on sparse annotations.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-07-01 Epub Date: 2024-07-30 DOI: 10.1117/1.JMI.11.4.044504
Johanna Brosig, Nina Krüger, Inna Khasyanova, Isaac Wamala, Matthias Ivantsits, Simon Sündermann, Jörg Kempfert, Stefan Heldmann, Anja Hennemuth
Purpose: Analyzing the anatomy of the aorta and left ventricular outflow tract (LVOT) is crucial for risk assessment and planning of transcatheter aortic valve implantation (TAVI). A comprehensive analysis of the aortic root and LVOT requires the extraction of the patient-individual anatomy via segmentation. Deep learning has shown good performance on various segmentation tasks. If this is formulated as a supervised problem, large amounts of annotated data are required for training. Therefore, minimizing the annotation complexity is desirable.

Approach: We propose two-dimensional (2D) cross-sectional annotation and point cloud-based surface reconstruction to train a fully automatic 3D segmentation network for the aortic root and the LVOT. Our sparse annotation scheme enables easy and fast training data generation for tubular structures such as the aortic root. From the segmentation results, we derive clinically relevant parameters for TAVI planning.

Results: The proposed 2D cross-sectional annotation results in high inter-observer agreement [Dice similarity coefficient (DSC): 0.94]. The segmentation model achieves a DSC of 0.90 and an average surface distance of 0.96 mm. Our approach achieves an aortic annulus maximum diameter difference between prediction and annotation of 0.45 mm (inter-observer variance: 0.25 mm).

Conclusions: The presented approach facilitates reproducible annotations. The annotations allow for training accurate segmentation models of the aortic root and LVOT. The segmentation results facilitate reproducible and quantifiable measurements for TAVI planning.

J Med Imaging. 2024;11(4):044504. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11287057/pdf/
Citations: 0
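The DSC values reported above measure volumetric overlap between segmentations. A minimal sketch of the standard Dice similarity coefficient for binary masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for binary masks."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    # two empty masks are conventionally treated as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```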
Characterizing patterns of diffusion tensor imaging variance in aging brains.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-07-01 Epub Date: 2024-08-24 DOI: 10.1117/1.JMI.11.4.044007
Chenyu Gao, Qi Yang, Michael E Kim, Nazirah Mohd Khairi, Leon Y Cai, Nancy R Newlin, Praitayini Kanakaraj, Lucas W Remedios, Aravind R Krishnan, Xin Yu, Tianyuan Yao, Panpan Zhang, Kurt G Schilling, Daniel Moyer, Derek B Archer, Susan M Resnick, Bennett A Landman
Purpose: As large analyses merge data across sites, a deeper understanding of variance in statistical assessment across the sources of data becomes critical for valid analyses. Diffusion tensor imaging (DTI) exhibits spatially varying and correlated noise, so care must be taken with distributional assumptions. Here, we characterize the role of physiology, subject compliance, and the interaction of the subject with the scanner in the understanding of DTI variability, as modeled in the spatial variance of derived metrics in homogeneous regions.

Approach: We analyze DTI data from 1035 subjects in the Baltimore Longitudinal Study of Aging, with ages ranging from 22.4 to 103 years old. For each subject, up to 12 longitudinal sessions were conducted. We assess the variance of DTI scalars within regions of interest (ROIs) defined by four segmentation methods and investigate the relationships between the variance and covariates, including baseline age, time from the baseline (referred to as "interval"), motion, sex, and whether it is the first or second scan in the session.

Results: Covariate effects are heterogeneous and bilaterally symmetric across ROIs. Inter-session interval is positively related (p ≪ 0.001) to FA variance in the cuneus and occipital gyrus, but negatively (p ≪ 0.001) in the caudate nucleus. Males show significantly (p ≪ 0.001) higher FA variance in the right putamen, thalamus, body of the corpus callosum, and cingulate gyrus. In 62 of 176 ROIs defined by the Eve type-1 atlas, an increase in motion is associated (p < 0.05) with a decrease in FA variance. Head motion increases during the rescan of DTI (Δμ = 0.045 mm per volume).

Conclusions: The effects of each covariate on DTI variance and their relationships across ROIs are complex. Ultimately, we encourage researchers to include estimates of variance when sharing data and to consider models of heteroscedasticity in analysis. This work provides a foundation for study planning to account for regional variations in metric variance.

J Med Imaging. 2024;11(4):044007. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344569/pdf/
Citations: 0
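The quantity analyzed above is the spatial variance of fractional anisotropy (FA) within an ROI. A sketch of the standard FA formula from tensor eigenvalues, plus an ROI variance helper (assumes precomputed eigenvalue maps; the function names are illustrative, not from the paper):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from diffusion tensor eigenvalues; evals has shape (..., 3).

    FA = sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2) / sqrt(2 (l1^2 + l2^2 + l3^2))
    """
    l1, l2, l3 = evals[..., 0], evals[..., 1], evals[..., 2]
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    # avoid division by zero in background voxels
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

def roi_variance(fa_map, roi_mask):
    """Spatial variance of an FA map over a binary ROI mask."""
    return fa_map[roi_mask].var()
```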
Transformer enhanced autoencoder rendering cleaning of noisy optical coherence tomography images.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-06-01 Epub Date: 2024-04-30 DOI: 10.1117/1.JMI.11.3.034008
Hanya Ahmed, Qianni Zhang, Robert Donnan, Akram Alomainy
Purpose: Optical coherence tomography (OCT) is an emerging imaging tool in healthcare, with common applications in ophthalmology for the detection of retinal diseases as well as in other medical domains. The noise in OCT images presents a great challenge, as it hinders the clinician's ability to diagnose in extensive detail.

Approach: In this work, a region-based, deep-learning denoising framework is proposed for adaptive cleaning of noisy OCT-acquired images. The core of the framework is a hybrid deep-learning model named transformer enhanced autoencoder rendering (TEAR). Attention gates are utilized to ensure focus on denoising the foreground and to remove the background. TEAR is designed to remove the different types of noise artifacts commonly present in OCT images and to enhance the visual quality.

Results: Extensive quantitative evaluations are performed to evaluate the performance of TEAR and compare it against both deep-learning and traditional state-of-the-art denoising algorithms. For a dental dataset, the proposed method improved the peak signal-to-noise ratio (PSNR) to 27.9 dB, the contrast-to-noise ratio (CNR) to 6.3 dB, the structural similarity (SSIM) to 0.9, and the equivalent number of looks (ENL) to 120.8. For a retinal dataset, the corresponding values are 24.6 dB, 14.2 dB, 0.64, and 1038.7, respectively.

Conclusions: The results show that the approach verifiably removes speckle noise and achieves superior quality over several well-known denoisers.

J Med Imaging. 2024;11(3):034008. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058346/pdf/
Citations: 0
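Of the metrics above, PSNR has the simplest closed form: 10·log10(MAX² / MSE). A generic implementation (illustrative only; the paper's exact evaluation settings, such as the peak value used, are not specified here):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(estimate, dtype=float)) ** 2)
    # identical images have zero error, hence infinite PSNR
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```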
Open-source graphical user interface for the creation of synthetic skeletons for medical image analysis.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-14 DOI: 10.1117/1.JMI.11.3.036001
Christian Herz, Nicolas Vergnet, Sijie Tian, Abdullah H Aly, Matthew A Jolley, Nathanael Tran, Gabriel Arenas, Andras Lasso, Nadav Schwartz, Kathleen E O'Neill, Paul A Yushkevich, Alison M Pouch
Purpose: Deformable medial modeling is an inverse-skeletonization approach to representing anatomy in medical images, which can be used for statistical shape analysis and assessment of patient-specific anatomical features such as locally varying thickness. It involves deforming a pre-defined synthetic skeleton, or template, to anatomical structures of the same class. The lack of software for creating such skeletons has been a limitation to more widespread use of deformable medial modeling. Therefore, the objective of this work is to present an open-source user interface (UI) for the creation of synthetic skeletons for a range of medial modeling applications in medical imaging.

Approach: A UI for interactive design of synthetic skeletons was implemented in 3D Slicer, an open-source medical image analysis application. The steps in synthetic skeleton design include importation and skeletonization of a 3D segmentation, followed by interactive 3D point placement and triangulation of the medial surface such that the desired branching configuration of the anatomical structure's medial axis is achieved. Synthetic skeleton design was evaluated in five clinical applications. Compatibility of the synthetic skeletons with open-source software for deformable medial modeling was tested, and the representational accuracy of the deformed medial models was evaluated.

Results: Three users designed synthetic skeletons of anatomies with various topologies: the placenta, aortic root wall, mitral valve, cardiac ventricles, and the uterus. The skeletons were compatible with skeleton-first and boundary-first software for deformable medial modeling. The fitted medial models achieved good representational accuracy with respect to the 3D segmentations from which the synthetic skeletons were generated.

Conclusions: Synthetic skeleton design has been a practical challenge in leveraging deformable medial modeling for new clinical applications. This work demonstrates an open-source UI for user-friendly design of synthetic skeletons for anatomies with a wide range of topologies.

J Med Imaging. 2024;11(3):036001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11092146/pdf/
Citations: 0
Fiberscopic pattern removal for optimal coverage in 3D bladder reconstructions of fiberscope cystoscopy videos.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-17 DOI: 10.1117/1.JMI.11.3.034002
Rachel Eimen, Halina Krzyzanowska, Kristen R Scarpato, Audrey K Bowden
Purpose: In the current clinical standard of care, cystoscopic video is not routinely saved because it is cumbersome to review. Instead, clinicians rely on brief procedure notes and still frames to manage bladder pathology. Preserving discarded data via 3D reconstructions, which are convenient to review, has the potential to improve patient care. However, many clinical videos are collected by fiberscopes, which are lower cost but induce a pattern on frames that inhibits 3D reconstruction. The aim of our study is to remove the honeycomb-like pattern present in fiberscope-based cystoscopy videos to improve the quality of 3D bladder reconstructions.

Approach: Our study introduces an algorithm that applies a notch-filtering mask in the Fourier domain to remove the honeycomb-like pattern from clinical cystoscopy videos collected by fiberscope, as a preprocessing step to 3D reconstruction. We produce 3D reconstructions with the video before and after removing the pattern, which we compare with a metric termed the area of reconstruction coverage (A_RC), defined as the surface area (in pixels) of the reconstructed bladder. All statistical analyses use paired t-tests.

Results: Preprocessing using our method for pattern removal enabled reconstruction for all (n = 5) cystoscopy videos included in the study and produced a statistically significant increase in bladder coverage (p = 0.018).

Conclusions: This algorithm for pattern removal increases bladder coverage in 3D reconstructions and automates mask generation and application, which could aid implementation in time-starved clinical environments. The creation and use of 3D reconstructions can improve documentation of cystoscopic findings for future surgical navigation, thus improving patient treatment and outcomes.

J Med Imaging. 2024;11(3):034002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11099938/pdf/
Citations: 0
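A minimal sketch of the kind of Fourier-domain notch filtering described above (a generic illustration, not the authors' algorithm: notch locations are supplied manually here, whereas the paper automates mask generation, and even image dimensions are assumed for the symmetric-notch bookkeeping):

```python
import numpy as np

def remove_periodic_pattern(frame, notch_centers, radius=3):
    """Suppress a periodic (e.g., honeycomb-like) pattern in a 2D frame
    by zeroing small disks around its peaks in the shifted Fourier spectrum.

    notch_centers: (row, col) peak locations in the fftshift-ed spectrum;
    each notch is paired with its conjugate-symmetric counterpart.
    """
    F = np.fft.fftshift(np.fft.fft2(frame))
    rows, cols = frame.shape
    y, x = np.ogrid[:rows, :cols]
    mask = np.ones((rows, cols), dtype=float)
    for cy, cx in notch_centers:
        for sy, sx in ((cy, cx), (rows - cy, cols - cx)):
            mask[(y - sy) ** 2 + (x - sx) ** 2 <= radius ** 2] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Applied frame by frame, this leaves the anatomy (low-frequency content) intact while removing the narrow spectral peaks that the fiber bundle introduces.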
Networking Science and Technology: Highlights from JMI Issue 3.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-06-26 DOI: 10.1117/1.JMI.11.3.030101
Bennett Landman
The editorial introduces JMI Volume 11, Issue 3, looks ahead to SPIE Medical Imaging, and highlights the journal's policy on conference article submission.

J Med Imaging. 2024;11(3):030101. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11200196/pdf/
Citations: 0
Can processed images be used to determine the modulation transfer function and detective quantum efficiency?
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-31 DOI: 10.1117/1.JMI.11.3.033502
Lisa M Garland, Haechan J Yang, Paul A Picot, Jesse Tanguay, Ian A Cunningham
Purpose: The modulation transfer function (MTF) and detective quantum efficiency (DQE) of x-ray detectors are key Fourier metrics of performance, valid only for linear and shift-invariant (LSI) systems and generally measured following IEC guidelines, which require the use of raw (unprocessed) image data. However, many detectors incorporate processing in the imaging chain that is difficult or impossible to disable, raising questions about the practical relevance of MTF and DQE testing. We investigate the impact of convolution-based embedded processing on MTF and DQE measurements.

Approach: We use an impulse-sampled notation, consistent with a cascaded-systems analysis in the spatial and spatial-frequency domains, to determine the impact of discrete convolution (DC) on measured MTF and DQE following IEC guidelines.

Results: We show that digital systems remain LSI if we acknowledge that both image pixel values and convolution kernels represent scaled Dirac δ-functions with an implied sinc convolution of image data. This enables use of the Fourier transform (FT) to determine the impact on presampling MTF and DQE measurements.

Conclusions: It is concluded that: (i) the MTF of DC is always an unbounded cosine series; (ii) the slanted-edge method yields the true presampling MTF, even when using processed images, with processing appearing as an analytic filter with cosine-series MTF applied to raw presampling image data; (iii) the DQE is unaffected by discrete-convolution-based processing, with a possible exception near zero points in the presampling MTF; and (iv) the FT of the impulse-sampled notation is equivalent to the Z transform of image data.

J Med Imaging. 2024;11(3):033502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140480/pdf/
Citations: 0
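The claim that the MTF of a discrete convolution is a cosine series can be illustrated directly: for a symmetric 3-tap kernel [a, b, a] with tap spacing Δ, the transfer function is b + 2a·cos(2πfΔ). A small numeric sketch (illustrative only; the variable names and the normalization at f = 0 are my assumptions, not the paper's notation):

```python
import numpy as np

def kernel_mtf(kernel, freqs, pitch=1.0):
    """|Fourier transform| of a discrete convolution kernel versus frequency,
    normalized to 1 at f = 0; for real symmetric kernels this reduces to a
    cosine series in f."""
    kernel = np.asarray(kernel, dtype=float)
    # taps centered on zero so a symmetric kernel has zero phase
    offsets = np.arange(len(kernel)) - (len(kernel) - 1) / 2.0
    response = np.array([
        np.sum(kernel * np.exp(-2j * np.pi * f * offsets * pitch))
        for f in np.atleast_1d(freqs)
    ])
    return np.abs(response) / np.abs(np.sum(kernel))
```

For the smoothing kernel [0.25, 0.5, 0.25] at unit pitch, this gives 0.5 + 0.5·cos(2πf), which falls to zero at the Nyquist frequency f = 0.5, consistent with the cosine-series result.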
Automatic lesion detection for narrow-band imaging bronchoscopy.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-30 DOI: 10.1117/1.JMI.11.3.036002
Vahid Daneshpajooh, Danish Ahmad, Jennifer Toth, Rebecca Bascom, William E Higgins
Purpose: Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway-exam video collected at our institution.

Approach: Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions.

Results: Tests drawing on 23 patient NBI airway-exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed 93% sensitivity and 86% specificity for lesion detection; this compares favorably to the 80% sensitivity and 84% specificity achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach.

Conclusion: The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.

J Med Imaging. 2024;11(3):036002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11138083/pdf/
Citations: 0
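The sensitivity and specificity figures above come from standard detection counts. As a reminder of the definitions (a generic sketch, not the study's evaluation code):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN), the fraction of true lesions detected;
    specificity = TN / (TN + FP), the fraction of lesion-free cases
    correctly passed."""
    return tp / (tp + fn), tn / (tn + fp)
```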