Journal of Medical Imaging: Latest Articles

Bidirectional teaching between lightweight multi-view networks for intestine segmentation from CT volume.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-03-31 DOI: 10.1117/1.JMI.12.2.024003
Qin An, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Masahiro Oda, Kensaku Mori
Purpose: We present a semi-supervised method for intestine segmentation to assist clinicians in diagnosing intestinal diseases. Accurate segmentation is essential for planning treatments for conditions such as intestinal obstruction. Although fully supervised learning performs well with abundant labeled data, the complexity of the intestine's spatial structure makes labeling time-intensive, resulting in limited labeled data. We propose a 3D segmentation network with a bidirectional teaching strategy to enhance segmentation accuracy using this limited dataset.

Method: The proposed semi-supervised method segments the intestine from computed tomography (CT) volumes using bidirectional teaching, where two backbones with different initial weights are trained simultaneously to generate pseudo-labels and employ unlabeled data, mitigating the challenge of limited labeled data. Intestine segmentation is further complicated by complex spatial features. To address this, we propose a lightweight multi-view symmetric network, which uses small-sized convolutional kernels instead of large ones to reduce parameters and capture multi-scale features from diverse perceptual fields, enhancing learning ability.

Results: We evaluated the proposed method on 59 CT volumes and repeated all experiments five times. The average Dice of the proposed method was 80.45%, the average precision was 84.12%, and the average recall was 78.84%.

Conclusions: The proposed method can effectively utilize large-scale unlabeled data with pseudo-labels, which is crucial in reducing the effect of limited labeled data in medical image segmentation. Furthermore, we assign different weights to the pseudo-labels to improve their reliability. The results show that the method achieves competitive performance compared with previous methods.

Journal of Medical Imaging 12(2): 024003. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11957399/pdf/
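As a rough illustration of the bidirectional-teaching idea, the PyTorch sketch below has two small 3D backbones with different initializations supervise each other on unlabeled volumes through confidence-weighted pseudo-labels. The backbone, the confidence weighting, and all shapes are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of bidirectional teaching (not the authors' code): two
# backbones with different initial weights exchange confidence-weighted
# pseudo-labels on unlabeled CT patches.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_backbone():
    # Stand-in for the paper's lightweight multi-view network: a tiny 3D CNN
    # built from small 3x3x3 kernels, as the abstract describes.
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 2, 1),  # two classes: background / intestine
    )

net_a, net_b = make_backbone(), make_backbone()  # different random init

def bidirectional_loss(unlabeled):
    logits_a, logits_b = net_a(unlabeled), net_b(unlabeled)
    # Each network's prediction becomes the other's pseudo-label, weighted by
    # per-voxel confidence (a stand-in for the paper's pseudo-label weighting).
    conf_a, pseudo_a = logits_a.softmax(1).max(1)
    conf_b, pseudo_b = logits_b.softmax(1).max(1)
    loss_b = (conf_a.detach()
              * F.cross_entropy(logits_b, pseudo_a.detach(), reduction="none")).mean()
    loss_a = (conf_b.detach()
              * F.cross_entropy(logits_a, pseudo_b.detach(), reduction="none")).mean()
    return loss_a + loss_b

volume = torch.randn(1, 1, 16, 32, 32)  # toy unlabeled CT patch
print(bidirectional_loss(volume).item())
```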
Citations: 0
DCEF-AVNet: multi-scale feature fusion and attention mechanism-guided brain tumor segmentation network.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-03-20 DOI: 10.1117/1.JMI.12.2.024503
Linlin Wang, Tong Zhang, Chuanyun Wang, Qian Gao, Zhongyi Li, Jing Shao
Purpose: Accurate and efficient automatic segmentation of brain tumors is critical for diagnosis and treatment. However, the diversity in the appearance, location, and shape of brain tumors and their subregions, coupled with complex boundaries, presents significant challenges. We aim to improve segmentation accuracy by addressing limitations in V-Net, including insufficient utilization of multi-scale features and difficulties in managing complex spatial relationships and long-range dependencies.

Approach: We propose an improved network structure, dynamic convolution enhanced fusion axial V-Net (DCEF-AVNet), which integrates an enhanced feature fusion module and axial attention mechanisms. The feature fusion module integrates dynamic convolution with a redesigned skip connection strategy to effectively combine multi-scale features, reducing feature inconsistencies and improving representation capability. Axial attention mechanisms are introduced at encoder-decoder connections to manage spatial relationships and alleviate long-range dependency issues. The network was evaluated using the BraTS2021 dataset, with performance measured in terms of Dice coefficients and Hausdorff distances.

Results: DCEF-AVNet achieved Dice coefficients of 92.49%, 91.35%, and 91.96% for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, respectively, significantly outperforming baseline methods. The model also demonstrated robust performance across multiple runs, with consistently low standard deviations in metrics.

Conclusions: The integration of dynamic convolution, enhanced feature fusion, and axial attention mechanisms enables DCEF-AVNet to deliver superior segmentation accuracy and robustness. These results underscore its potential for advancing automated brain tumor segmentation and improving clinical decision-making.

Journal of Medical Imaging 12(2): 024503. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11925075/pdf/
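Axial attention, used here at the encoder-decoder connections, factorizes full self-attention into one spatial axis at a time, which keeps long-range modeling tractable on 3D feature maps. A minimal PyTorch sketch of the general mechanism (not the published DCEF-AVNet code) follows:

```python
# Minimal sketch of 3D axial attention: full self-attention is applied along
# one spatial axis at a time instead of over all voxels jointly.
import torch
import torch.nn as nn

class AxialAttention3D(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x, axis):
        # x: (B, C, D, H, W); attend along `axis` (2=D, 3=H, 4=W).
        b, c = x.shape[:2]
        x = x.movedim(axis, -1)                 # bring target axis last
        lead = x.shape[2:-1]                    # the two non-attended axes
        seq = x.reshape(b, c, -1, x.shape[-1])  # (B, C, rest, L)
        seq = seq.permute(0, 2, 3, 1).reshape(-1, x.shape[-1], c)  # (B*rest, L, C)
        out, _ = self.attn(seq, seq, seq)
        out = out.reshape(b, -1, x.shape[-1], c).permute(0, 3, 1, 2)
        out = out.reshape(b, c, *lead, x.shape[-1]).movedim(-1, axis)
        return out

feat = torch.randn(1, 32, 8, 16, 16)
attn = AxialAttention3D(32)       # one shared module across axes, for brevity
y = feat
for ax in (2, 3, 4):              # D, H, then W: long-range context in all axes
    y = y + attn(y, ax)           # residual connection, as is conventional
print(y.shape)
```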
Citations: 0
WS-SfMLearner: self-supervised monocular depth and ego-motion estimation on surgical videos with unknown camera parameters.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-04-30 DOI: 10.1117/1.JMI.12.2.025003
Ange Lou, Jack Noble
Purpose: Accurate depth estimation in surgical videos is a pivotal component of numerous image-guided surgery procedures. However, creating ground truth depth maps for surgical videos is often infeasible due to challenges such as inconsistent illumination and sensor noise. As a result, self-supervised depth and ego-motion estimation frameworks are gaining traction, eliminating the need for manually annotated depth maps. Despite the progress, current self-supervised methods still rely on known camera intrinsic parameters, which are frequently unavailable or unrecorded in surgical environments. We address this gap by introducing a self-supervised system capable of jointly predicting depth maps, camera poses, and intrinsic parameters, providing a comprehensive solution for depth estimation under such constraints.

Approach: We developed a self-supervised depth and ego-motion estimation framework, incorporating a cost volume-based auxiliary supervision module. This module provides additional supervision for predicting camera intrinsic parameters, allowing for robust estimation even without predefined intrinsics. The system was rigorously evaluated on a public dataset to assess its effectiveness in simultaneously predicting depth, camera pose, and intrinsic parameters.

Results: The experimental results demonstrated that the proposed method significantly improved the accuracy of ego-motion and depth prediction, even when compared with methods incorporating known camera intrinsics. In addition, by integrating our cost volume-based supervision, the accuracy of camera parameter estimation, including intrinsic parameters, was further enhanced.

Conclusions: We present a self-supervised system for depth, ego-motion, and intrinsic parameter estimation, effectively overcoming the limitations imposed by unknown or missing camera intrinsics. The experimental results confirm that the proposed method outperforms the baseline techniques, offering a robust solution for depth estimation in complex surgical video scenarios, with broader implications for improving image-guided surgery systems.

Journal of Medical Imaging 12(2): 025003. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041500/pdf/
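The training signal in such systems is photometric: a source frame is warped into the target view using the predicted depth, pose, and intrinsics, and the reconstruction error trains all three prediction heads, so no dataset intrinsics are needed. Below is a hedged sketch of that warping loss; names, shapes, and conventions are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a photometric warping loss with a *predicted* intrinsic matrix K.
# Because K is a network output rather than a dataset constant, the same loss
# can supervise depth, pose, and intrinsics jointly.
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    # target/source: (1,3,H,W); depth: (1,1,H,W); pose: (4,4); K: (3,3)
    _, _, h, w = target.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().reshape(3, -1)
    cam = K.inverse() @ pix * depth.reshape(1, -1)        # back-project
    cam_h = torch.cat([cam, torch.ones(1, cam.shape[1])], 0)
    proj = K @ (pose @ cam_h)[:3]                         # into the source view
    uv = proj[:2] / proj[2].clamp(min=1e-6)
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([uv[0] / (w - 1) * 2 - 1, uv[1] / (h - 1) * 2 - 1], -1)
    warped = F.grid_sample(source, grid.reshape(1, h, w, 2), align_corners=True)
    return (warped - target).abs().mean()

# Toy check: identity pose and unit depth should reproduce the frame (loss ~0).
t = torch.rand(1, 3, 24, 32)
K = torch.tensor([[30.0, 0, 16], [0, 30.0, 12], [0, 0, 1]])
print(photometric_loss(t, t, torch.ones(1, 1, 24, 32), torch.eye(4), K).item())
```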
Citations: 0
New Growth, New Opportunities.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-04-26 DOI: 10.1117/1.JMI.12.2.020101
Bennett A Landman
JMI Editor-in-Chief Bennett Landman discusses special issues and offers a few thoughts on the use of AI-assisted writing.

Journal of Medical Imaging 12(2): 020101. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12032759/pdf/
Citations: 0
Multi-contrast computed tomography atlas of healthy pancreas with dense displacement sampling registration.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-04-17 DOI: 10.1117/1.JMI.12.2.024006
Yinchi Zhou, Ho Hin Lee, Yucheng Tang, Xin Yu, Qi Yang, Michael E Kim, Lucas W Remedios, Shunxing Bao, Jeffrey M Spraggins, Yuankai Huo, Bennett A Landman
Purpose: Diverse population demographics can lead to substantial variation in the human anatomy. Therefore, standard anatomical atlases are needed for interpreting organ-specific analyses. Among abdominal organs, the pancreas exhibits notable variability in volumetric morphology, shape, and appearance, complicating the generalization of population-wide features. Understanding the common features of a healthy pancreas is crucial for identifying biomarkers and diagnosing pancreatic diseases.

Approach: We propose a high-resolution CT atlas framework optimized for the healthy pancreas. We introduce a deep-learning-based preprocessing technique to extract abdominal ROIs and leverage a hierarchical registration pipeline to align pancreatic anatomy across populations. Briefly, DEEDS affine and non-rigid registration techniques are employed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas, multi-phase contrast CT scans of 443 subjects (aged 15 to 50 years, with no reported history of pancreatic disease) were processed.

Results: The two-stage DEEDS affine and non-rigid registration outperforms other state-of-the-art tools, achieving the highest scores for pancreas label transfer across all phases (non-contrast: 0.497, arterial: 0.505, portal venous: 0.494, delayed: 0.497). External evaluation with 100 portal venous scans and 13 labeled abdominal organs shows a mean Dice score of 0.504. The low variance between the pancreases of registered subjects and the obtained pancreas atlas further illustrates the generalizability of the proposed method.

Conclusion: We introduce a high-resolution pancreas atlas framework to generalize healthy biomarkers across populations with multi-contrast abdominal CT. The atlases and the associated pancreas organ labels are publicly available through the Human Biomolecular Atlas Program (HuBMAP).

Journal of Medical Imaging 12(2): 024006. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12005954/pdf/
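The label-transfer evaluation reduces to a Dice overlap between a warped subject label and the atlas label. A NumPy sketch with toy placeholder masks (the arrays below are synthetic stand-ins, not HuBMAP data):

```python
# Sketch of the label-transfer score: after registration warps a subject's
# pancreas label onto the atlas template, overlap with the atlas label is
# measured with the Dice coefficient.
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
warped_label = rng.random((64, 64, 64)) > 0.5   # toy warped subject label
atlas_label = rng.random((64, 64, 64)) > 0.5    # toy atlas pancreas label
print(f"pancreas label-transfer Dice: {dice(warped_label, atlas_label):.3f}")
```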
Citations: 0
Ensemble of fine-tuned machine learning models for hysterectomy prediction in pregnant women using magnetic resonance images.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-03-18 DOI: 10.1117/1.JMI.12.2.024502
Vishnu Vardhan Reddy Kanamata Reddy, Michael Villordon, Quyen N Do, Yin Xi, Matthew A Lewis, Christina L Herrera, David Owen, Catherine Y Spong, Diane M Twickler, Baowei Fei
Purpose: Identifying pregnant patients at high risk of hysterectomy before giving birth informs clinical management and improves outcomes. We aim to develop machine learning models to predict hysterectomy in pregnant women with placenta accreta spectrum (PAS).

Approach: We developed five machine learning models using information from magnetic resonance images and combined them with topographic maps and radiomic features to predict hysterectomy. The models were trained, optimized, and evaluated on data from 241 patients, in groups of 157, 24, and 60 for training, validation, and testing, respectively.

Results: We assessed the models individually as well as using an ensemble approach. When these models are combined, the ensembled model produced the best performance and achieved an area under the curve of 0.90, a sensitivity of 90.0%, and a specificity of 90.0% for predicting hysterectomy.

Conclusions: Various machine learning models were developed to predict hysterectomy in pregnant women with PAS, which may have potential clinical applications to help improve patient management.

Journal of Medical Imaging 12(2): 024502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11915718/pdf/
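The ensembling step can be illustrated with soft voting: average the probability outputs of several fitted classifiers, then score the averaged probabilities. The scikit-learn sketch below uses synthetic data sized like the paper's 241-subject cohort and three illustrative model types, which are assumptions rather than the authors' five models:

```python
# Sketch of a soft-voting ensemble scored by AUC, as the abstract reports for
# the combined model. Data and model choices are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=241, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=60, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(random_state=0),
    SVC(probability=True, random_state=0),
]
probs = [m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models]
ensemble = np.mean(probs, axis=0)   # average probabilities across models
print("ensemble AUC:", roc_auc_score(y_te, ensemble))
```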
Citations: 0
Comparative analysis of nnU-Net and Auto3Dseg for fat and fibroglandular tissue segmentation in MRI.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-04-16 DOI: 10.1117/1.JMI.12.2.024005
Yasna Forghani, Rafaela Timóteo, Tiago Marques, Nuno Loução, Maria João Cardoso, Fátima Cardoso, Mario Figueiredo, Pedro Gouveia, João Santinha
Purpose: Breast cancer, the most common cancer type among women worldwide, requires early detection and accurate diagnosis for improved treatment outcomes. Segmenting fat and fibroglandular tissue (FGT) in magnetic resonance imaging (MRI) is essential for creating volumetric models, enhancing surgical workflow, and improving clinical outcomes. Manual segmentation is time-consuming and subjective, prompting the development of automated deep-learning algorithms to perform this task. However, configuring these algorithms for 3D medical images is challenging due to variations in image features and preprocessing distortions. Automated machine learning (AutoML) frameworks automate model selection, hyperparameter tuning, and architecture optimization, offering a promising solution by reducing reliance on manual intervention and expert knowledge.

Approach: We compare nnU-Net and Auto3Dseg, two AutoML frameworks, in segmenting fat and FGT on T1-weighted MRI images from the Duke breast MRI dataset (100 patients). We used threefold cross-validation, employing the Dice similarity coefficient (DSC) and Hausdorff distance (HD) metrics for evaluation. The F-test and Tukey honestly significant difference analysis were used to assess statistical differences across methods.

Results: nnU-Net achieved DSC scores of 0.946 ± 0.026 (fat) and 0.872 ± 0.070 (FGT), whereas Auto3DSeg achieved 0.940 ± 0.026 (fat) and 0.871 ± 0.074 (FGT). Significant differences in fat HD (F = 6.3020, p < 0.001) originated from the full-resolution and the 3D cascade U-Net. No evidence of significant differences was found in FGT HD or DSC metrics.

Conclusions: Ensemble approaches of Auto3Dseg and nnU-Net demonstrated comparable performance in segmenting fat and FGT on breast MRI. The significant differences in fat HD underscore the importance of boundary-focused metrics in evaluating segmentation methods.

Journal of Medical Imaging 12(2): 024005. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12003052/pdf/
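The statistical analysis described above, an F-test followed by Tukey HSD on per-case metric values, can be sketched with SciPy and statsmodels. The group names and synthetic HD values below are placeholders, not the study's data:

```python
# Sketch of the method comparison: one-way F-test across configurations, then
# Tukey HSD on per-case Hausdorff distances. Group names are assumed labels.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
hd = {  # toy per-case fat HD values for three assumed configurations
    "nnUNet_fullres": rng.normal(5.0, 1.0, 100),
    "nnUNet_cascade": rng.normal(5.4, 1.0, 100),
    "Auto3DSeg": rng.normal(4.8, 1.0, 100),
}
print(f_oneway(*hd.values()))          # F-test across all methods
scores = np.concatenate(list(hd.values()))
labels = np.repeat(list(hd.keys()), [len(v) for v in hd.values()])
print(pairwise_tukeyhsd(scores, labels))  # which pairs actually differ
```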
Citations: 0
Evaluation of state-of-the-art deep learning models in the segmentation of the left and right ventricles in parasternal short-axis echocardiograms.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-03-26 DOI: 10.1117/1.JMI.12.2.024002
Julian R Cuellar, Vu Dinh, Manjula Burri, Julie Roelandts, James Wendling, Jon D Klingensmith
Purpose: Previous studies on echocardiogram segmentation are focused on the left ventricle in parasternal long-axis views. Deep-learning models were evaluated on the segmentation of the ventricles in parasternal short-axis echocardiograms (PSAX-echo). Segmentation of the ventricles in complementary echocardiogram views will allow the computation of important metrics with the potential to aid in diagnosing cardio-pulmonary diseases and other cardiomyopathies. Evaluating state-of-the-art models with small datasets can reveal if they improve performance on limited data.

Approach: PSAX-echo was performed on 33 volunteer women. An experienced cardiologist identified end-diastole and end-systole frames from 387 scans, and expert observers manually traced the contours of the cardiac structures. Traced frames were pre-processed and used to create labels to train two domain-specific (Unet-ResNet101 and Unet-ResNet50) and four general-domain (three segment anything (SAM) variants and the Detectron2) deep-learning models. The performance of the models was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and difference in cross-sectional area (DCSA).

Results: The Unet-ResNet101 model provided superior performance in the segmentation of the ventricles with 0.83, 4.93 pixels, and 106 pixel² on average for DSC, HD, and DCSA, respectively. A fine-tuned MedSAM model provided a performance of 0.82, 6.66 pixels, and 1252 pixel², whereas the Detectron2 model provided 0.78, 2.12 pixels, and 116 pixel² for the same metrics, respectively.

Conclusions: Deep-learning models are suitable for the segmentation of the left and right ventricles in PSAX-echo. We demonstrated that domain-specific trained models such as Unet-ResNet provide higher accuracy for echo segmentation than general-domain segmentation models when working with small and locally acquired datasets.

Journal of Medical Imaging 12(2): 024002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11943840/pdf/
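All three reported metrics are computable from binary masks. A NumPy/SciPy sketch follows, with DCSA taken as the absolute difference in segmented area in pixel² (an assumed reading of "difference in cross-sectional area", not a quote of the authors' definition):

```python
# Sketch of the three evaluation metrics on binary 2D echo masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def metrics(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())          # Dice similarity
    pp, pg = np.argwhere(pred), np.argwhere(gt)
    hd = max(directed_hausdorff(pp, pg)[0],              # symmetric Hausdorff
             directed_hausdorff(pg, pp)[0])
    dcsa = abs(int(pred.sum()) - int(gt.sum()))          # area diff, pixel^2
    return dsc, hd, dcsa

rng = np.random.default_rng(0)
pred = rng.random((128, 128)) > 0.6   # toy mask standing in for model output
gt = rng.random((128, 128)) > 0.6     # toy mask standing in for the tracing
print(metrics(pred, gt))
```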
Citations: 0
Identifying texture features from structural magnetic resonance imaging scans associated with Tourette's syndrome using machine learning.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-02-26 DOI: 10.1117/1.JMI.12.2.026001
Murilo Costa de Barros, Kauê Tartarotti Nepomuceno Duarte, Chia-Jui Hsu, Wang-Tso Lee, Marco Antonio Garcia de Carvalho
Purpose: Tourette syndrome (TS) is a neurodevelopmental disorder characterized by neurophysiological and neuroanatomical changes, primarily affecting individuals aged 2 to 18. Involuntary motor and vocal tics are common features of this syndrome. Currently, there is no curative therapy for TS, only psychological treatments or medications that temporarily manage the tics. The absence of a definitive diagnostic tool complicates the differentiation of TS from other neurological and psychological conditions.

Approach: We aim to enhance the diagnosis of TS through the classification of structural magnetic resonance scans. Our methodology comprises four sequential steps: (1) image acquisition, with pediatric magnetic resonance imaging (MRI) data collected at National Taiwan University; (2) pre-processing, involving anatomical structure segmentation using FreeSurfer software; (3) feature extraction, where texture features are computed from the volumetric images; and (4) image classification, employing support vector machine and naive Bayes classifiers to determine the presence of TS.

Results: The analysis indicated significant changes in regions of the limbic system, such as the thalamus and amygdala, and in regions outside the limbic system, such as the medial orbitofrontal cortex and insula, which are strongly associated with TS.

Conclusions: Our findings suggest that texture features derived from sMRI scans can aid in the diagnosis of TS by highlighting critical brain regions involved in the disorder. The proposed method holds promise for improving diagnostic accuracy and understanding the neuroanatomical underpinnings of TS.

Journal of Medical Imaging 12(2): 026001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11866941/pdf/
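Texture features of this kind are commonly derived from gray-level co-occurrence matrices (GLCMs). The sketch below extracts four standard GLCM descriptors from a 2D region and fits a support vector machine; the descriptor set and toy data are assumptions, since the abstract does not enumerate the exact features:

```python
# Sketch of GLCM texture features + SVM classification. Uses scikit-image
# >= 0.19 (graycomatrix/graycoprops). Toy data stands in for segmented sMRI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(region):
    # region: 2D uint8 slice of a segmented structure (e.g., thalamus)
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
X = np.stack([texture_features(rng.integers(0, 256, (32, 32), dtype=np.uint8))
              for _ in range(20)])
y = rng.integers(0, 2, 20)            # toy TS-vs-control labels
clf = SVC().fit(X, y)
print(clf.score(X, y))
```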
Citations: 0
Acoustic reflector-enabled forward-viewing ultrasound image-guided access.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-03-01 Epub Date: 2025-04-09 DOI: 10.1117/1.JMI.12.2.025002
Yichuan Tang, Ashiqur Rahaman, Araceli B Gonzalez, Issac Abouaf, Aditya Malik, Igor Sorokin, Haichong Zhang
Purpose: Existing ultrasound (US) image-guided needle access methods applied in various surgical procedures (such as percutaneous nephrolithotomy) face the challenge of keeping the needle tip visible during the insertion process due to the unguaranteed alignment between the US image and needle. We propose a needle insertion mechanism with reflector-integrated US imaging, where the US image plane and the needle are mechanically aligned, and the needle is inserted in a forward-viewing style to provide more intuitive access.

Approach: An acoustic reflector is used to redirect the US image plane while the needle goes through a slit in the middle of the acoustic reflector, so that the needle path aligns with the US image plane. Both the bracket holding the needle and the acoustic reflector are rotatable to provide clinicians with the flexibility to search for the optimal needle insertion orientation. Effects of the slit in the reflector on the quality of post-reflection ultrasound images were evaluated. Needle tip visibility was evaluated in water and ex vivo beef tissue. Needle access accuracy was evaluated using point targets embedded in gelatin, and errors between the needle tip and point targets were estimated from X-ray images.

Results: The slit in the reflector has limited effects on post-reflection image quality. The needle tip was visible in water and in ex vivo tissue, and its visibility was quantified using a signal-to-noise ratio. Needle access results showed an average insertion error of less than 3 mm.

Conclusions: Our results demonstrate the clinical potential of the reflector-enabled forward-viewing US image-guided access mechanism.

Journal of Medical Imaging 12(2): 025002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11981581/pdf/
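The needle-tip visibility measure mentioned in the results, a signal-to-noise ratio, can be sketched as mean tip-ROI intensity over background standard deviation on a B-mode image. The ROI choices and speckle model below are assumptions for illustration:

```python
# Sketch of a needle-tip SNR measurement on a B-mode image. The tip and
# background ROIs are hypothetical; real ROIs would be placed manually or
# around a detected tip.
import numpy as np

def tip_snr_db(image, tip_roi, bg_roi):
    """image: 2D B-mode array; ROIs: (row_slice, col_slice) tuples."""
    signal = image[tip_roi].mean()
    noise = image[bg_roi].std()
    return 20 * np.log10(signal / noise)

rng = np.random.default_rng(0)
bmode = rng.rayleigh(10, (256, 256))   # Rayleigh speckle-like background
bmode[120:130, 120:130] += 80          # bright region standing in for the tip
print(tip_snr_db(bmode, (slice(120, 130), slice(120, 130)),
                 (slice(0, 50), slice(0, 50))))
```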
Citations: 0