International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Real-time ultrasound AR 3D visualization toward better topological structure perception for hepatobiliary surgery.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-03-01. Epub Date: 2024-10-14. DOI: 10.1007/s11548-024-03273-1
Yuqi Ji, Tianqi Huang, Yutong Wu, Ruiyang Li, Pengfei Wang, Jiahong Dong, Hongen Liao

Purpose: Ultrasound serves as a crucial intraoperative imaging tool for hepatobiliary surgeons, enabling the identification of complex anatomical structures such as blood vessels, bile ducts, and lesions. However, relying on manual mental reconstruction of 3D topologies from 2D ultrasound images presents significant challenges, creating a pressing need for tools that assist surgeons with real-time identification of 3D topological anatomy.

Methods: We propose a real-time ultrasound AR 3D visualization method for intraoperative 2D ultrasound imaging. Our system leverages backward alpha blending to integrate multi-planar ultrasound data effectively. To ensure continuity between 2D ultrasound planes, we employ spatial smoothing techniques to interpolate the widely spaced planes. A dynamic 3D transfer function is also developed to enhance spatial representation through color differentiation.

Results: Comparative experiments involving our AR visualization of 3D ultrasound, AR visualization of 2D ultrasound, and 2D visualization of 3D ultrasound demonstrated that the proposed method significantly reduced operation time (110.25 ± 27.83 s compared to 292 ± 146.63 s and 365.25 ± 131.62 s) and improved depth perception and comprehension of complex topologies, contributing to reduced pressure and increased satisfaction among users.

Conclusion: Quantitative experimental results and feedback from both novice and experienced physicians highlight our system's ability to enhance the understanding of complex topological anatomy. This improvement is crucial for accurate ultrasound diagnosis and informed surgical decision-making, underscoring the system's clinical applicability.

Pages: 613-623. Citations: 0
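The abstract does not spell out the "backward alpha blending" used to fuse the multi-planar ultrasound data; a minimal sketch of standard back-to-front alpha compositing over a stack of planes is below (function and argument names are hypothetical, not the authors' API):

```python
import numpy as np

def composite_back_to_front(planes, alphas):
    """Blend a stack of grayscale planes from farthest to nearest.

    planes: list of 2D float arrays (intensities in [0, 1]), ordered
            nearest-first; alphas: matching per-plane opacities in [0, 1].
    Back-to-front rule: C_out = C_plane * a + C_in * (1 - a).
    """
    out = np.zeros_like(planes[0])
    for plane, a in zip(reversed(planes), reversed(alphas)):
        out = plane * a + out * (1.0 - a)
    return out
```

Iterating far-to-near means each nearer plane correctly occludes what lies behind it in proportion to its opacity.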
A framework for three-dimensional statistical shape modeling of the proximal femur in Legg-Calvé-Perthes disease.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-03-01. Epub Date: 2024-10-08. DOI: 10.1007/s11548-024-03272-2
Luke G Johnson, Joseph D Mozingo, Penny R Atkins, Seaton Schwab, Alan Morris, Shireen Y Elhabian, David R Wilson, Harry K W Kim, Andrew E Anderson

Purpose: The pathomorphology of Legg-Calvé-Perthes disease (LCPD) is a key contributor to poor long-term outcomes such as hip pain, femoroacetabular impingement, and early-onset osteoarthritis. Plain radiographs, commonly used in research and in the clinic, cannot accurately represent the full extent of LCPD deformity. The purpose of this study was to develop and evaluate a methodological framework for three-dimensional (3D) statistical shape modeling (SSM) of the proximal femur in LCPD.

Methods: We developed a framework consisting of three core steps: segmentation, surface mesh preparation, and particle-based correspondence. The framework addresses the challenges of modeling this rare condition, which is characterized by highly heterogeneous deformities across a wide age range and small sample sizes. We evaluated the framework by producing an SSM from clinical magnetic resonance images of 13 proximal femurs with LCPD deformity from 11 patients between the ages of six and 12 years.

Results: After removing differences in scale and pose, the dominant shape modes described morphological features characteristic of LCPD, including a broad and flat femoral head, a high-riding greater trochanter, and a reduced neck-shaft angle. The first four shape modes, together describing 87.5% of the overall cohort variance, were chosen for evaluating the model's performance. The SSM generalized to unfamiliar examples with an average point-to-point reconstruction error below 1 mm. We observed strong Spearman rank correlations (up to 0.79) between some shape modes, 3D measurements of femoral head asphericity, and clinical radiographic metrics.

Conclusion: We present a framework, based on SSM, for the objective description of LCPD deformity in three dimensions. Our methods accurately describe overall shape variation using a small number of parameters and are a step toward a widely accepted, objective 3D quantification of LCPD deformity.

Pages: 569-578. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11930624/pdf/. Citations: 0
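Once particle-based correspondence yields matched point sets and scale/pose differences are removed, "shape modes" in an SSM of this kind are conventionally the principal components of the correspondence coordinates. A minimal PCA sketch of that step (not the authors' specific pipeline):

```python
import numpy as np

def shape_modes(shapes):
    """PCA over corresponding points.

    shapes: (n_subjects, n_points * 3) flattened correspondence
    coordinates, already aligned (scale and pose removed).
    Returns the mean shape, the mode vectors (rows of Vt), and the
    fraction of cohort variance each mode explains.
    """
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centered data matrix gives the PCA modes directly
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S**2 / (len(shapes) - 1)
    return mean, Vt, var / var.sum()
```

Summing the leading entries of the returned variance ratios is how a statement like "the first four modes describe 87.5% of cohort variance" is computed.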
Central command suite: futureproofing next-generation surgical environments to embrace the digital operating room.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-03-01. Epub Date: 2024-12-21. DOI: 10.1007/s11548-024-03278-w
Carlos L Amato, Chengyuan Yang, Niloofar Badihi, Lukas Bernhard, Osman Ratib, Barbara Seeliger, Dirk Wilhelm

(No abstract available.)

Pages: 507-512. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929672/pdf/. Citations: 0
Validity of a virtual reality-based straight coloanal anastomosis simulator.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-03-01. Epub Date: 2024-11-21. DOI: 10.1007/s11548-024-03291-z
George Westergaard, Alexis Desir, Jacob Barker, Tansel Halic, Shruti Hegde, Amr Al Abbas, Javier Salgado Pogacnik, James W Fleshman, Ganesh Sankaranarayanan, Suvranu De, Doga Demirel

Purpose: Current training methods for surgical trainees are inadequate: they are costly, low-fidelity, or have a low skill ceiling. This work aims to expand available virtual reality training options by developing a VR trainer for straight coloanal anastomosis (SCA), one of the Colorectal Objective Structured Assessment of Technical Skills (COSATS) tasks.

Methods: We developed a VR-based SCA simulator that evaluates trainees based on their performance. To increase immersion, we used haptics alongside the VR headset as the primary method of interaction with the simulation. We also implemented objective performance metrics to evaluate trainee performance throughout the simulation.

Results: We presented our performance metrics to 27 participants in an expert consensus survey (5-point Likert scale) and derived weights for the metrics. The weighted average scores for the 24 task-specific metrics ranged from 3.5 to 5; for the general metrics, the scores spanned 3.3 to 4.6. In the second phase, we conducted a study with 16 participants (novice n = 9, expert n = 7). Experts outperformed novices by 8.56% on the total score (p = 0.0041). Three of the measurable metrics, purse suture (p = 0.0797), retracting the anvil (p = 0.0738), and inserting the colonoscope (p = 0.0738), showed a significant difference between experts and novices. Experts were smoother in their hand motions by 3.67% per second and took 70.77% longer paths to complete the same tasks.

Conclusion: We created a high-fidelity coloanal anastomosis VR simulator. The simulator runs in real time while allowing high immersion with a VR headset, deformable bodies, and a haptic device, and it provides objective feedback through performance metrics. Experts obtained higher scores throughout the simulation, including on the quiz demonstrating procedural knowledge, the metrics demonstrating experience in the procedural steps, and control of their basic surgical skills and hand movements.

Pages: 545-560. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11930611/pdf/. Citations: 0
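The abstract does not say how the Likert-derived weights enter the total score; one plausible reading, sketched here purely as an assumption, is a normalized weighted sum of per-metric performance:

```python
def weighted_score(metric_scores, likert_weights):
    """Combine per-metric performance values (each in [0, 1]) into one
    total score using expert-consensus Likert weights (e.g. the 3.3-5.0
    survey averages). This weighting scheme is an illustrative guess,
    not the simulator's documented formula."""
    total_w = sum(likert_weights)
    return sum(s * w for s, w in zip(metric_scores, likert_weights)) / total_w
```

Metrics the experts rated as more important (higher Likert average) would then pull the total score proportionally harder.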
Attention-guided erasing for enhanced transfer learning in breast abnormality classification.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-03-01. Epub Date: 2025-01-15. DOI: 10.1007/s11548-024-03317-6
Adarsh Bhandary Panambur, Sheethal Bhat, Hui Yu, Prathmesh Madhu, Siming Bayer, Andreas Maier

Purpose: Breast cancer remains one of the most prevalent cancers globally, necessitating effective early screening and diagnosis. This study investigates the effectiveness and generalizability of our recently proposed data augmentation technique, attention-guided erasing (AGE), across transfer learning classification tasks for breast abnormality classification in mammography.

Methods: AGE utilizes attention head visualizations from DINO self-supervised pretraining to weakly localize regions of interest (ROIs) in images. These localizations are then used to stochastically erase non-essential background information from training images during transfer learning. We evaluate AGE across two image-level and three patch-level classification tasks. The image-level tasks involve breast density categorization in digital mammography (DM) and malignancy classification in contrast-enhanced mammography (CEM). The patch-level tasks include classifying calcifications and masses in scanned film mammography (SFM), as well as malignancy classification of ROIs in CEM.

Results: AGE significantly boosts classification performance, with statistically significant improvements in mean F1-scores across four tasks compared to baselines. For image-level classification of breast density in DM and malignancy in CEM, we achieve gains of 2% and 1.5%, respectively; for patch-level classification of calcifications in SFM and of CEM ROIs, gains of 0.4% and 0.6%, respectively. Only marginal improvement is observed in the mass classification task, indicating the need for further optimization in tasks where critical features may be obscured by erasing.

Conclusion: Our findings underscore the potential of AGE, a dataset- and task-specific augmentation strategy powered by self-supervised learning, to enhance the downstream classification performance of deep learning models, particularly vision transformers (ViTs), in medical imaging.

Pages: 433-440. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929719/pdf/. Citations: 0
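Assuming the DINO attention map is already available as a 2D array aligned with the image, the stochastic background-erasing step could look like the following sketch (threshold, probability, and function name are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def attention_guided_erase(image, attention, threshold=0.5, p=0.5, rng=None):
    """With probability p, zero out pixels whose min-max-normalized
    attention falls below `threshold`, keeping the weakly localized ROI.

    image, attention: 2D float arrays of equal shape. The attention map
    is assumed to come from a pretrained ViT's attention heads.
    """
    rng = rng or np.random.default_rng()
    if rng.random() >= p:
        return image  # no augmentation on this draw
    att = (attention - attention.min()) / (np.ptp(attention) + 1e-8)
    mask = att >= threshold  # True = keep (likely ROI), False = erase
    return image * mask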
DenseSeg: joint learning for semantic segmentation and landmark detection using dense image-to-shape representation.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-03-01. Epub Date: 2025-01-23. DOI: 10.1007/s11548-024-03315-8
Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich

Purpose: Semantic segmentation and landmark detection are fundamental tasks of medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new state of the art for segmentation, it falls short in landmark detection, a strength of shape-based approaches.

Methods: In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation using a fully convolutional architecture. Because the representation encodes anatomical correspondences, arbitrary landmarks can be extracted from it. We benchmark our method against the state of the art for semantic segmentation (nnUNet), a shape-based approach employing geometric deep learning, and a convolutional neural network-based method for landmark detection.

Results: We evaluate our method on two medical datasets: a common benchmark featuring the lungs, heart, and clavicles in thorax X-rays, and another with 17 different bones in the paediatric wrist. While our method is on par with the landmark detection baseline in the thorax setting (error of 2.6 ± 0.9 mm vs. 2.7 ± 0.9 mm), it substantially surpasses it in the more complex wrist setting (1.1 ± 0.6 mm vs. 1.9 ± 0.5 mm).

Conclusion: We demonstrate that a dense geometric shape representation is beneficial for challenging landmark detection tasks and outperforms the previous state of the art based on heatmap regression. It does not require explicit training on the landmarks themselves, allowing new landmarks to be added without retraining.

Pages: 441-451. Citations: 0
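One way a dense image-to-shape representation permits extracting arbitrary landmarks without retraining: if the network predicts a per-pixel coordinate on a canonical anatomical surface, a landmark is simply the pixel whose predicted coordinate is nearest to the landmark's canonical position. The sketch below assumes such a 2D shape-coordinate map; it is an illustration of the principle, not the paper's implementation:

```python
import numpy as np

def extract_landmark(shape_map, landmark_uv):
    """shape_map: (H, W, 2) per-pixel predicted shape coordinates
    (e.g. normalized u-v positions on a canonical anatomical surface).

    Returns the (row, col) pixel whose predicted coordinate is closest
    to `landmark_uv` in shape space; adding a new landmark only needs
    a new query coordinate, not a retrained model.
    """
    d2 = ((shape_map - np.asarray(landmark_uv)) ** 2).sum(axis=-1)
    return np.unravel_index(np.argmin(d2), d2.shape)
```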
Enhanced self-supervised monocular depth estimation with self-attention and joint depth-pose loss for laparoscopic images.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-02-28. DOI: 10.1007/s11548-025-03332-1
Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

Purpose: Depth estimation is a powerful tool for navigation in laparoscopic surgery. Previous methods use predicted depth maps and the relative poses of the camera to accomplish self-supervised depth estimation. However, the smooth, textureless surfaces of organs and the laparoscope's complex rotations make depth and pose estimation difficult in laparoscopic scenes. We therefore propose a novel and effective self-supervised monocular depth estimation method with self-attention-guided pose estimation and a joint depth-pose loss function for laparoscopic images.

Methods: We extract feature maps and calculate the minimum re-projection error as a feature-metric loss, establishing constraints based on feature maps with more meaningful representations. We introduce a self-attention block in the pose estimation network to predict the rotations and translations of the relative poses. In addition, we minimize the difference between predicted relative poses as a pose loss. All of these losses are combined into a joint depth-pose loss.

Results: The proposed method is extensively evaluated on the SCARED and Hamlyn datasets. Quantitative results show improvements of about 18.07% and 14.00% in the absolute relative error when combining all of the proposed components for depth estimation on the SCARED and Hamlyn datasets, respectively. Qualitative results show that the proposed method produces smooth depth maps with low error in various laparoscopic scenes.

Conclusion: This study considers the characteristics of laparoscopic datasets and presents a simple yet effective self-supervised monocular depth estimation method. We propose a joint depth-pose loss function based on the extracted features, with pose estimation guided by a self-attention block. The experimental results show that all of the proposed components contribute to performance and that the method strikes an efficient balance between computational efficiency and accuracy.

Citations: 0
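The per-pixel minimum re-projection error over source views (the construction the feature-metric loss builds on, familiar from Monodepth2-style training) can be sketched as follows; the feature warping itself is assumed to have already happened:

```python
import numpy as np

def min_reprojection_loss(target_feat, warped_feats):
    """Per-pixel minimum L1 error between target features and features
    warped from each source view, then averaged over the image.

    target_feat: (C, H, W); warped_feats: list of (C, H, W) arrays.
    Taking the per-pixel minimum over views (rather than the mean)
    suppresses the penalty at pixels occluded in one of the views.
    """
    errs = [np.abs(target_feat - w).mean(axis=0) for w in warped_feats]  # each (H, W)
    return float(np.minimum.reduce(errs).mean())
```

With image intensities in place of learned features this is the standard photometric minimum re-projection loss; using feature maps is what makes it "feature-metric".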
SfMDiffusion: self-supervised monocular depth estimation in endoscopy based on diffusion models.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-02-24. DOI: 10.1007/s11548-025-03333-0
Yu Li, Da Chang, Die Luo, Jin Huang, Lan Dong, Du Wang, Liye Mei, Cheng Lei

Purpose: In laparoscopic surgery, accurate 3D reconstruction from endoscopic video is crucial for effective image-guided techniques. Current methods for monocular depth estimation (MDE) face challenges in complex surgical scenes, including limited training data, specular reflections, and varying illumination conditions.

Methods: We propose SfMDiffusion, a novel diffusion-based self-supervised framework for MDE. Our approach combines (1) a denoising diffusion process guided by pseudo-ground-truth depth maps, (2) knowledge distillation from a pre-trained teacher model, and (3) discriminative priors that enhance estimation robustness. The design enables accurate depth estimation without ground-truth depth data during training.

Results: Experiments on the SCARED and Hamlyn datasets demonstrate that SfMDiffusion achieves superior performance: an absolute relative error (Abs Rel) of 0.049, a squared relative error (Sq Rel) of 0.366, and a root mean square error (RMSE) of 4.305 on the SCARED dataset, and an Abs Rel of 0.067, Sq Rel of 0.800, and RMSE of 7.465 on the Hamlyn dataset.

Conclusion: SfMDiffusion provides an innovative approach to 3D reconstruction for image-guided surgical techniques. Future work will focus on computational optimization and validation across diverse surgical scenarios. Our code is available at https://github.com/Skylanding/SfM-Diffusion .

Citations: 0
Multi-dimensional consistency learning between 2D Swin U-Net and 3D U-Net for intestine segmentation from CT volume.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-02-22. DOI: 10.1007/s11548-024-03252-6
Qin An, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hiroo Uchida, Akinari Hinoki, Kojiro Suzuki, Aitaro Takimoto, Masahiro Oda, Kensaku Mori

Purpose: This paper introduces a novel two-step network based on semi-supervised learning for intestine segmentation from CT volumes. The intestine folds in the abdomen into complex spatial structures and contacts neighboring organs, which makes accurate segmentation and pixel-level labeling difficult. We propose a multi-dimensional consistency learning method to reduce the insufficient segmentation caused by these complex structures and the limited labeled dataset.

Methods: We designed a two-stage model to segment the intestine. In stage 1, a 2D Swin U-Net is trained on labeled data to generate pseudo-labels for unlabeled data. In stage 2, a 3D U-Net is trained on both labeled and unlabeled data to create the final segmentation model. The model comprises two networks operating in different dimensions, capturing more comprehensive representations of the intestine and potentially enhancing segmentation performance.

Results: We used 59 CT volumes to validate the effectiveness of our method. The experiment was repeated three times, and the average was taken as the final result. Compared to the baseline method, our method improved the Dice score by 3.25% and the recall rate by 6.84%.

Conclusion: The proposed method is based on semi-supervised learning and trains both a 2D Swin U-Net and a 3D U-Net. It mitigates the impact of limited labeled data and maintains consistency between the multi-dimensional outputs of the two networks to improve segmentation accuracy. Compared to previous methods, our method demonstrates superior segmentation performance.

Citations: 0
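The two-stage pseudo-labeling loop described in the Methods reduces to a small piece of control flow; the sketch below abstracts the networks behind hypothetical `train` and `predict` callables (stage 1 standing in for the 2D Swin U-Net, stage 2 for the 3D U-Net):

```python
def two_stage_semi_supervised(train, predict, labeled, unlabeled):
    """Stage 1: fit a model on labeled data and pseudo-label the
    unlabeled data. Stage 2: fit the final model on labeled plus
    pseudo-labeled data.

    Assumed interfaces: train(list_of_(x, y)_pairs) -> model,
    predict(model, x) -> y. These names are placeholders, not an API
    from the paper.
    """
    stage1 = train(labeled)
    pseudo = [(x, predict(stage1, x)) for x in unlabeled]
    stage2 = train(labeled + pseudo)
    return stage2
```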
TRUSWorthy: toward clinically applicable deep learning for confident detection of prostate cancer in micro-ultrasound.
IF 2.3 | CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-02-20. DOI: 10.1007/s11548-025-03335-y
Mohamed Harmanani, Paul F R Wilson, Minh Nguyen Nhat To, Mahdi Gilany, Amoon Jamzad, Fahimeh Fooladgar, Brian Wodlinger, Purang Abolmaesumi, Parvin Mousavi

Purpose: While deep learning methods have shown great promise in improving the effectiveness of prostate cancer (PCa) diagnosis by detecting suspicious lesions in trans-rectal ultrasound (TRUS), they must overcome multiple simultaneous challenges: high heterogeneity in tissue appearance, significant class imbalance in favor of benign examples, and scarcity in the number and quality of ground truth annotations available to train models. Failure to address even one of these problems can result in unacceptable clinical outcomes.

Methods: We propose TRUSWorthy, a carefully designed, tuned, and integrated system for reliable PCa detection. Our pipeline integrates self-supervised learning, multiple-instance learning aggregation using transformers, random-undersampled boosting, and ensembling, which address label scarcity, weak labels, class imbalance, and overconfidence, respectively. We train and rigorously evaluate our method using a large, multi-center dataset of micro-ultrasound data.

Results: Our method outperforms previous state-of-the-art deep learning methods in both accuracy and uncertainty calibration, with AUROC and balanced accuracy scores of 79.9% and 71.5%, respectively. On the top 20% of predictions with the highest confidence, it achieves a balanced accuracy of up to 91%.

Conclusion: The success of TRUSWorthy demonstrates the potential of integrated deep learning solutions to meet clinical needs in a highly challenging deployment setting, and it is a significant step toward a trustworthy system for computer-assisted PCa diagnosis.

Citations: 0
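The "top 20% of predictions with the highest confidence" analysis is a form of selective prediction; a minimal sketch of how such a confidence-thresholded balanced accuracy could be computed for binary probabilities (the confidence definition here, distance from 0.5, is an assumption, not necessarily the paper's):

```python
import numpy as np

def selective_balanced_accuracy(probs, labels, keep_frac=0.2):
    """Evaluate only the `keep_frac` most confident binary predictions.

    probs: predicted probabilities of the positive class; labels: 0/1.
    Confidence is taken as |p - 0.5|. Returns balanced accuracy
    (mean of per-class recalls) over the retained subset.
    """
    probs, labels = np.asarray(probs), np.asarray(labels)
    conf = np.abs(probs - 0.5)
    k = max(1, int(len(probs) * keep_frac))
    idx = np.argsort(-conf)[:k]  # indices of the k most confident cases
    pred, y = (probs[idx] >= 0.5).astype(int), labels[idx]
    recalls = [np.mean(pred[y == c] == c) for c in (0, 1) if np.any(y == c)]
    return float(np.mean(recalls))
```

A well-calibrated model should score markedly higher on this retained subset than overall, which is the pattern the 91% vs. 71.5% figures describe.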