International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Ultrasam: a foundation model for ultrasound using large open-access segmentation datasets.
IF 2.3 · CAS Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-11 DOI: 10.1007/s11548-025-03517-8
Adrien Meyer, Aditya Murali, Farahdiba Zarin, Didier Mutter, Nicolas Padoy
Purpose: Automated ultrasound (US) image analysis remains a longstanding challenge due to anatomical complexity and the scarcity of annotated data. Although large-scale pretraining has improved data efficiency in many visual domains, its impact in US is limited by a pronounced domain shift from other imaging modalities and high variability across clinical applications, such as chest, ovarian, and endoscopic imaging. To address this, we propose UltraSam, a SAM-style model trained on a heterogeneous collection of publicly available segmentation datasets, originally developed in isolation. UltraSam is trained under the prompt-conditioned segmentation paradigm, which eliminates the need for unified labels and enables generalization to a broad range of downstream tasks.

Methods: We compile US-43d, a large-scale collection of 43 open-access US datasets comprising over 282,000 images with segmentation masks covering 58 anatomical structures. We explore adaptation and fine-tuning strategies for SAM and systematically evaluate transferability across downstream tasks, comparing against state-of-the-art pretraining methods. We further propose prompted classification, a new use case where object-specific prompts and image features are jointly decoded to improve classification performance.

Results: In experiments on three diverse public US datasets, UltraSam outperforms existing SAM variants on prompt-based segmentation and surpasses self-supervised US foundation models on downstream (prompted) classification and instance segmentation tasks.

Conclusion: UltraSam demonstrates that SAM-style training on diverse, sparsely annotated US data enables effective generalization across tasks. By unlocking the value of fragmented public datasets, our approach lays the foundation for scalable, real-world US representation learning. We release our code and pretrained models at https://github.com/CAMMA-public/UltraSam and invite the community to further this effort by continuing to contribute high-quality datasets.
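The prompt-conditioned paradigm described above trains the model to segment whatever object a prompt indicates, with prompts derived from each dataset's own masks rather than from a shared label space. A minimal sketch of deriving point and box prompts from a binary mask (our illustration; `prompts_from_mask` is a hypothetical helper, not part of the UltraSam codebase):

```python
import numpy as np

def prompts_from_mask(mask: np.ndarray, rng=None):
    """Derive a point prompt and a box prompt from a binary mask.

    Hypothetical helper illustrating prompt-conditioned training data
    preparation; not taken from the UltraSam codebase.
    """
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None                            # empty mask: no prompt
    i = rng.integers(len(ys))
    point = (int(xs[i]), int(ys[i]))           # (x, y) inside the object
    box = (int(xs.min()), int(ys.min()),       # x0, y0
           int(xs.max()), int(ys.max()))       # x1, y1
    return {"point": point, "box": box}

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:6] = 1
p = prompts_from_mask(mask)
print(p["box"])  # → (3, 2, 5, 4)
```

Because the prompt identifies the target object at training time, datasets annotated with disjoint label sets can be pooled without relabeling.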
Citations: 0
AI-enhanced predictive modeling for treatment duration and personalized treatment planning of cleft lip and palate therapy.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-09 DOI: 10.1007/s11548-025-03515-w
Artur Aharonyan, Syed Anwar, HyeRan Choo
Background: Alveolar molding plate treatment (AMPT) plays a critical role in preparing neonates with cleft lip and palate (CLP) for the first reconstruction surgery (cleft lip repair). However, determining the number of AMPT adjustments needed to near-normalize the cleft deformity prior to surgery is a challenging task, often affecting the treatment duration. This study explores the use of machine learning in predicting treatment duration based on three-dimensional (3D) assessments of the pre-treatment maxillary cleft deformity as part of individualized treatment planning.

Methods: Digital 3D models of maxillary arches were collected from 35 infants with unilateral CLP. Key anatomical landmarks were labeled on the models, and the distances between these landmarks were calculated and fed into the model as features. A multi-layer perceptron (MLP) neural network was trained on these data and applied to predict the treatment duration. The model's performance was evaluated using regression metrics such as mean absolute error (MAE), Pearson's correlation, and the coefficient of determination (R²) to assess predictive accuracy.

Results: Performance metrics of our model revealed a correlation of 0.96, an R² of 0.91, and a mean absolute error of 3.03 days. The most significant features influencing the predictions were landmarks around the alveolar gap and distances delineating the overall alveolar gap width.

Conclusion: The results suggest that our model can reliably predict the treatment duration required for AMPT in neonates with unilateral CLP, with the potential to contribute to a fully personalized yet efficient AI-based treatment pipeline.
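The pipeline above maps inter-landmark distances to a duration via an MLP and is scored with MAE, Pearson's correlation, and R². A runnable sketch on synthetic stand-in data (the distances, coefficients, and network size below are invented for illustration; the study's 35-infant dataset is not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 35 infants x 6 inter-landmark distances (mm);
# duration loosely increases with alveolar gap width (feature 0).
rng = np.random.default_rng(42)
X = rng.uniform(2.0, 15.0, size=(35, 6))
y = 20 + 8 * X[:, 0] + rng.normal(0, 3, size=35)  # treatment days

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"MAE = {mean_absolute_error(y_te, pred):.2f} days, "
      f"R2 = {r2_score(y_te, pred):.2f}")
```

With only 35 samples, cross-validation and feature scaling would matter in practice; this sketch only shows the shape of the train/evaluate loop.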
Citations: 0
Visualization support for remote collaborative aneurysm treatment planning.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-08 DOI: 10.1007/s11548-025-03508-9
Rebecca Preßler, Monique Meuschke, Bernhard Preim, Kai Lawonn
Purpose: Cerebral aneurysms are blood-filled bulges that form at weak points in blood vessel walls, and their rupture can lead to life-threatening consequences. Given the high risk associated with these aneurysms, thorough examination and analysis are essential for determining appropriate treatment. While existing tools such as ANEULYSIS and its web-based counterpart WEBANEULYSIS provide interactive means for analyzing simulated aneurysm data, they lack support for collaborative analysis, which is crucial for enhancing interpretation and improving treatment decisions in medical team meetings.

Methods: To address this limitation, we introduce WEBCOANEULYSIS, a novel collaborative tool for aneurysm data analysis. WEBCOANEULYSIS builds upon the established visualization techniques of WEBANEULYSIS while incorporating innovative collaborative features to facilitate joint analysis and discussion among medical professionals. The tool was evaluated by three physicians and two visualization experts, who assessed its usability, functionality, and effectiveness in supporting collaborative decision-making.

Results: The evaluation results were overwhelmingly positive. The physicians particularly appreciated the tool's ability to provide a clear overview of aneurysm data while maintaining ease of use despite its complex functionality. Although minor suggestions for improvement were made, the overall feedback highlighted the benefits of WEBCOANEULYSIS in improving collaborative analysis and treatment planning.

Conclusion: WEBCOANEULYSIS enhances aneurysm data analysis by enabling real-time collaboration among medical professionals, thereby supporting more informed treatment decisions. Beyond its primary application in risk analysis and treatment planning, the tool also has potential benefits for patient education and the training of new doctors, making it a valuable addition to the field of medical visualization and decision support systems.
Citations: 0
Dynamic data balancing strategy-based Xception-dual-channel LSTM model for laparoscopic cholecystectomy phase recognition.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-07 DOI: 10.1007/s11548-025-03509-8
Mingzhou Liu, Feiya Duan, Lin Ling, Jing Hu, Maogen Ge, Xi Zhang, Shanbao Cai
Purpose: To enhance the temporal feature learning capability of laparoscopic cholecystectomy phase recognition models and to address the class imbalance in the training data, this paper proposes an Xception-dual-channel LSTM fusion model based on a dynamic data balancing strategy.

Methods: The model dynamically adjusts the undersampling rate for each surgical phase, extracting short video clips from the original data as training samples to balance the data distribution and mitigate biased learning. The Xception model, utilizing depthwise separable convolutions, extracts fundamental visual features frame by frame, which are then passed to a dual-channel LSTM network. This network is composed of a temporal mapping bidirectional LSTM structure and a sequence embedding LSTM structure working in parallel. The dual-channel LSTM network models the temporal dependencies between adjacent frames, capturing contextual temporal information to perceive the dynamic feature changes of the surgical phases. Finally, the surgical phase is determined by combining the prediction scores from both channels.

Results: Experimental evaluation on the public dataset Cholec80 demonstrates that the proposed model outperforms traditional single-channel LSTM models. Moreover, compared to the model without the dynamic data balancing strategy, the F1-scores for all surgical phases are improved.

Conclusion: The experimental results validate the effectiveness of this strategy in extracting temporal feature information, alleviating the data class imbalance, and enhancing the overall detection performance of the model.
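The dynamic balancing idea, undersampling each phase at a rate tied to its frequency so the sampled clips are roughly uniform across phases, can be sketched as follows (the helper name and phase counts are our assumptions for illustration, not the paper's exact algorithm):

```python
def dynamic_undersample_rates(frames_per_phase, target=None):
    """Per-phase keep-probabilities that flatten a skewed phase distribution.

    Illustrative sketch of dynamic data balancing: each phase is
    undersampled toward the size of the rarest phase (or an explicit
    target count); rates are capped at 1.0 (never oversample).
    """
    target = target or min(frames_per_phase.values())
    return {ph: min(1.0, target / n) for ph, n in frames_per_phase.items()}

# Invented frame counts in the spirit of Cholec80's phase imbalance.
counts = {"Preparation": 12000, "CalotTriangleDissection": 60000,
          "ClippingCutting": 9000, "GallbladderDissection": 45000}
rates = dynamic_undersample_rates(counts)
print(rates["ClippingCutting"])          # → 1.0
print(rates["CalotTriangleDissection"])  # → 0.15
```

Clips would then be drawn from each phase with its keep-probability, so rare phases contribute as many training samples as common ones.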
Citations: 0
Tactile force sensor based on a modified acoustic reflection principle for intraoperative tumor localization in minimally invasive surgery.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-06 DOI: 10.1007/s11548-025-03511-0
Hoang-Hiep Ly, Manh-Cuong Bui, Ba-Nghia Mai, Hai-Duong Phung, Huy-Anh Bui, Duc-Tung Ta, Thi-Thoa Mac, Xuan-Thuan Nguyen
Purpose: Localization of abdominal tissue, such as tumors, in minimally invasive surgery (MIS) is crucial but challenging due to the lack of tactile sensation. This study aims to develop a tactile force sensor that provides tactile sensation for surgeons, enabling accurate tumor localization while ensuring surgical safety.

Methods: This study proposes an acoustic reflection-based tactile force sensor, with preliminary theoretical analyses and fundamental experiments performed to assess its response to applied forces. Tumor detection using the proposed sensor is evaluated with artificial tissue samples. Furthermore, the sensor is integrated with a simulated robotic system and tested in ex vivo palpation experiments on a pig liver sample with an embedded tumor.

Results: The acoustic-based sensor has a simple structure and low fabrication cost, uses biocompatible materials, and is safe for the human body. The sensor accurately measured forces within a range of 0-5.5 N, achieving a high resolution of 2 mN and an RMSE of 149 mN. Graphical tactile force images effectively identified tumor position and relative size in artificial tissue tests. Moreover, the ex vivo tests on the pig liver demonstrated the robot-integrated sensor's success in detecting the embedded tumor.

Conclusions: The study introduces an acoustic-based tactile force sensor that supports surgeons in ensuring patient safety in MIS. The sensor proved practical to integrate and enables surgeons to detect a tumor's position and relative size intraoperatively.
Citations: 0
Real-time corneal image segmentation for cataract surgery based on detection framework.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-05 DOI: 10.1007/s11548-025-03506-x
Xueyi Shi, Dexun Zhang, Shenwen Liang, Wenjing Meng, Huoling Luo, Tianqiao Zhang
Objective: Cataract surgery is among the most frequently performed procedures worldwide. Accurate, real-time segmentation of the cornea and surgical instruments is vital for intraoperative guidance and surgical education. However, most existing deep learning-based segmentation methods depend on pixel-level annotations, which are time-consuming and limit practical deployment.

Methods: We present EllipseNet, an anchor-free framework utilizing ellipse-based modeling for real-time corneal segmentation in cataract surgery. Built upon the Hourglass network for feature extraction, EllipseNet requires only simple rectangular bounding box annotations from users. It then autonomously infers the major and minor axes of the corneal ellipse, generating elliptical bounding boxes that more precisely match corneal shapes.

Results: EllipseNet achieves efficient real-time performance, segmenting each image within 42 ms and attaining a Dice accuracy of 95.81%. It delivers segmentation speed nearly three times faster than state-of-the-art models while maintaining similar accuracy.

Conclusion: EllipseNet provides rapid and accurate corneal segmentation in real time, significantly reducing annotation workload for practitioners. Its design streamlines the segmentation pipeline, lowering the barrier for clinical application. The source code is publicly available at https://github.com/shixueyi/corneal-segmentation.
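Treating a predicted box as an ellipse amounts to taking the box center and half-extents as the ellipse center and semi-axes, and segmentation quality is then scored with the Dice coefficient. A minimal sketch of that geometry and metric (our illustration with invented numbers, not EllipseNet itself):

```python
import numpy as np

def ellipse_mask(h, w, cx, cy, a, b):
    """Binary mask of an axis-aligned ellipse (center cx,cy; semi-axes a,b)."""
    yy, xx = np.mgrid[0:h, 0:w]
    return ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0

def dice(m1, m2):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(m1, m2).sum()
    return 2 * inter / (m1.sum() + m2.sum())

# Illustrative only: an elliptical prediction vs. a ground-truth mask.
gt = ellipse_mask(128, 128, 64, 64, 40, 30)
pred = ellipse_mask(128, 128, 66, 63, 39, 31)  # slightly off prediction
print(f"Dice = {dice(gt, pred):.3f}")
```

An elliptical region fits the roughly elliptical corneal outline far more tightly than a rectangle, which is why box-level supervision can still yield pixel-level Dice scores.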
Citations: 0
Left Ventricular Motion Analysis Framework for the MATRIX-VT Study.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-05 DOI: 10.1007/s11548-025-03510-1
Christian Janorschke, Sorin S Popescu, Jonas Osburg, Xinyu Lu, Jingyang Xie, Engin Yaman, Christoph Marquetand, Oliver Blanck, Hannes Alessandrini, Achim Schweikard, Roland R Tilz
Purpose: Ultrasound (US) is commonly used to assess left ventricular motion when examining heart function. In stereotactic arrhythmia radioablation (STAR) therapy, managing cardiorespiratory motion during radiation delivery requires representing motion information in computed tomography (CT) coordinates. Similar to conventional US-guided navigation during surgical procedures, 3D US can provide real-time motion data of the radiation target that could be transferred to CT coordinates and then be accounted for by the radiation system. A motion analysis framework is presented that covers all components necessary to capture and analyse US motion data and transfer it to CT coordinates.

Methods: Utilizing a robotic test set-up with a human phantom, a baseline and ground-truth dataset is recorded for the development and implementation of the motion analysis framework. An optical tracking system and an additional spatial calibration phantom are used to determine the necessary transformations. Methods for frame matching, calibration, registration and evaluation are implemented.

Results: The hardware set-up involves only a few components that can easily be mounted and dismantled in a clinical context. The recorded phantom dataset meets all hardware-specific requirements, including a frame rate exceeding 20 Hz, an offset of 2-20 ms between CT trigger time and the closest US recording, and acceptable US image quality. The static phantom allows for quantitative evaluation by matching structures from different US frames in CT coordinates. While each individual step of the US-CT fusion process achieves the target accuracy of less than 5 mm error, the cumulative error over all transformations exceeded this limit for extreme probe positions.

Conclusion: The framework was developed and tested for the MATRIX-VT study and can be utilized for patient data evaluation as well as for transferring information, such as positional data of moving anatomical structures, between US and CT for predictive motion management in STAR therapy. Its modular design allows for the incorporation of advanced calibration and registration methods to address probe positioning limitations, thereby enhancing overall system performance for future applications.
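The cumulative-error finding reflects a general property of chained rigid transforms: each calibration step (e.g. US image to probe, probe to tracker, tracker to CT) contributes a small error, and the composition can exceed a per-step tolerance. A toy sketch with invented transforms and error magnitudes:

```python
import numpy as np

def rigid(rot_deg, t):
    """4x4 homogeneous transform: rotation about z by rot_deg, translation t (mm)."""
    th = np.radians(rot_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T[:3, 3] = t
    return T

# Ideal chain: US image -> probe -> tracker -> CT (toy values).
ideal = [rigid(10, (5, 0, 2)), rigid(-4, (0, 30, 0)), rigid(2, (100, 50, 20))]
# Each calibrated transform carries a small, independent error.
noisy = [rigid(10.3, (5.4, 0.2, 2.1)), rigid(-3.8, (0.3, 30.5, 0.2)),
         rigid(2.2, (100.6, 49.7, 20.3))]

p = np.array([20.0, 10.0, 0.0, 1.0])      # point in US image coords (mm)
chain = lambda Ts, x: Ts[2] @ Ts[1] @ Ts[0] @ x
err = np.linalg.norm(chain(ideal, p)[:3] - chain(noisy, p)[:3])
print(f"cumulative error = {err:.2f} mm")  # sub-mm per step, larger composed
```

Rotational errors in particular are amplified by the lever arm of downstream coordinates, which is why extreme probe positions pushed the composed error past the 5 mm target in the study.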
Citations: 0
A robust sampling technique for realistic distribution simulation in federated learning.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-02 DOI: 10.1007/s11548-025-03504-z
Robin Hoepp, Leonhard Rist, Alexander Katzmann, Raghavan Ashok, Andreas Wimmer, Michael Sühling, Andreas Maier
Purpose: Federated learning helps train deep learning networks with diverse data from different locations, particularly in restricted clinical settings. However, label distributions that overlap only partially across clients, due to different demographics, may significantly harm the global training and thus local model performance. Investigating such effects before rolling out large-scale federated learning setups requires proper sampling of the expected label distributions.

Methods: We present a sampling algorithm to build data subsets according to desired means and standard deviations from an initial global distribution. To this end, we incorporate the chi-squared and Gini impurity measures to numerically optimize label distributions for multiple groups in an efficient fashion.

Results: Using a real-world application scenario, we sample train and test groups according to region-specific distributions for 3D camera-based weight and height estimation in a clinical context, comparing a hard data split serving as a baseline with our proposed sampling technique. We train a baseline model on all data for comparison and use Federated Averaging to combine the training of our data subsets, demonstrating a realistic deterioration of 25.3% for weight and 28.7% for height estimation by the global model.

Conclusions: Realistically client-biased label distributions can notably harm training in a federated context. Our sampling algorithm for simulating realistic data distributions opens up an efficient way to analyze this effect in advance. The technique is agnostic to the chosen network architecture and target scenario and can be adapted to any feature or label problem with non-IID subpopulations.
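The two measures named above score how far a candidate subset's label histogram sits from a target distribution; an optimizer can then accept or reject samples based on these scores. A small sketch (the histograms are invented and the paper's optimizer is not reproduced here):

```python
import numpy as np

def gini_impurity(counts):
    """Gini impurity of a label histogram: 1 - sum(p_i^2)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

def chi2_distance(observed, expected):
    """Chi-squared statistic between an observed and a target label histogram."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    return np.sum((o - e) ** 2 / e)

# Score a candidate client subset against a desired regional distribution.
desired = [50, 30, 20]        # target label counts per class
candidate = [48, 33, 19]      # labels actually drawn for the client
print(f"gini = {gini_impurity(candidate):.3f}, "
      f"chi2 = {chi2_distance(candidate, desired):.3f}")
# → gini = 0.625, chi2 = 0.430
```

A lower chi-squared value means the candidate tracks the desired region-specific distribution more closely; Gini impurity additionally characterizes how mixed the subset's classes are.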
Citations: 0
Synthetic data generation with Worley-Perlin diffusion for robust subarachnoid hemorrhage detection in imbalanced CT datasets.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-02 DOI: 10.1007/s11548-025-03482-2
Zhongyang Lu, Tao Hu, Masahiro Oda, Yutaro Fuse, Ryuta Saito, Masahiro Jinzaki, Kensaku Mori
Purpose: In this paper, we propose a novel generative model to produce high-quality subarachnoid hemorrhage (SAH) samples, enhancing SAH CT detection performance on imbalanced datasets. Previous methods, such as cost-sensitive learning and earlier diffusion models, suffer from overfitting or noise-induced distortion, limiting their effectiveness. Accurate SAH sample generation is crucial for better detection.

Methods: We propose the Worley-Perlin Diffusion Model (WPDM), leveraging Worley-Perlin noise to synthesize diverse, high-quality SAH images. WPDM addresses the limitations of Gaussian noise (homogeneity) and Simplex noise (distortion), enhancing robustness for generating SAH images. Additionally, WPDM-Fast optimizes generation speed without compromising quality.

Results: WPDM effectively improved classification accuracy on datasets with varying imbalance ratios. Notably, a classifier trained with WPDM-generated samples achieved an F1-score of 0.857 at a 1:36 imbalance ratio, surpassing the state of the art by 2.3 percentage points.

Conclusion: WPDM overcomes the limitations of Gaussian and Simplex noise-based models, generating high-quality, realistic SAH images. It significantly enhances classification performance in imbalanced settings, providing a robust solution for SAH CT detection.
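Worley (cellular) noise, the structured component that WPDM combines with Perlin noise in place of purely Gaussian corruption, assigns each pixel its distance to the nearest of a set of random feature points. A compact 2D sketch of the cellular component alone (our illustration, not the paper's implementation):

```python
import numpy as np

def worley_noise(h, w, n_points=12, seed=0):
    """2D Worley (cellular) noise: value = distance to nearest feature point.

    Illustrative sketch of the cellular component of Worley-Perlin noise;
    not the WPDM implementation.
    """
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, 1, size=(n_points, 2)) * [h, w]   # feature points
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy, xx], axis=-1).astype(float)       # (h, w, 2)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None], axis=-1)
    noise = d.min(axis=2)                                  # nearest-point distance
    return (noise - noise.min()) / (noise.max() - noise.min())  # scale to [0, 1]

n = worley_noise(64, 64)
print(n.shape, round(float(n.min()), 2), round(float(n.max()), 2))
# → (64, 64) 0.0 1.0
```

The resulting blotchy, cell-like field is spatially correlated, unlike i.i.d. Gaussian noise, which is the property such diffusion variants exploit to better match lesion-like structures.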
Citations: 0
Bridging the gap between models and reality: development of a research environment for an object-oriented hospital information system to integrate artificial intelligence and robotics into clinical practice.
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-07-03 DOI: 10.1007/s11548-025-03470-6
Sidra Rashid, Lukas Bernhard, Sonja Stabenow, Emily Spicker, Charlotte Haid, Carl König, Hedi Louise Kramer, Sandro Pischinger, Daniel Schade, Johannes Fottner, Dirk Wilhelm, Maximilian Berlet
Introduction: Hospital information systems (HISs) are the main access point for medical professionals to computer-based patient administration. However, current HISs are primarily designed to function as office applications rather than as comprehensive management and support tools. Their inflexible architecture hinders the integration of modern technologies such as artificial intelligence (AI) models and medical robotics (MR). Therefore, we have conceptualized an object-oriented HIS (oHIS) as a pragmatic digital twin (PDT) of the entire patient care in a hospital and developed a functional research framework software for further investigations to bridge the gap between reality and models via oHIS.

Materials and methods: In an interdisciplinary team of engineers and physicians, we conducted a requirements assessment on the surgical wards of the TUM University Hospital in Munich. We then designed the research framework, named OMNI-SYS, and developed it into a functional research platform capable of bridging the gap between a model management system and real-world agents. Finally, we evaluated the framework by simulating a clinical use case.

Results: Our analysis revealed that future-proof HISs are an under-researched topic, and the integration of new technologies into clinical practice is not sufficiently prepared for. Our approach could address this shortcoming by allowing human agents, devices, models, and robots to interact in a PDT. Models can be integrated as quasi-natural objects and interact with representations of tangible objects in real time. This approach enables even the integration of new technologies that are still unimaginable today. Our oHIS research framework enabled a functional object representation in a simulated use case.

Conclusion: oHIS could significantly facilitate the integration of future technologies like AI models and MR. The OMNI-SYS framework could serve as a cornerstone for further research into this new approach. Studies on its clinical application and formalization are already planned in preparation for a possible future standard.

Pages: 1771-1783 (open access).
Citations: 0