Latest Articles from the International Journal of Computer Assisted Radiology and Surgery

Global region reidentification for camera relocalization in video-based surgical navigation.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-08 | DOI: 10.1007/s11548-026-03650-y
Roger D Soberanis-Mukul, Ryan Chou, Chin Hang Ryan Chan, Jan Emily Mangulabnan, Lalithkumar Seenivasan, Simon Bonaventura Ertlmaier, S Swaroop Vedula, Russell H Taylor, Masaru Ishii, Gregory Hager, Mathias Unberath

Purpose: Vision-based navigation systems rely on registered camera poses in CT space to guide surgeons. While it is possible to provide an approximate initialization, this registration becomes outdated as the endoscopic camera leaves and reenters the anatomy. Endoscopic camera relocalization is the process of determining the position of an endoscope relative to an anatomical reference after reinsertion. However, accurately reidentifying the global surgical scene and estimating camera pose have proven challenging due to the varying appearance of endoscopic sequences.

Methods: We present a training-free approach to accurately reidentify the region of interest (ROI) and estimate the camera position of a query image after reinsertion. This method utilizes previously observed images with known poses and a CT scan. By combining advanced foundation models with classical techniques, we globally reidentify a prior image of the ROI, which is then used for image-based feature matching and pose recovery via the Perspective-n-Point (PnP) algorithm.

Results: We conducted experiments on eight sequences from three cadaver studies. Our results show that our method accurately reidentifies when the endoscope reaches the ROI and identifies suitable image pairs for PnP-based pose estimation. It achieves an average translation error of 1.74 mm and a rotational error of 0.09 radians, making it suitable for reinitialization in image-based navigation without human intervention.

Conclusion: Our work presents a training-free approach for detecting when the endoscope reenters the ROI and estimating the camera's pose after reinsertion. The approach demonstrates promising results toward enabling pose reinitialization for vision-based surgical applications.

Citations: 0
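The final pose-recovery step described above is standard: once 2D features in the query image are matched to a reference image with known 3D (CT-space) correspondences, the camera pose follows from PnP. A minimal sketch with OpenCV's RANSAC-robust solver, assuming the 3D-2D matches are already available (the intrinsics and the synthetic points are placeholders, not values from the paper):

```python
import cv2
import numpy as np

# Hypothetical inputs: N matched 3D CT-space points and their 2D pixel
# locations in the query endoscopic frame (output of feature matching).
object_points = np.random.rand(12, 3).astype(np.float64)         # placeholder 3D points
image_points = np.random.rand(12, 2).astype(np.float64) * 500.0  # placeholder 2D matches

# Assumed pinhole intrinsics of the endoscope (fx, fy, cx, cy are made up).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume undistorted images

# RANSAC-robust PnP: rotation (Rodrigues vector) and translation of the
# camera relative to the CT/anatomy frame.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist_coeffs,
    reprojectionError=3.0, flags=cv2.SOLVEPNP_ITERATIVE)

if ok:
    R, _ = cv2.Rodrigues(rvec)                       # 3x3 rotation matrix
    print("camera position:", (-R.T @ tvec).ravel())  # camera center in CT space
```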
Learning where to look: scaling Parkland grade prediction from surgical videos.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-08 | DOI: 10.1007/s11548-026-03691-3
Sreeram Kamabattula, Sue Kulason, Busisiwe Mlambo, Lilia Purvis, Kiran Bhattacharyya

Purpose: The Parkland Grading Scale (PGS) is widely used to quantify operative difficulty in cholecystectomy, with higher grades associated with worse post-operative outcomes. However, consistent, scalable PGS assessment is limited by reliance on two manual steps: determining where to look in the surgical video for key evidence, and assigning a grade. Previous machine learning approaches have either depended on manual selection of where to look or approximated it with fixed-duration video segments, leaving it unclear whether models can accurately predict PGS without explicit guidance on where to look.

Methods: To address this, we evaluate 287 robotic cholecystectomy videos annotated with PGS and a standardized key segment. Using a temporal convolutional network and an attention-based framework, we compare a fully automated model that uses full surgical videos without key-segment supervision against a model provided with the key segment (where to look).

Results: Providing the key segment yields substantial performance gains (weighted F1 +0.25 and Krippendorff's α (KA) +0.29). We further introduce ParkNet_LEARN, which learns where to look and predicts PGS from full surgical videos, achieving significant improvements over the no-supervision automation (weighted F1 +0.18 and KA +0.23) and a KA of 0.60, within 0.06 of the model provided with the key segment.

Conclusion: These findings highlight the importance of attending to where to look when automating operative difficulty assessment, and represent a valuable step toward supporting large-scale research on surgical performance and post-operative outcomes.

Citations: 0
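The "learns where to look" idea is commonly realized with attention pooling: a small network scores each frame's feature vector, and the softmax-normalized scores weight a video-level embedding for grade prediction. A minimal PyTorch sketch of that pattern (the dimensions and module are illustrative, not the paper's ParkNet_LEARN architecture):

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Scores each frame feature and pools a weighted video embedding."""
    def __init__(self, feat_dim: int, n_grades: int = 5):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(feat_dim, n_grades)  # PGS grades 1-5

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim) per-frame features from a backbone
        weights = torch.softmax(self.score(frames), dim=1)  # "where to look"
        video_emb = (weights * frames).sum(dim=1)           # (batch, feat_dim)
        return self.classifier(video_emb)

logits = AttentionPooling(256)(torch.randn(2, 900, 256))  # 2 videos, 900 frames
```

High attention weights then serve as an implicit key-segment prediction, which is what makes the supervised-versus-learned comparison in the paper meaningful.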
Cobotic drilling assistant for orthopedic surgery using Gaussian process-based breakthrough detection.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-08 | DOI: 10.1007/s11548-026-03679-z
Lucas Gimeno, Nils Johnson, Tobias Stauffer, Quentin Lohmeyer, Mirko Meboldt

Purpose: We introduce cobot-assisted cortical bone drilling, as applied in femoral or clavicular fractures. This task requires high accuracy, as minor deviations can cause complications through soft tissue penetration (STP). Human-robot collaboration (HRC) combines the surgeon's expertise with robotic stability and sensing accuracy: the robot assistant passively guides the drill and prevents soft tissue penetration by detecting breakthroughs via integrated force/torque and joint velocity sensors and stopping in place.

Methods: We present a robotic arm that passively guides the surgeon during drilling and actively stops upon breakthrough. The detection algorithm is online and model-free, relying on online Gaussian process (GP) regression to adapt to varying drilling conditions. The system was evaluated in a user study with an experimental group of N = 16 and a control group of N = 17 participants, with STP measured using a depth camera.

Results: Femoral, ulnar, and generic bone samples were drilled with initially sharp 3.2 mm drill bits. Across all bone types, the experimental group outperformed the control group by 6.79 mm (p < 0.001) and, for the generic bone case, undercut the STP reported for experienced surgeons (5.1 mm) by 2.54 mm (p < 0.02), even though only lay participants were tested in this study. The proposed method also outperformed baseline algorithms such as thresholding.

Conclusion: These findings provide evidence for the potential of HRC and model-free online breakthrough detection to increase safety in orthopedic drilling, and represent a promising step toward future clinical implementation and surgical training applications.

Citations: 0
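A GP-based breakthrough detector of this kind typically fits a regression model to the recent force-versus-depth signal and flags breakthrough when a new measurement falls far outside the predictive band (the thrust force collapses as the bit exits the cortex). A sketch with scikit-learn, assuming force and depth arrays are streamed in; the specific kernel, window, and threshold rule are assumptions, not the paper's exact criterion:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def breakthrough(depth: np.ndarray, force: np.ndarray,
                 new_depth: float, new_force: float, k: float = 4.0) -> bool:
    """Fit a GP to the recent drilling window and test the newest sample."""
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
        normalize_y=True)
    gp.fit(depth.reshape(-1, 1), force)
    mean, std = gp.predict(np.array([[new_depth]]), return_std=True)
    # Breakthrough: thrust force drops well below the predicted band.
    return new_force < mean[0] - k * std[0]

d = np.linspace(0.0, 8.0, 80)                    # drilled depth in mm (synthetic)
f = 12.0 + 0.5 * d + np.random.randn(80) * 0.3   # thrust force in N (synthetic)
print(breakthrough(d, f, new_depth=8.1, new_force=2.0))  # sudden drop -> True
```

Refitting on a sliding window is what gives the detector its model-free adaptation to bone type and bit wear.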
Couinaud segment-aware deep learning on point clouds for major liver resection planning.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-08 | DOI: 10.1007/s11548-026-03663-7
Joy Rakshit, Janine Rothert, Georg Hille, Tobias Huber, Hauke Lang, Rabea Margies, Florentine Huettl, Sylvia Saalfeld

Purpose: In this study, we address the problem of automatic liver resection planning for major surgical procedures, including hemi-hepatectomy and extended hemi-hepatectomy, using deep learning. Motivated by clinical practice, where Couinaud liver segments are routinely used to describe tumor location and guide surgical decision-making, we investigate whether incorporating this anatomical information can improve model performance and clinical relevance.

Methods: We propose a point cloud-based geometric deep learning approach, built on a modified RandLA-Net architecture, to predict liver resection zones. The model was trained and evaluated on 70 hemi-hepatectomy cases from Johannes Gutenberg University, Mainz, Germany (internal dataset). Two composite loss functions were evaluated: cross-entropy (CE) combined with intersection over union (IoU), and CE combined with Dice loss. For each loss function, models were trained with and without Couinaud segment information. Generalizability was assessed on an external dataset of 30 hemi-hepatectomy cases from the colorectal liver metastases (CRLM) cohort.

Results: Both loss functions achieved comparable performance across the evaluated datasets, with CE + IoU consistently outperforming CE + Dice. On the internal test set, incorporating Couinaud segment information increased the mean IoU from 0.787 to 0.804 and the F1-score from 0.864 to 0.870. A Wilcoxon signed-rank test on 15 paired cases confirmed a statistically significant improvement in mean IoU (p = 0.030), with 80% of cases showing improvement. On the external dataset, mean IoU improved from 0.666 to 0.702 and F1-score from 0.786 to 0.799 when Couinaud information was included. Excluding five anatomically complex cases, a Wilcoxon signed-rank test on the remaining 25 paired cases showed a significant improvement in mean IoU (p = 0.019), with 68% of cases demonstrating improved performance.

Conclusion: Explicit integration of Couinaud segment information improves both quantitative performance and clinical relevance in automatic major liver resection planning, particularly by better preserving critical vascular structures.

Citations: 0
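A CE + IoU composite loss adds a differentiable (soft) IoU term to cross-entropy so the network optimizes region overlap directly rather than only per-point likelihood. A minimal binary-case sketch in PyTorch (the soft-IoU formulation and equal weighting are assumptions; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def ce_iou_loss(logits: torch.Tensor, target: torch.Tensor,
                iou_weight: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Cross-entropy plus soft-IoU loss for binary resection-zone labels.

    logits: (N, 2) per-point class scores; target: (N,) in {0, 1}.
    """
    ce = F.cross_entropy(logits, target)
    prob = torch.softmax(logits, dim=1)[:, 1]   # P(point in resection zone)
    tgt = target.float()
    inter = (prob * tgt).sum()
    union = prob.sum() + tgt.sum() - inter
    soft_iou = (inter + eps) / (union + eps)
    return ce + iou_weight * (1.0 - soft_iou)   # minimize 1 - IoU

loss = ce_iou_loss(torch.randn(4096, 2), torch.randint(0, 2, (4096,)))
```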
Intraoperative fusion of models and data for robust distance sensing.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-08 | DOI: 10.1007/s11548-026-03615-1
Marius Briel, Ludwig Haide, Tobias Weber, Alain Jungo, Nicola Piccinelli, Gernot Kronreif, Franziska Mathis-Ullrich, Eleonora Tagliabue

Purpose: Instrument-integrated optical sensors are gaining popularity in microsurgery due to their ability to provide accurate measurements of instrument-to-tissue distances, enabling precise instrument control. However, obstructions in the optical path can result in measurement errors. In this work, we propose a method to improve the robustness of distance information from sensorized microsurgical instruments.

Methods: Our pipeline integrates a rapid search algorithm to identify relevant neighboring data points, as well as geometric and non-geometric techniques to accurately model the local tissue structure. Additionally, we fuse the measurement with the model to identify and overcome disturbances, e.g., obstructions from surgical instruments or semantic segmentation errors.

Results: Our simulation examines the effect of different modeling parameters and techniques on distance prediction, yielding a mean absolute error of less than 0.02 mm when using the local spline fit. Experiments in ex vivo human eyes show that our pipeline achieves up to 89% error reduction compared to the sensor alone.

Conclusion: Our method improves the reliability of instrument-integrated optical sensors. This work could enable distance-based instrument control in challenging conditions, thereby enhancing surgical precision in delicate ophthalmic procedures. Our approach can be generalized to any surgery with sensorized instruments and beyond.

Citations: 0
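A measurement-model fusion of this kind can be sketched as: fit a local model (e.g., a spline) through trusted neighboring distance samples, and fall back on the model prediction whenever the live sensor reading disagrees strongly with it, which indicates a likely obstruction. A SciPy sketch under those assumptions (the gating threshold and synthetic data are illustrative, not the authors' pipeline):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Neighboring samples along the scan path: position -> measured distance (mm).
x = np.linspace(0.0, 10.0, 50)
d = 2.0 + 0.1 * np.sin(x) + np.random.randn(50) * 0.005

spline = UnivariateSpline(x, d, k=3, s=0.01)  # local tissue-surface model

def fused_distance(x_new: float, sensed: float, gate: float = 0.2) -> float:
    """Trust the sensor unless it disagrees strongly with the local model."""
    predicted = float(spline(x_new))
    if abs(sensed - predicted) > gate:  # obstruction or segmentation error
        return predicted                # fall back on the model
    return sensed

print(fused_distance(9.5, sensed=2.05))  # plausible reading -> sensor value
print(fused_distance(9.5, sensed=0.30))  # implausible reading -> model value
```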
A Bayesian approach to temporal surgical segmentation model fusion.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-07 | DOI: 10.1007/s11548-026-03686-0
Max Berniker, Sreeram Kamabattula, Kiran Bhattacharyya

Purpose: Robotic-assisted surgery (RAS) generates vast amounts of video and robotic data, presenting opportunities for machine learning. In particular, video-based models are needed that can temporally segment frames by ontological categories such as procedure type, phase, step, and action. Training separate models for each category neglects statistical dependencies between categories and can yield incompatible predictions. Training large multi-category models may help, but increases complexity while reducing model modularity and interpretability.

Methods: We present a model fusion alternative: an effectively zero-free-parameter Bayesian model fusion technique. Incorporating the empirical conditional dependencies across categories and time, we combine predictions from multiple segmentation models into one joint Bayesian inference. The result is a Bayes-optimal distribution over all categories that evolves over time with accumulated evidence.

Results: On a large test set of hundreds of surgical cases comprising nearly eight million frames of annotated data, we found that fused predictions from the joint Bayesian model provide clear benefits over the individual models, correcting inconsistent and inaccurate predictions, and even forming accurate beliefs when evidence was absent.

Conclusion: The model we present is a lightweight, principled alternative to machine learning-based model fusion. A sufficiently complex model could be trained to produce the same results, but would effectively trade explainable predictions and minimal overhead for computational complexity and reduced transparency. We end by discussing how the same approach can be used to encompass larger, more sophisticated models within the same conceptual framework.

Citations: 0
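The fusion described above amounts to a recursive Bayes filter: per-category model outputs act as likelihoods, and empirical conditional and transition tables propagate a joint posterior through time. A toy numpy sketch over two categories (the ontology, tables, and likelihoods are made-up illustrations, not the paper's):

```python
import numpy as np

# Toy ontology: 2 phases x 3 steps. Assumed empirical, row-stochastic tables:
p_step_given_phase = np.array([[0.7, 0.2, 0.1],   # conditional dependency
                               [0.1, 0.3, 0.6]])
p_phase_trans = np.array([[0.95, 0.05],          # temporal dependency
                          [0.02, 0.98]])

# Prior over the joint (phase, step) state, shape (2, 3).
joint = p_step_given_phase * np.array([0.5, 0.5])[:, None]

def fuse(joint, phase_like, step_like):
    """One Bayes-filter update: predict with transitions, then weigh evidence."""
    predicted = p_phase_trans.T @ joint                      # temporal predict
    posterior = predicted * np.outer(phase_like, step_like)  # model likelihoods
    return posterior / posterior.sum()

# A frame where the step model is confident but the phase model is not:
joint = fuse(joint, phase_like=np.array([0.5, 0.5]),
             step_like=np.array([0.05, 0.15, 0.80]))
print(joint)  # mass shifts toward the phase that explains step 3
```

This is how a confident step prediction can correct an uncertain or inconsistent phase prediction, with no trainable fusion parameters beyond the empirical tables.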
SPARTAN: surgical peg-and-ring triplet and workflow anticipation benchmark.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-06 | DOI: 10.1007/s11548-026-03688-y
Federico Cunico, Michele Sandrini, Nicola Piccinelli, Riccardo Muradore

Purpose: Automation in robot-assisted surgery (RAS) requires not only accurate scene understanding but also real-time reasoning and action within dynamic surgical workflows. This work introduces SPARTAN, the Surgical Peg-And-Ring Triplet and Workflow ANticipation Benchmark, alongside a unified baseline for real-time surgical workflow analysis, for the first time jointly addressing surgical phase recognition, phase anticipation, and action triplet recognition. This integrated design bridges high-level workflow understanding with fine-grained, robot-action-level perception.

Methods: The SPARTAN benchmark is based on a modified peg-and-ring training task performed on the da Vinci Research Kit (dVRK), providing frame-level annotations of surgical phases and dual-arm action triplets that delineate initial, intermediate, and final workflow states.

Results: We demonstrate that our baseline achieves performance comparable to state-of-the-art methods across all three SPARTAN tasks while operating in real time. The benchmark offers complexity comparable to related datasets in terms of phase structure, number of videos, and triplet diversity, yet remains reproducible and directly applicable to physical robotic systems.

Conclusion: SPARTAN provides a practical foundation for developing and evaluating real-time perception and reasoning models in RAS.

Citations: 0
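An action triplet ties an instrument to a verb and a target, and each benchmark frame carries a phase label plus one triplet per arm. A minimal sketch of what such a frame-level record could look like (the field names and values are illustrative, not SPARTAN's released schema):

```python
from dataclasses import dataclass

@dataclass
class Triplet:
    instrument: str  # e.g., "left_gripper"
    verb: str        # e.g., "grasp"
    target: str      # e.g., "ring_blue"

@dataclass
class FrameAnnotation:
    frame_idx: int
    phase: str            # workflow phase label
    left_arm: Triplet     # dual-arm action triplets
    right_arm: Triplet

ann = FrameAnnotation(
    frame_idx=1042, phase="transfer",
    left_arm=Triplet("left_gripper", "hold", "ring_blue"),
    right_arm=Triplet("right_gripper", "reach", "peg_3"))
print(ann.phase, ann.right_arm.verb)
```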
Subsampled randomized Fourier GaLore for adapting foundation models in depth-driven liver landmark segmentation.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-06 | DOI: 10.1007/s11548-026-03674-4
Yun-Chen Lin, Jiayuan Huang, Hanyuan Zhang, Sergi Kavtaradze, Matthew J Clarkson, Mobarak I Hoque

Purpose: Accurate detection and delineation of anatomical structures in medical imaging are critical for computer-assisted interventions, particularly in laparoscopic liver surgery, where 2D video streams limit depth perception and complicate landmark localization. While recent works have leveraged monocular depth cues for enhanced landmark detection, challenges remain in fusing RGB and depth features and in efficiently adapting large-scale vision models to surgical domains.

Methods: We propose a depth-guided segmentation framework integrating semantic and geometric cues via dual foundation encoders: SAM2 for RGB and Depth Anything V2 for depth features. To efficiently adapt SAM2, we introduce SRFT-GaLore, a novel low-rank gradient projection method using the Subsampled Randomized Fourier Transform (SRFT). This enables efficient fine-tuning of high-dimensional attention layers without sacrificing representational power. A cross-attention fusion module further integrates RGB and depth cues. To assess cross-dataset generalization, the method was validated on the public L3D dataset and our new LLSD dataset.

Results: On the public L3D dataset, our method achieves a 4.85% improvement in Dice Similarity Coefficient (DSC) and an 11.78-point reduction in Average Symmetric Surface Distance (ASSD) compared to D2GPLand. To further assess generalization capability, we evaluate our model on the LLSD dataset, where it maintains competitive performance and significantly outperforms SAM-based baselines, demonstrating strong cross-dataset robustness and adaptability to unseen surgical environments.

Conclusion: The SRFT-GaLore-enhanced dual-encoder framework enables scalable, precise segmentation in depth-constrained surgical settings. Our findings highlight the potential of foundation model adaptation for real-time computer-assisted interventions. While the current cross-modal fusion remains shallow for efficiency, future work will explore transformer-based decoders and deeper attention mechanisms. Ultimately, this research provides a robust foundation for 3D-2D anatomical registration and AR-guided navigation in complex laparoscopic procedures.

Citations: 0
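A subsampled randomized Fourier transform compresses a tall matrix by randomly flipping the signs of its rows, mixing them with an FFT, and keeping a random subset of the transformed rows; in a GaLore-style optimizer, gradient statistics are then maintained in this much smaller space. A numpy sketch of the projection itself (the optimizer integration is omitted, and this simplified real-valued variant is an assumption, not the authors' released code):

```python
import numpy as np

def srft_project(grad: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Project an (m, n) gradient matrix to (k, n) with an SRFT sketch."""
    m = grad.shape[0]
    signs = rng.choice([-1.0, 1.0], size=m)       # random diagonal D
    mixed = np.fft.fft(signs[:, None] * grad,     # Fourier mixing F
                       axis=0, norm="ortho")
    rows = rng.choice(m, size=k, replace=False)   # row subsampling R
    # Rescale so the sketch is approximately norm-preserving in expectation;
    # keeping only the real part is a simplification of the complex sketch.
    return np.sqrt(m / k) * mixed[rows].real

rng = np.random.default_rng(0)
g = rng.standard_normal((4096, 512))  # e.g., an attention-layer weight gradient
g_low = srft_project(g, k=128, rng=rng)
print(g_low.shape)                    # (128, 512)
```

Because the FFT runs in O(mn log m) with no stored projection matrix, this avoids the SVD cost of the original GaLore projection.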
TREAT-Netv2: regional wall motion-informed video-tabular fusion for ACS treatment prediction.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-05 | DOI: 10.1007/s11548-026-03652-w
Diane Kim, Victoria Wu, Minh Nguyen Nhat To, Bahar Khodabahkshian, Nima Hashemi, Sherif Abdalla, Teresa S M Tsang, Purang Abolmaesumi, Christina Luong

Purpose: Acute coronary syndrome (ACS) is a major cause of cardiovascular mortality. While coronary angiography enables definitive diagnosis and intervention, its invasiveness and limited availability delay treatment, disproportionately affecting rural and remote communities. Development of noninvasive, predictive tools for early revascularization may improve triage and outcomes.

Methods: We propose TREAT-Netv2, a regional wall motion-informed video-tabular fusion network for ACS treatment prediction that integrates echocardiograms (echo) and electronic medical records. The model extracts regional wall motion features from echo sequences and applies transformer-based multiple instance learning to capture nuanced disease representations. TREAT-Netv2 does not require diagnostic details such as level of occlusion or ACS subtype, eliminating the need for additional procedures and improving its robustness.

Results: TREAT-Netv2 achieved an AUROC of 72.5% and a balanced accuracy of 68.6%, outperforming unimodal, multimodal, and state-of-the-art baselines. ACS subgroup analysis showed that TREAT-Netv2 achieved the highest accuracy for non-ST-elevation myocardial infarction and unstable angina (NSTEMI/UA) patients, the most clinically challenging cases, where the need for invasive intervention is often uncertain.

Conclusion: Through the complete elimination of ACS-specific diagnostic inputs and the incorporation of transformer-based fusion, TREAT-Netv2 enables noninvasive and resource-free ACS risk stratification, particularly in clinically ambiguous cases. Our code will be made publicly available at github.com/DeepRCL/TREAT-Netv2.

Citations: 0
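Transformer-based multiple instance learning of this flavor treats per-clip (or per-region) echo embeddings as instances in a bag, attends over them, and fuses the pooled representation with encoded tabular EMR features. A PyTorch sketch of that pattern (all dimensions and layer choices are assumptions, not TREAT-Netv2's configuration):

```python
import torch
import torch.nn as nn

class VideoTabularMIL(nn.Module):
    """Transformer MIL over echo instance embeddings fused with tabular data."""
    def __init__(self, inst_dim=256, tab_dim=32, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=inst_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Parameter(torch.zeros(1, 1, inst_dim))  # pooling token
        self.tab = nn.Sequential(nn.Linear(tab_dim, 64), nn.ReLU())
        self.head = nn.Linear(inst_dim + 64, n_classes)       # treat / no treat

    def forward(self, instances, tabular):
        # instances: (B, n_instances, inst_dim); tabular: (B, tab_dim)
        tok = self.cls.expand(instances.size(0), -1, -1)
        z = self.encoder(torch.cat([tok, instances], dim=1))[:, 0]  # pooled bag
        return self.head(torch.cat([z, self.tab(tabular)], dim=1))

model = VideoTabularMIL()
logits = model(torch.randn(2, 16, 256), torch.randn(2, 32))  # (2, 2)
```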
High-precision label-free virtual H&E staining of 3D holotomography using DAPI-guided conditional diffusion learning.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2026-05-05 | DOI: 10.1007/s11548-026-03651-x
Taeyoung Bak, Sangwook Kim, Daewoong Ahn, Hyun-Seok Min, Jimin Lee

Purpose: Conventional hematoxylin and eosin (H&E) staining is destructive and largely limited to two-dimensional sections. We aimed to develop a practical virtual staining method that produces H&E-like images from label-free three-dimensional holotomography (HT) while preserving nuclear morphology and requiring only HT at inference.

Methods: We designed a DAPI-guided conditional diffusion model with a shared encoder and two decoder heads (H&E and DAPI). During training, the model receives HT as the condition input and predicts diffusion noise for both the H&E and DAPI targets using mean-squared-error objectives. DAPI is used only during training, as nucleus-centric guidance. Data were acquired from the same tissue using a sequential protocol (HT imaging, then DAPI imaging, then H&E imaging). Because local nonlinear tissue deformation remains after global affine registration, HT-H&E pairs were treated as weakly paired, while HT-DAPI pairs provided stronger local correspondence.

Results: Compared with CycleGAN baselines, the proposed model produced more realistic nuclear morphology and better structural consistency. On held-out test tiles, DAPI-guided diffusion achieved lower FID and KID (FID: 9.2158; KID: 0.0091 ± 0.0035) than CycleGAN without DAPI (FID: 14.7447; KID: 0.0434 ± 0.0067).

Conclusions: Training-only DAPI guidance improves virtual H&E generation from label-free HT without requiring DAPI during inference. This weakly paired training design reduces dependence on expensive pixel-level registration and supports scalable, nondestructive digital histopathology workflows.

Citations: 0
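The shared-encoder, two-head training scheme can be sketched compactly: each head predicts the noise added to its own target under MSE, both losses are summed, and the DAPI head is simply dropped at inference. A toy 2D PyTorch sketch of one such training step (the tiny network, fixed noise level, and shapes are placeholders, not the paper's diffusion architecture or schedule):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoderTwoHeads(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Input: noisy target (3 ch) concatenated with the HT condition (1 ch).
        self.enc = nn.Sequential(nn.Conv2d(4, ch, 3, padding=1), nn.SiLU())
        self.head_he = nn.Conv2d(ch, 3, 3, padding=1)    # predicts H&E noise
        self.head_dapi = nn.Conv2d(ch, 3, 3, padding=1)  # training-only head

    def forward(self, x_noisy, ht):
        h = self.enc(torch.cat([x_noisy, ht], dim=1))
        return self.head_he(h), self.head_dapi(h)

model = SharedEncoderTwoHeads()
ht = torch.randn(2, 1, 64, 64)                       # label-free HT condition
he, dapi = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
noise_he, noise_dapi = torch.randn_like(he), torch.randn_like(dapi)
alpha = 0.7                                          # placeholder noise level
pred_he, _ = model(alpha**0.5 * he + (1 - alpha)**0.5 * noise_he, ht)
_, pred_dapi = model(alpha**0.5 * dapi + (1 - alpha)**0.5 * noise_dapi, ht)
loss = F.mse_loss(pred_he, noise_he) + F.mse_loss(pred_dapi, noise_dapi)
loss.backward()  # the DAPI term supplies nucleus-centric training guidance
```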