International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Robust prostate disease classification using transformers with discrete representations.
IF 2.3, CAS Tier 3 (Medicine)
Ainkaran Santhirasekaram, Mathias Winkler, Andrea Rockall, Ben Glocker
Pub Date: 2025-01-01 (Epub 2024-05-13). DOI: 10.1007/s11548-024-03153-8. Pages 11-20.
Purpose: Automated prostate disease classification on multi-parametric MRI has recently shown promising results with convolutional neural networks (CNNs). The vision transformer (ViT) is a convolution-free architecture that relies solely on the self-attention mechanism and has surpassed CNNs on some natural-image classification tasks. However, these models are not very robust to textural shifts in the input space, and in MRI textural shift often arises from varying acquisition protocols. Here, we focus on the ability of models to generalise to new magnet strengths.
Method: We propose a new framework that improves the robustness of vision-transformer-based models for disease classification by constructing discrete representations of the data using vector quantisation. We sample a subset of the discrete representations to form the input to a transformer-based model, and use cross-attention in our transformer model to combine the discrete representations of T2-weighted and apparent diffusion coefficient (ADC) images.
Results: We analyse the robustness of our model by training on a 1.5 T scanner and testing on a 3 T scanner, and vice versa. Our approach achieves state-of-the-art performance for classification of lesions on prostate MRI and outperforms various other CNN- and transformer-based models in robustness to domain shift and perturbations in the input space.
Conclusion: We develop a method that improves the robustness of transformer-based disease classification of prostate lesions on MRI using discrete representations of the T2-weighted and ADC images.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759462/pdf/
Citations: 0
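The core mechanism the abstract describes, replacing continuous feature embeddings with their nearest entries in a learned codebook via vector quantisation, can be sketched as follows. The codebook, embeddings, and dimensions below are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

def vector_quantise(z, codebook):
    """Map each continuous embedding in z (N, D) to its nearest
    codebook entry (K, D); returns quantised vectors and indices."""
    # squared Euclidean distance between every embedding and every code
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = d2.argmin(axis=1)                                          # (N,)
    return codebook[idx], idx

# toy example: four 2-D embeddings, codebook of three codes
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2], [4.8, 5.1], [1.1, 0.8]])
z_q, idx = vector_quantise(z, codebook)
```

Because the downstream transformer only ever sees codebook entries, small textural perturbations of the input that do not change the nearest code leave the representation unchanged, which is the intuition behind the robustness claim.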
6G in medical robotics: development of network allocation strategies for a telerobotic examination system.
Sven Kolb, Andrew Madden, Nicolai Kröger, Fidan Mehmeti, Franziska Jurosch, Lukas Bernhard, Wolfgang Kellerer, Dirk Wilhelm
Pub Date: 2025-01-01 (Epub 2024-09-09). DOI: 10.1007/s11548-024-03260-6. Pages 167-178.
Purpose: Healthcare systems around the world increasingly face severe challenges such as staff shortages, changing demographics, and reliance on strongly human-dependent workflows. One approach to addressing these issues is the development of new telemedicine applications. The currently researched 6G network standard promises many new features that could help leverage the full potential of emerging telemedical solutions and overcome the limitations of current network standards.
Methods: We developed a telerobotic examination system with a distributed robot-control infrastructure to investigate the benefits and challenges of distributed computing scenarios, such as fog computing, in medical applications. We investigate different software configurations, characterise their network traffic and computational loads, and subsequently establish network allocation strategies for different types of modular application functions (MAFs).
Results: The results indicate high variability in the usage profiles of these MAFs, both in computational load and networking behaviour, which in turn allows allocation strategies to be developed for different types of MAFs according to their requirements. Furthermore, the results provide a strong basis for further exploration of distributed computing scenarios in medical robotics.
Conclusion: This work lays the foundation for the development of medical robotic applications using 6G network architectures and distributed computing scenarios such as fog computing. In the future, we plan to investigate dynamically shifting MAFs within the network based on current situational demand, which could further optimise the performance of network-based medical applications and help address the increasingly critical challenges in healthcare.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759283/pdf/
Citations: 0
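As a rough illustration of the kind of requirement-driven allocation strategy the abstract describes, the greedy placement below assigns hypothetical MAFs to edge or cloud nodes by latency sensitivity and spare capacity. All node names, tiers, and resource figures are invented for this sketch and are not taken from the paper:

```python
def allocate_mafs(mafs, nodes):
    """Greedy allocation sketch: place latency-critical modular application
    functions (MAFs) on edge/fog nodes first, then fit the rest wherever
    enough spare CPU and bandwidth remain."""
    placement = {}
    for maf in sorted(mafs, key=lambda m: m["latency_critical"], reverse=True):
        if maf["latency_critical"]:
            # prefer nodes close to the robot; fall back to any node
            candidates = [n for n in nodes if n["tier"] in ("edge", "fog")] or nodes
        else:
            candidates = nodes
        for node in candidates:
            if node["cpu_free"] >= maf["cpu"] and node["bw_free"] >= maf["bw"]:
                node["cpu_free"] -= maf["cpu"]
                node["bw_free"] -= maf["bw"]
                placement[maf["name"]] = node["name"]
                break
    return placement

# invented example: one constrained edge node, one large cloud node
nodes = [
    {"name": "edge-1", "tier": "edge", "cpu_free": 2.0, "bw_free": 100.0},
    {"name": "cloud-1", "tier": "cloud", "cpu_free": 16.0, "bw_free": 1000.0},
]
mafs = [
    {"name": "video-processing", "cpu": 4.0, "bw": 200.0, "latency_critical": False},
    {"name": "robot-control", "cpu": 1.0, "bw": 10.0, "latency_critical": True},
]
placement = allocate_mafs(mafs, nodes)
```

The point of the sketch is only the shape of the decision: MAFs with very different usage profiles (as the paper measures) end up on different tiers of the network.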
Normscan: open-source Python software to create average models from CT scans.
George R Nahass, Mitchell A Marques, Naji Bou Zeid, Linping Zhao, Lee W T Alkureishi
Pub Date: 2025-01-01 (Epub 2024-05-20). DOI: 10.1007/s11548-024-03185-0. Pages 157-165.
Purpose: Age-matched average 3D models facilitate both surgical planning and intraoperative guidance for cranial birth defects such as craniosynostosis. We aimed to develop an algorithm that accepts any number of CT scans as input and generates highly accurate average models, with minimal user input, ready for 3D printing and clinical use.
Methods: Using a compiled database of 'normal' pediatric computed tomography (CT) scans, we report Normscan, an open-source platform built in Python that lets users generate normative models of CT scans through user-defined landmarks. We use the basion, nasion, and left and right porions as anatomical landmarks for initial correspondence, then register the models using the iterative closest point (ICP) algorithm before downstream averaging.
Results: Normscan is fast and easy to use via our user interface, and it creates highly accurate average models from any number of input models. It is also highly repeatable: the coefficients of variance for the surface area and volume of the average model were below 3% across ten independent trials. Average models can then be 3D printed and/or visualised in augmented reality.
Conclusions: Normscan provides an end-to-end pipeline for creating average skull models. These models can be used to build databases of demographic-specific anatomical models as well as for intraoperative guidance and surgical planning. While Normscan was designed for craniosynostosis repair, the modular nature of the algorithm gives it many applications in other areas of surgical planning and research.
Citations: 0
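The registration step the Normscan abstract outlines, initial correspondence from four anatomical landmarks followed by ICP refinement, rests on a least-squares rigid alignment. A minimal Kabsch-style solver for the landmark stage might look like this (the landmark coordinates below are synthetic stand-ins, not real basion/nasion/porion positions):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t
    (Kabsch algorithm), as used for landmark-based initial alignment."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# four synthetic landmarks, displaced by a known rigid motion
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 deg about z
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(src, dst)
```

ICP then alternates this same solve with nearest-neighbour correspondence over the full vertex sets, and the averaging step reduces to a vertex-wise mean once all models share one frame.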
Automated segmentation and deep learning classification of ductopenic parotid salivary glands in sialo cone-beam CT images.
Elia Halle, Tevel Amiel, Doron J Aframian, Tal Malik, Avital Rozenthal, Oren Shauly, Leo Joskowicz, Chen Nadler, Talia Yeshua
Pub Date: 2025-01-01 (Epub 2024-07-31). DOI: 10.1007/s11548-024-03240-w. Pages 21-30.
Purpose: This study addressed the challenge of detecting and classifying the severity of ductopenia in parotid glands, a structural abnormality characterised by a reduced number of salivary ducts and previously shown to be associated with salivary gland impairment. The aim was to develop an automatic algorithm that improves diagnostic accuracy and efficiency in analysing ductopenic parotid glands on sialo cone-beam CT (sialo-CBCT) images.
Methods: We developed an end-to-end automatic pipeline with three main steps: (1) region of interest (ROI) computation, (2) parotid gland segmentation using the Frangi filter, and (3) ductopenia case classification with a residual neural network (RNN) augmented by multidirectional maximum intensity projection (MIP) images. To explore the impact of the first two steps, the RNN was trained on three datasets: (1) original MIP images, (2) MIP images with predefined ROIs, and (3) MIP images after segmentation.
Results: Evaluation on 126 parotid sialo-CBCT scans of normal, moderate, and severe ductopenic cases yielded high performance: 100% for ROI computation and 89% for gland segmentation. Accuracy and F1 score improved from the original MIP images (accuracy 0.73, F1 0.53) to ROI-predefined images (accuracy 0.78, F1 0.56) to segmented images (accuracy 0.95, F1 0.90). Notably, ductopenia detection sensitivity was 0.99 on the segmented dataset, highlighting the algorithm's ability to detect ductopenic cases.
Conclusions: Our method, which combines classical image processing and deep learning techniques, offers a promising solution for automatic detection of parotid gland ductopenia in sialo-CBCT scans. It may support further research into the role of the presence and severity of ductopenia in salivary gland dysfunction.
Citations: 0
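The multidirectional maximum intensity projection (MIP) images that augment the classifier are simply axis-wise maxima over the 3-D volume. A minimal sketch on a toy volume (not sialo-CBCT data):

```python
import numpy as np

def multidirectional_mip(volume):
    """Maximum intensity projection of a 3-D volume along each axis,
    producing the 2-D images fed to the downstream classifier."""
    return [volume.max(axis=a) for a in range(3)]

# toy 2x2x2 "volume" with two bright voxels
vol = np.zeros((2, 2, 2))
vol[0, 1, 0] = 5.0
vol[1, 0, 1] = 3.0
mips = multidirectional_mip(vol)
```

For the vessel/duct-enhancing segmentation step, scikit-image ships an implementation of the Frangi filter as `skimage.filters.frangi`, though the abstract does not specify which implementation the authors used.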
Augmented reality for endoscopic transsphenoidal surgery: evaluating design factors with neurosurgeons.
Jennifer Higa, Sonia Nkatha, Roxana Ramirez Herrera, Hani Marcus, Soojeong Yoo, Ann Blandford, Jeremy Opie
Pub Date: 2025-01-01 (Epub 2024-07-26). DOI: 10.1007/s11548-024-03225-9. Pages 131-136.
Purpose: This study investigates the potential utility of augmented reality (AR) in the endoscopic transsphenoidal approach (TSA). While previous research has addressed technical challenges in AR for TSA, this paper explores how design factors can improve AR for neurosurgeons from a human-centred design perspective.
Methods: Preliminary qualitative research involved observations of TSA procedures (n = 2) and semi-structured interviews with neurosurgeons (n = 4). These informed the design of an AR mockup, which was evaluated with neurosurgeons (n = 3). An interactive low-fidelity prototype, "AR-assisted Navigation for the TransSphenoidal Approach (ANTSA)", was then developed in Unity 3D. A user study (n = 4) evaluated the low-fidelity ANTSA prototype through contextual interviews, providing feedback on design factors.
Results: AR visualisations may help streamline the sellar phase and reduce intraoperative errors such as excessive or inadequate exposure. Key design recommendations include a lean mesh rendering, an intuitive colour palette, and optional structure highlighting.
Conclusion: This research presents user-centred design guidelines to improve sensemaking and surgical workflow in the sellar phase of TSA, with the goal of improving clinical outcomes. The specific improvements AR could bring to the workflow are discussed, along with surgeons' reservations and possible applications in training less experienced physicians.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759473/pdf/
Citations: 0
A fully automatic fiducial detection and correspondence establishing method for online C-arm calibration.
Wenyuan Sun, Xiaoyang Zou, Guoyan Zheng
Pub Date: 2025-01-01 (Epub 2024-05-10). DOI: 10.1007/s11548-024-03162-7. Pages 43-55.
Purpose: Online C-arm calibration with a mobile fiducial cage plays an essential role in various image-guided interventions. However, developing a fully automatic approach is challenging: it requires not only accurate detection of the fiducial projections but also robust 2D-3D correspondence establishment.
Methods: We propose a novel approach to online C-arm calibration with a mobile fiducial cage. Specifically, we design a novel mobile calibration cage embedded with 16 fiducials, arranged to form four line patterns with different cross-ratios. An auto-context-based detection network (ADNet) then performs accurate and robust detection of the 2D projections of those fiducials in acquired C-arm images. Subsequently, a cross-ratio consistency-based 2D-3D correspondence method automatically matches the detected 2D fiducial projections with the 3D fiducials, allowing accurate online C-arm calibration.
Results: We designed and conducted comprehensive experiments to evaluate the proposed approach. For automatic detection of 2D fiducial projections, the proposed ADNet achieved a mean point-to-point distance of 0.65 ± 1.33 pixels. The proposed C-arm calibration approach achieved a mean re-projection error of 1.01 ± 0.63 pixels and a mean point-to-line distance of 0.22 ± 0.12 mm. When applied to downstream tasks involving landmark and surface model reconstruction, sub-millimetre accuracy was achieved.
Conclusion: We developed a novel approach to online C-arm calibration. Both qualitative and quantitative results of comprehensive experiments demonstrate its accuracy and robustness. Our approach holds potential for various image-guided interventions.
Citations: 0
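The cross-ratio underpinning the correspondence step is a projective invariant of four collinear points: it survives the projection from the 3-D cage onto the 2-D image, which is what lets each line pattern of fiducials be identified from its projection alone. A minimal illustration (scalar positions along a line, not the cage's actual geometry):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC) / (AD/BD) of four collinear points given as
    scalar positions along their line; invariant under projection."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# four fiducial positions along a line (illustrative values)
base = cross_ratio(0.0, 1.0, 2.0, 3.0)

# the same four points seen through a projective map x -> 1 / (x + 1)
proj = [1.0 / (x + 1.0) for x in (0.0, 1.0, 2.0, 3.0)]
projected = cross_ratio(*proj)
```

Because each of the four line patterns in the cage is built with a distinct cross-ratio, a detected quadruple of collinear 2-D points can be matched to its 3-D line by comparing this single number.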
Smart goggles augmented reality CT-US fusion compared to conventional fusion navigation for percutaneous needle insertion.
Tabea Borde, Laetitia Saccenti, Ming Li, Nicole A Varble, Lindsey A Hazen, Michael T Kassin, Ifechi N Ukeh, Keith M Horton, Jose F Delgado, Charles Martin, Sheng Xu, William F Pritchard, John W Karanian, Bradford J Wood
Pub Date: 2025-01-01 (Epub 2024-05-30). DOI: 10.1007/s11548-024-03148-5. Pages 107-115.
Purpose: Targeting accuracy determines outcomes for percutaneous needle interventions. Augmented reality (AR) in interventional radiology (IR) may improve procedural guidance and facilitate access to complex locations. This study evaluated percutaneous needle placement accuracy using a goggle-based AR system compared to an ultrasound (US)-based fusion navigation system.
Methods: Six interventional radiologists performed 24 independent needle placements in an anthropomorphic phantom (CIRS 057A) in four needle guidance cohorts (n = 6 each): (1) US-based fusion, (2) goggle-based AR with stereoscopically projected anatomy (AR-overlay), (3) goggle AR without the projection (AR-plain), and (4) CT-guided freehand. US-based fusion included US/CT registration with electromagnetic (EM) needle, transducer, and patient tracking. For AR-overlay, the US image, EM-tracked needle, and stereoscopic anatomical structures and targets were superimposed over the phantom. Needle placement accuracy (distance from needle tip to target centre), placement time (from skin puncture to final position), and procedure time (time to completion) were measured.
Results: Mean needle placement accuracy using US-based fusion, AR-overlay, AR-plain, and freehand was 4.5 ± 1.7 mm, 7.0 ± 4.7 mm, 4.7 ± 1.7 mm, and 9.2 ± 5.8 mm, respectively. AR-plain demonstrated accuracy comparable to US-based fusion (p = 0.7) and AR-overlay (p = 0.06). Excluding two outliers, AR-overlay accuracy became 5.9 ± 2.6 mm. US-based fusion had the highest mean placement time (44.3 ± 27.7 s) compared to all navigation cohorts (p < 0.001). The longest procedure times were recorded with AR-overlay (34 ± 10.2 min) compared to AR-plain (22.7 ± 8.6 min, p = 0.09), US-based fusion (19.5 ± 5.6 min, p = 0.02), and freehand (14.8 ± 1.6 min, p = 0.002).
Conclusion: Goggle-based AR showed no difference in needle placement accuracy compared to the commercially available US-based fusion navigation platform. Differences in accuracy and procedure times were apparent between display modes (with/without stereoscopic projections). AR-based projection of the US image and needle trajectory over the body may be a helpful tool for enhancing visuospatial orientation. This study thus refines the potential role of AR for needle placements and may serve as a catalyst for informed implementation of AR techniques in IR.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758159/pdf/
Citations: 0
Computational fluid dynamics and shape analysis enhance aneurysm rupture risk stratification.
Ivan Benemerito, Frederick Ewbank, Andrew Narracott, Maria-Cruz Villa-Uriol, Ana Paula Narata, Umang Patel, Diederik Bulters, Alberto Marzo
Pub Date: 2025-01-01 (Epub 2024-11-17). DOI: 10.1007/s11548-024-03289-7. Pages 31-41.
Purpose: Accurately quantifying the rupture risk of unruptured intracranial aneurysms (UIAs) is crucial for guiding treatment decisions and remains an unmet clinical challenge. Computational fluid dynamics (CFD) and morphological measurements have been shown to differ between ruptured and unruptured aneurysms, but it is unclear whether they provide additional information beyond routinely available clinical observations. This study therefore investigates whether incorporating image-derived features into the established PHASES score can improve the classification of aneurysm rupture status.
Methods: A cross-sectional dataset of 170 patients (78 with ruptured aneurysms) was used. CFD and shape analysis were performed on patients' images to extract additional features, which were combined with PHASES variables to develop five ridge-constrained logistic regression models for classifying rupture status. Correlation analysis and principal component analysis were employed for image-derived feature reduction. The dataset was split into training and validation subsets, and a ten-fold cross-validation strategy with grid-search optimisation and bootstrap resampling was adopted to determine the models' coefficients. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC).
Results: The logistic regression model based solely on PHASES achieved an AUC of 0.63. All models incorporating features derived from CFD and shape analysis performed better, reaching an AUC of 0.71. The non-sphericity index (shape variable) and maximum oscillatory shear index (CFD variable) were the strongest predictors of ruptured status.
Conclusion: This study demonstrates the benefit of integrating image-based fluid dynamics and shape analysis with clinical data to improve the classification accuracy of aneurysm rupture status. Further evaluation on longitudinal data is needed to assess the potential for clinical integration.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757871/pdf/
Citations: 0
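A toy version of the modelling comparison, ridge-penalised logistic regression scored by AUC with and without extra "image-derived" features, might look like the sketch below. The data are synthetic, the feature names purely illustrative, and the fitting is plain gradient descent rather than the paper's cross-validated pipeline:

```python
import numpy as np

def fit_ridge_logistic(X, y, lam=1.0, lr=0.1, iters=3000):
    """L2-penalised logistic regression fitted by gradient descent,
    a stand-in for the paper's ridge-constrained models."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) + lam * w) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def auc(scores, y):
    """AUC = probability a random positive outscores a random negative."""
    pos, neg = scores[y == 1], scores[y == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

# synthetic cohort: five "PHASES-like" clinical variables plus two
# image-derived features standing in for non-sphericity index and max OSI
rng = np.random.default_rng(0)
n = 400
X_clin = rng.normal(size=(n, 5))
X_img = rng.normal(size=(n, 2))
latent = 0.4 * X_clin[:, 0] + 1.2 * X_img[:, 0] + 1.0 * X_img[:, 1]
y = (latent + rng.normal(size=n) > 0).astype(float)

tr, te = np.arange(0, 300), np.arange(300, n)
w1, b1 = fit_ridge_logistic(X_clin[tr], y[tr])
auc_clin = auc(X_clin[te] @ w1 + b1, y[te])
X_all = np.hstack([X_clin, X_img])
w2, b2 = fit_ridge_logistic(X_all[tr], y[tr])
auc_all = auc(X_all[te] @ w2 + b2, y[te])
```

Because the synthetic outcome is driven mostly by the two image-derived columns, the clinical-only model scores near chance while the combined model scores clearly higher, mirroring the 0.63 versus 0.71 pattern the study reports.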
CARS 2025 Computer Assisted Radiology and Surgery - 40th Anniversary and reflections on the role of modelling and AI.
H U Lemke
Pub Date: 2025-01-01 (Epub 2024-12-23). DOI: 10.1007/s11548-024-03304-x. Pages 1-10.
Purpose: Drawing on CARS Congress events selected from its 40-year history, this editorial summarises the main challenges and solution concepts encountered, and what the future may hold for a model-centric world view in the specific domain of computer assisted radiology and surgery.
Methods: Altogether some 15,000 publications appeared in the CAR/CARS Congress Proceedings and Journal between 1985 and 2025, comprising approximately 3000 full papers and 12,000 long abstracts. Modelling was a central theme in many of these publications, particularly in the 2020s. Carefully selected statements from this period by CARS Congress Presidents, members of the CARS Organizing Committee, invited speakers, and authors show how the CARS community has contributed to the evolution of model-guided medicine.
Results: Judging from the modelling-related themes and their frequency in recent CARS Congress publications, there is an evolving focus on different aspects of model-guided medicine, such as methods and tools for situational models, process models, and network models for distributed (AI) model services in health care. This indicates the need for multicentre AI models and associated clinical translational studies, as well as model quality assurance, i.e. model verification, validation, and evaluation.
Conclusion: The modelling-related themes presented at recent CARS Congresses are a first modest attempt to raise awareness in the international community, particularly in radiology, surgery, and informatics, that modern modelling methods and tools can bring fundamental changes to health care. There is still a long way to go, as awareness of model-guided medicine is only gradually increasing. Future capabilities and acceptance of model-guided medicine depend strongly on human-curated modelling methods and tools, as well as on effective human-computer interaction.
Citations: 0
Neural patient-specific 3D-2D registration in laparoscopic liver resection.
Islem Mhiri, Daniel Pizarro, Adrien Bartoli
Pub Date: 2025-01-01 (Epub 2024-07-16). DOI: 10.1007/s11548-024-03231-x. Pages 57-64.
Purpose: Augmented reality guidance in laparoscopic liver resection requires registering a preoperative 3D model to the intraoperative 2D image. However, 3D-2D liver registration poses challenges owing to the liver's flexibility, particularly under the limited visibility conditions of laparoscopy. Although promising, current registration methods are computationally expensive and often require manual initialisation.
Methods: We propose the first neural model (NM) that predicts the registration, represented as 3D model deformation coefficients, from image landmarks. The strategy is to train a patient-specific model on synthetic data generated automatically from the patient's preoperative model. A liver shape modelling technique that further reduces time complexity is also proposed.
Results: The NM method was evaluated using the target registration error measure, showing accuracy on par with existing methods, all of which are based on numerical optimisation. Notably, NM runs much faster, offering the possibility of real-time inference, a significant step forward in this field.
Conclusion: The proposed method is the first neural method for 3D-2D liver registration. Preliminary experimental findings show performance comparable to existing methods with superior computational efficiency. These results suggest the potential to deeply impact liver registration techniques.
Citations: 0
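The idea of predicting deformation coefficients directly from image landmarks can be illustrated with a linear stand-in for the neural model: a low-dimensional deformation basis plus a least-squares regressor from landmarks to coefficients, trained on synthetic pairs as the abstract describes. Everything below (the basis, the fake landmark generator A, the sizes) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_verts, n_coeffs, n_landmarks = 50, 4, 6

# hypothetical deformation model: mean shape plus a small linear basis
mean_shape = rng.normal(size=(n_verts, 3))
basis = 0.1 * rng.normal(size=(n_coeffs, n_verts, 3))

def deform(coeffs):
    """Reconstruct the deformed 3-D model from deformation coefficients."""
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

# stand-in for training on synthetic data: sample coefficient vectors,
# generate their 2-D landmarks through a fixed (fake) projection map A,
# and fit a least-squares regressor landmarks -> coefficients
A = rng.normal(size=(n_coeffs, 2 * n_landmarks))
C_train = rng.normal(size=(200, n_coeffs))
L_train = C_train @ A
W, *_ = np.linalg.lstsq(L_train, C_train, rcond=None)

# "intraoperative" landmarks from an unseen deformation map straight
# back to coefficients, then to the deformed model -- no optimisation loop
coeffs_true = rng.normal(size=n_coeffs)
coeffs_pred = (coeffs_true @ A) @ W
model = deform(coeffs_pred)
```

The speed argument is visible even in this toy: inference is a single matrix product per frame, whereas the optimisation-based baselines the paper compares against must iterate at run time.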