International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Intraoperative adaptive eye model based on instrument-integrated OCT for robot-assisted vitreoretinal surgery.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-08 DOI: 10.1007/s11548-025-03325-0
Marius Briel, Ludwig Haide, Maximilian Hess, Jan Schimmelpfennig, Philipp Matten, Rebekka Peter, Matthias Hillenbrand, Eleonora Tagliabue, Franziska Mathis-Ullrich
{"title":"Intraoperative adaptive eye model based on instrument-integrated OCT for robot-assisted vitreoretinal surgery.","authors":"Marius Briel, Ludwig Haide, Maximilian Hess, Jan Schimmelpfennig, Philipp Matten, Rebekka Peter, Matthias Hillenbrand, Eleonora Tagliabue, Franziska Mathis-Ullrich","doi":"10.1007/s11548-025-03325-0","DOIUrl":"https://doi.org/10.1007/s11548-025-03325-0","url":null,"abstract":"<p><strong>Purpose: </strong>Pars plana vitrectomy (PPV) is the most common surgical procedure performed by retinal specialists, highlighting the need for model-based assistance and automation in surgical treatment. An intraoperative retinal model provides precise anatomical information relative to the surgical instrument, enhancing surgical precision and safety.</p><p><strong>Methods: </strong>This work focuses on the intraoperative parametrization of retinal shape using 1D instrument-integrated optical coherence tomography distance measurements combined with a surgical robot. Our approach accommodates variability in eye geometries by transitioning from an initial spherical model to an ellipsoidal representation, improving accuracy as more data is collected through sensor motion.</p><p><strong>Results: </strong>We demonstrate that ellipsoid fitting outperforms sphere fitting for regular eye shapes, achieving a mean absolute error of less than 40  <math><mrow><mi>μ</mi> <mtext>m</mtext></mrow> </math> in simulation and below 200  <math><mrow><mi>μ</mi> <mtext>m</mtext></mrow> </math> on 3D printed models and ex vivo porcine eyes. The model reliably transitions from a spherical to an ellipsoidal representation across all six tested eye shapes when specific criteria are satisfied.</p><p><strong>Conclusion: </strong>The adaptive eye model developed in this work meets the accuracy requirements for clinical application in PPV within the central retina. Additionally, the global model effectively extrapolates beyond the scanned area to encompass the retinal periphery.This capability enhances PPV procedures, particularly through virtual boundary assistance and improved surgical navigation, ultimately contributing to safer surgical outcomes.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143374209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
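The abstract does not spell out the fitting procedure, but the sphere-to-ellipsoid transition it describes can be illustrated with linear least-squares fits to surface points. The sketch below is a hypothetical minimal version under strong assumptions: an axis-aligned ellipsoid parametrization, points already expressed in eye-centered 3D coordinates, and synthetic semi-axes near typical eye dimensions; it is not the authors' implementation.

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit via |p|^2 = 2 c.p + (r^2 - |c|^2)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    return center, np.sqrt(k + center @ center)

def fit_ellipsoid(pts):
    """Least-squares fit of an axis-aligned ellipsoid:
    A x^2 + B y^2 + C z^2 + D x + E y + F z = 1."""
    x, y, z = pts.T
    M = np.column_stack([x**2, y**2, z**2, x, y, z])
    coef, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    A, B, C, D, E, F = coef
    center = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
    G = 1 + D**2 / (4 * A) + E**2 / (4 * B) + F**2 / (4 * C)
    return center, np.sqrt(G / np.array([A, B, C]))  # center, semi-axes

# Synthetic retina-like surface: semi-axes 12.0 x 11.5 x 11.0 mm.
rng = np.random.default_rng(0)
u, v = rng.uniform(0, np.pi, 500), rng.uniform(0, 2 * np.pi, 500)
pts = np.array([12.0, 11.5, 11.0]) * np.column_stack(
    [np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)])
print(fit_sphere(pts))     # coarse spherical initialization
print(fit_ellipsoid(pts))  # refined ellipsoidal model
```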
A deep learning-driven method for safe and effective ERCP cannulation.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-07 DOI: 10.1007/s11548-025-03329-w
Yuying Liu, Xin Chen, Siyang Zuo
{"title":"A deep learning-driven method for safe and effective ERCP cannulation.","authors":"Yuying Liu, Xin Chen, Siyang Zuo","doi":"10.1007/s11548-025-03329-w","DOIUrl":"https://doi.org/10.1007/s11548-025-03329-w","url":null,"abstract":"<p><strong>Purpose: </strong>In recent years, the detection of the duodenal papilla and surgical cannula has become a critical task in computer-assisted endoscopic retrograde cholangiopancreatography (ERCP) cannulation operations. The complex surgical anatomy, coupled with the small size of the duodenal papillary orifice and its high similarity to the background, poses significant challenges to effective computer-assisted cannulation. To address these challenges, we present a deep learning-driven graphical user interface (GUI) to assist ERCP cannulation.</p><p><strong>Methods: </strong>Considering the characteristics of the ERCP scenario, we propose a deep learning method for duodenal papilla and surgical cannula detection, utilizing four swin transformer decoupled heads (4STDH). Four different prediction heads are employed to detect objects of different sizes. Subsequently, we integrate the swin transformer module to identify attention regions to explore prediction potential deeply. Moreover, we decouple the classification and regression networks, significantly improving the model's accuracy and robustness through the separation prediction. Simultaneously, we introduce a dataset on papilla and cannula (DPAC), consisting of 1840 annotated endoscopic images, which will be publicly available. We integrated 4STDH and several state-of-the-art methods into the GUI and compared them.</p><p><strong>Results: </strong>On the DPAC dataset, 4STDH outperforms state-of-the-art methods with an mAP of 93.2% and superior generalization performance. Additionally, the GUI provides real-time positions of the papilla and cannula, along with the planar distance and direction required for the cannula to reach the cannulation position.</p><p><strong>Conclusion: </strong>We validate the GUI's performance in human gastrointestinal endoscopic videos, showing deep learning's potential to enhance the safety and efficiency of clinical ERCP cannulation.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
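The GUI's guidance output (planar distance and direction from the cannula to the cannulation position) reduces to simple 2D geometry once the two detections are available. Below is a hypothetical sketch assuming the detector returns (x1, y1, x2, y2) pixel boxes; the box format and the screen-angle convention are assumptions, not the paper's interface.

```python
import math

def box_center(box):
    """Center (cx, cy) of an (x1, y1, x2, y2) bounding box in pixels."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def cannulation_guidance(papilla_box, cannula_box):
    """Planar distance (pixels) and direction (degrees) from the
    detected cannula toward the detected papillary orifice."""
    px, py = box_center(papilla_box)
    cx, cy = box_center(cannula_box)
    dx, dy = px - cx, py - cy
    # Flip y so 0 deg points right and 90 deg points up on screen.
    return math.hypot(dx, dy), math.degrees(math.atan2(-dy, dx))

# Hypothetical detections from the network.
d, ang = cannulation_guidance((310, 220, 360, 270), (120, 400, 180, 460))
print(f"advance {d:.0f} px toward {ang:.0f} deg")
```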
German surgeons' perspective on the application of artificial intelligence in clinical decision-making.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-05 DOI: 10.1007/s11548-025-03326-z
Jonas Henn, Tijs Vandemeulebroucke, Simon Hatterscheidt, Jonas Dohmen, Jörg C Kalff, Aimee van Wynsberghe, Hanno Matthaei
{"title":"German surgeons' perspective on the application of artificial intelligence in clinical decision-making.","authors":"Jonas Henn, Tijs Vandemeulebroucke, Simon Hatterscheidt, Jonas Dohmen, Jörg C Kalff, Aimee van Wynsberghe, Hanno Matthaei","doi":"10.1007/s11548-025-03326-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03326-z","url":null,"abstract":"<p><strong>Purpose: </strong>Artificial intelligence (AI) is transforming clinical decision-making (CDM). This application of AI should be a conscious choice to avoid technological determinism. The surgeons' perspective is needed to guide further implementation.</p><p><strong>Methods: </strong>We conducted an online survey among German surgeons, focusing on digitalization and AI in CDM, specifically for acute abdominal pain (AAP). The survey included Likert items and scales.</p><p><strong>Results: </strong>We analyzed 263 responses. Seventy-one percentage of participants were male, with a median age of 49 years (IQR 41-57). Seventy-three percentage of participants carried out a senior role, with a median of 22 years of work experience (IQR 13-28). AI in CDM was seen as helpful for workload management (48%) but not for preventing unnecessary treatments (32%). Safety (95%), evidence (94%), and usability (96%) were prioritized over costs (43%) for the implementation. Concerns included the loss of practical CDM skills (81%) and ethical issues like transparency (52%), patient trust (45%), and physician integrity (44%). Traditional CDM for AAP was seen as experience-based (93%) and not standardized (31%), whereas AI was perceived to assist with urgency triage (60%) and resource management (59%). On median, generation Y showed more confidence in AI for CDM (P = 0.001), while participants working in primary care hospitals were less confident (P = 0.021).</p><p><strong>Conclusion: </strong>Participants saw the potential of AI for organizational tasks but are hesitant about its use in CDM. Concerns about trust and performance need to be addressed through education and critical evaluation. In the future, AI might provide sufficient decision support but will not replace the human component.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
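As a hedged illustration of the generational comparison in the Results, the snippet below applies a Mann-Whitney U test to synthetic 5-point Likert responses. The abstract does not name the statistical test used, and every number here is a fabricated placeholder.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
gen_y = rng.integers(2, 6, size=60)   # hypothetical Likert scores, gen Y
older = rng.integers(1, 5, size=120)  # hypothetical scores, older cohorts

stat, p = mannwhitneyu(gen_y, older, alternative="two-sided")
print(f"median gen Y = {np.median(gen_y):.1f}, "
      f"median others = {np.median(older):.1f}, U = {stat:.0f}, p = {p:.4f}")
```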
Multi-modal dataset creation for federated learning with DICOM-structured reports.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-03 DOI: 10.1007/s11548-025-03327-y
Malte Tölle, Lukas Burger, Halvar Kelm, Florian André, Peter Bannas, Gerhard Diller, Norbert Frey, Philipp Garthe, Stefan Groß, Anja Hennemuth, Lars Kaderali, Nina Krüger, Andreas Leha, Simon Martin, Alexander Meyer, Eike Nagel, Stefan Orwat, Clemens Scherer, Moritz Seiffert, Jan Moritz Seliger, Stefan Simm, Tim Friede, Tim Seidler, Sandy Engelhardt
{"title":"Multi-modal dataset creation for federated learning with DICOM-structured reports.","authors":"Malte Tölle, Lukas Burger, Halvar Kelm, Florian André, Peter Bannas, Gerhard Diller, Norbert Frey, Philipp Garthe, Stefan Groß, Anja Hennemuth, Lars Kaderali, Nina Krüger, Andreas Leha, Simon Martin, Alexander Meyer, Eike Nagel, Stefan Orwat, Clemens Scherer, Moritz Seiffert, Jan Moritz Seliger, Stefan Simm, Tim Friede, Tim Seidler, Sandy Engelhardt","doi":"10.1007/s11548-025-03327-y","DOIUrl":"https://doi.org/10.1007/s11548-025-03327-y","url":null,"abstract":"<p><p>Purpose Federated training is often challenging on heterogeneous datasets due to divergent data storage options, inconsistent naming schemes, varied annotation procedures, and disparities in label quality. This is particularly evident in the emerging multi-modal learning paradigms, where dataset harmonization including a uniform data representation and filtering options are of paramount importance.Methods DICOM-structured reports enable the standardized linkage of arbitrary information beyond the imaging domain and can be used within Python deep learning pipelines with highdicom. Building on this, we developed an open platform for data integration with interactive filtering capabilities, thereby simplifying the process of creation of patient cohorts over several sites with consistent multi-modal data.Results In this study, we extend our prior work by showing its applicability to more and divergent data types, as well as streamlining datasets for federated training within an established consortium of eight university hospitals in Germany. We prove its concurrent filtering ability by creating harmonized multi-modal datasets across all locations for predicting the outcome after minimally invasive heart valve replacement. The data include imaging and waveform data (i.e., computed tomography images, electrocardiography scans) as well as annotations (i.e., calcification segmentations, and pointsets), and metadata (i.e., prostheses and pacemaker dependency).Conclusion Structured reports bridge the traditional gap between imaging systems and information systems. Utilizing the inherent DICOM reference system arbitrary data types can be queried concurrently to create meaningful cohorts for multi-centric data analysis. The graphical interface as well as example structured report templates are available at https://github.com/Cardio-AI/fl-multi-modal-dataset-creation .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143081992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
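The platform builds on DICOM structured reports, whose content is a tree of coded items that can be queried for cohort filtering. The sketch below shows generic SR-tree traversal with pydicom over a tiny in-memory stand-in; it is not the authors' platform or the highdicom API, and the code values are placeholders.

```python
from pydicom.dataset import Dataset

def coded(meaning):
    """Minimal coded-concept item (code value and scheme are placeholders)."""
    c = Dataset()
    c.CodeValue, c.CodingSchemeDesignator, c.CodeMeaning = "0000", "99X", meaning
    return c

def walk_sr_content(item, depth=0):
    """Recursively print an SR content tree: each item pairs a concept
    name code with a typed value (CONTAINER, TEXT, NUM, ...)."""
    name = (item.ConceptNameCodeSequence[0].CodeMeaning
            if "ConceptNameCodeSequence" in item else "")
    vt = getattr(item, "ValueType", "")
    value = item.TextValue if vt == "TEXT" else ""
    print("  " * depth + f"{vt}: {name} {value}".rstrip())
    for child in getattr(item, "ContentSequence", []):
        walk_sr_content(child, depth + 1)

# Tiny in-memory stand-in for an SR content tree (not a valid SR file).
root = Dataset()
root.ValueType = "CONTAINER"
root.ConceptNameCodeSequence = [coded("Cardiac Procedure Report")]
leaf = Dataset()
leaf.ValueType = "TEXT"
leaf.ConceptNameCodeSequence = [coded("Pacemaker Dependency")]
leaf.TextValue = "yes"
root.ContentSequence = [leaf]
walk_sr_content(root)
```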
DenseSeg: joint learning for semantic segmentation and landmark detection using dense image-to-shape representation.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-01-23 DOI: 10.1007/s11548-024-03315-8
Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich
{"title":"DenseSeg: joint learning for semantic segmentation and landmark detection using dense image-to-shape representation.","authors":"Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich","doi":"10.1007/s11548-024-03315-8","DOIUrl":"https://doi.org/10.1007/s11548-024-03315-8","url":null,"abstract":"<p><strong>Purpose: </strong>Semantic segmentation and landmark detection are fundamental tasks of medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new-state-of-the-art for segmentation, it falls short in landmark detection, a strength of shape-based approaches.</p><p><strong>Methods: </strong>In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation by employing a fully convolutional architecture. Our method intuitively allows the extraction of arbitrary landmarks due to its representation of anatomical correspondences. We benchmark our method against the state-of-the-art for semantic segmentation (nnUNet), a shape-based approach employing geometric deep learning and a convolutional neural network-based method for landmark detection.</p><p><strong>Results: </strong>We evaluate our method on two medical datasets: one common benchmark featuring the lungs, heart, and clavicle from thorax X-rays, and another with 17 different bones in the paediatric wrist. While our method is on par with the landmark detection baseline in the thorax setting (error in mm of <math><mrow><mn>2.6</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> vs. <math><mrow><mn>2.7</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> ), it substantially surpassed it in the more complex wrist setting ( <math><mrow><mn>1.1</mn> <mo>±</mo> <mn>0.6</mn></mrow> </math> vs. <math><mrow><mn>1.9</mn> <mo>±</mo> <mn>0.5</mn></mrow> </math> ).</p><p><strong>Conclusion: </strong>We demonstrate that dense geometric shape representation is beneficial for challenging landmark detection tasks and outperforms previous state-of-the-art using heatmap regression. While it does not require explicit training on the landmarks themselves, allowing for the addition of new landmarks without necessitating retraining.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
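One way to read "dense image-to-shape representation" is a per-pixel prediction of canonical shape coordinates, from which any landmark with a known canonical coordinate can be looked up after the fact, which would explain why new landmarks need no retraining. The toy sketch below implements that reading; it is an interpretation, not the paper's actual architecture.

```python
import numpy as np

def landmark_from_uv(uv_map, mask, landmark_uv):
    """Pick the foreground pixel whose predicted canonical shape
    coordinate is closest to the landmark's canonical coordinate.

    uv_map:      (H, W, 2) dense image-to-shape coordinates
    mask:        (H, W) boolean organ segmentation
    landmark_uv: (2,) canonical coordinate of the landmark
    """
    dist = np.linalg.norm(uv_map - landmark_uv, axis=-1)
    dist[~mask] = np.inf  # restrict the lookup to the segmented organ
    return np.unravel_index(np.argmin(dist), dist.shape)

# Toy data: a smooth synthetic coordinate field over a 64x64 image.
H = W = 64
ys, xs = np.mgrid[0:H, 0:W]
uv = np.stack([ys / H, xs / W], axis=-1)
mask = (ys > 10) & (ys < 50) & (xs > 10) & (xs < 50)
print(landmark_from_uv(uv, mask, np.array([0.5, 0.5])))  # -> (32, 32)
```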
Volume and quality of the gluteal muscles are associated with early physical function after total hip arthroplasty.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-01-21 DOI: 10.1007/s11548-025-03321-4
Makoto Iwasa, Keisuke Uemura, Mazen Soufi, Yoshito Otake, Tomofumi Kinoshita, Tatsuhiko Kutsuna, Kazuma Takashima, Hidetoshi Hamada, Yoshinobu Sato, Nobuhiko Sugano, Seiji Okada, Masaki Takao
{"title":"Volume and quality of the gluteal muscles are associated with early physical function after total hip arthroplasty.","authors":"Makoto Iwasa, Keisuke Uemura, Mazen Soufi, Yoshito Otake, Tomofumi Kinoshita, Tatsuhiko Kutsuna, Kazuma Takashima, Hidetoshi Hamada, Yoshinobu Sato, Nobuhiko Sugano, Seiji Okada, Masaki Takao","doi":"10.1007/s11548-025-03321-4","DOIUrl":"https://doi.org/10.1007/s11548-025-03321-4","url":null,"abstract":"<p><strong>Purpose: </strong>Identifying muscles linked to postoperative physical function can guide protocols to enhance early recovery following total hip arthroplasty (THA). This study aimed to evaluate the association of preoperative pelvic and thigh muscle volume and quality with early physical function after THA in patients with unilateral hip osteoarthritis (HOA).</p><p><strong>Methods: </strong>Preoperative Computed tomography (CT) images of 61 patients (eight males and 53 females) with HOA were analyzed. Six muscle groups were segmented from CT images, and muscle volume and quality were calculated on the healthy and affected sides. Muscle quality was quantified using the mean CT values (Hounsfield units [HU]). Early postoperative physical function was evaluated using the Timed Up & Go test (TUG) at three weeks after THA. The effect of preoperative muscle volume and quality of both sides on early postoperative physical function was assessed.</p><p><strong>Results: </strong>On the healthy and affected sides, mean muscle mass was 9.7 cm<sup>3</sup>/kg and 8.1 cm<sup>3</sup>/kg, and mean muscle HU values were 46.0 HU and 39.1 HU, respectively. Significant differences in muscle volume and quality were observed between the affected and healthy sides. On analyzing the function of various muscle groups, the TUG score showed a significant association with the gluteus maximum volume and the gluteus medius/minimus quality on the affected side.</p><p><strong>Conclusion: </strong>Patients with HOA showed significant muscle atrophy and fatty degeneration in the affected pelvic and thigh regions. The gluteus maximum volume and gluteus medius/minimus quality were associated with early postoperative physical function. Preoperative rehabilitation targeting the gluteal muscles on the affected side could potentially enhance recovery of physical function in the early postoperative period.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
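The two muscle measures used in this study, volume normalized by body weight (cm³/kg) and quality as mean CT attenuation (HU), follow directly from a CT volume plus a segmentation mask. A minimal sketch on synthetic data is below; the spacing, HU distribution, and mask are invented placeholders.

```python
import numpy as np

def muscle_metrics(ct_hu, mask, spacing_mm, body_weight_kg):
    """Weight-normalized volume (cm^3/kg) and mean HU of a segmented muscle.

    ct_hu:      3D CT volume in Hounsfield units
    mask:       3D boolean segmentation of the muscle group
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres
    """
    voxel_cm3 = np.prod(spacing_mm) / 1000.0  # mm^3 -> cm^3
    volume_cm3 = mask.sum() * voxel_cm3
    return volume_cm3 / body_weight_kg, ct_hu[mask].mean()

# Synthetic stand-in: muscle-like HU values inside a box-shaped mask.
rng = np.random.default_rng(1)
ct = rng.normal(40.0, 15.0, size=(40, 128, 128))
mask = np.zeros(ct.shape, dtype=bool)
mask[10:30, 40:90, 40:90] = True
print(muscle_metrics(ct, mask, spacing_mm=(3.0, 0.8, 0.8), body_weight_kg=60))
```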
Perfusion estimation from dynamic non-contrast computed tomography using self-supervised learning and a physics-inspired U-net transformer architecture.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-01-20 DOI: 10.1007/s11548-025-03323-2
Yi-Kuan Liu, Jorge Cisneros, Girish Nair, Craig Stevens, Richard Castillo, Yevgeniy Vinogradskiy, Edward Castillo
{"title":"Perfusion estimation from dynamic non-contrast computed tomography using self-supervised learning and a physics-inspired U-net transformer architecture.","authors":"Yi-Kuan Liu, Jorge Cisneros, Girish Nair, Craig Stevens, Richard Castillo, Yevgeniy Vinogradskiy, Edward Castillo","doi":"10.1007/s11548-025-03323-2","DOIUrl":"https://doi.org/10.1007/s11548-025-03323-2","url":null,"abstract":"<p><strong>Purpose: </strong>Pulmonary perfusion imaging is a key lung health indicator with clinical utility as a diagnostic and treatment planning tool. However, current nuclear medicine modalities face challenges like low spatial resolution and long acquisition times which limit clinical utility to non-emergency settings and often placing extra financial burden on the patient. This study introduces a novel deep learning approach to predict perfusion imaging from non-contrast inhale and exhale computed tomography scans (IE-CT).</p><p><strong>Methods: </strong>We developed a U-Net Transformer architecture modified for Siamese IE-CT inputs, integrating insights from physical models and utilizing a self-supervised learning strategy tailored for lung function prediction. We aggregated 523 IE-CT images from nine different 4DCT imaging datasets for self-supervised training, aiming to learn a low-dimensional IE-CT feature space by reconstructing image volumes from random data augmentations. Supervised training for perfusion prediction used this feature space and transfer learning on a cohort of 44 patients who had both IE-CT and single-photon emission CT (SPECT/CT) perfusion scans.</p><p><strong>Results: </strong>Testing with random bootstrapping, we estimated the mean and standard deviation of the spatial Spearman correlation between our predictions and the ground truth (SPECT perfusion) to be 0.742 ± 0.037, with a mean median correlation of 0.792 ± 0.036. These results represent a new state-of-the-art accuracy for predicting perfusion imaging from non-contrast CT.</p><p><strong>Conclusion: </strong>Our approach combines low-dimensional feature representations of both inhale and exhale images into a deep learning model, aligning with previous physical modeling methods for characterizing perfusion from IE-CT. This likely contributes to the high spatial correlation with ground truth. With further development, our method could provide faster and more accurate lung function imaging, potentially expanding its clinical applications beyond what is currently possible with nuclear medicine.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
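The Results summarize accuracy as a bootstrapped mean and standard deviation of the spatial Spearman correlation; the sketch below reproduces that evaluation step on synthetic flattened perfusion maps. What exactly was resampled (voxels, patients, or otherwise) is not stated in the abstract, so per-voxel resampling here is an assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(pred, truth, n_boot=200, seed=0):
    """Mean +/- SD of Spearman correlation over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n, rhos = len(pred), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        rho, _ = spearmanr(pred[idx], truth[idx])
        rhos.append(rho)
    rhos = np.asarray(rhos)
    return rhos.mean(), rhos.std()

# Synthetic stand-ins for flattened predicted and SPECT perfusion maps.
rng = np.random.default_rng(2)
truth = rng.random(2000)
pred = truth + rng.normal(0.0, 0.3, size=2000)  # imperfect prediction
print(bootstrap_spearman(pred, truth))
```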
Attention-guided erasing for enhanced transfer learning in breast abnormality classification.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-01-15 DOI: 10.1007/s11548-024-03317-6
Adarsh Bhandary Panambur, Sheethal Bhat, Hui Yu, Prathmesh Madhu, Siming Bayer, Andreas Maier
{"title":"Attention-guided erasing for enhanced transfer learning in breast abnormality classification.","authors":"Adarsh Bhandary Panambur, Sheethal Bhat, Hui Yu, Prathmesh Madhu, Siming Bayer, Andreas Maier","doi":"10.1007/s11548-024-03317-6","DOIUrl":"https://doi.org/10.1007/s11548-024-03317-6","url":null,"abstract":"<p><strong>Purpose: </strong>Breast cancer remains one of the most prevalent cancers globally, necessitating effective early screening and diagnosis. This study investigates the effectiveness and generalizability of our recently proposed data augmentation technique, attention-guided erasing (AGE), across various transfer learning classification tasks for breast abnormality classification in mammography.</p><p><strong>Methods: </strong>AGE utilizes attention head visualizations from DINO self-supervised pretraining to weakly localize regions of interest (ROI) in images. These localizations are then used to stochastically erase non-essential background information from training images during transfer learning. Our research evaluates AGE across two image-level and three patch-level classification tasks. The image-level tasks involve breast density categorization in digital mammography (DM) and malignancy classification in contrast-enhanced mammography (CEM). Patch-level tasks include classifying calcifications and masses in scanned film mammography (SFM), as well as malignancy classification of ROIs in CEM.</p><p><strong>Results: </strong>AGE significantly boosts classification performance with statistically significant improvements in mean F1-scores across four tasks compared to baselines. Specifically, for image-level classification of breast density in DM and malignancy in CEM, we achieve gains of 2% and 1.5%, respectively. Additionally, for patch-level classification of calcifications in SFM and CEM ROIs, gains of 0.4% and 0.6% are observed, respectively. However, marginal improvement is noted in the mass classification task, indicating the necessity for further optimization in tasks where critical features may be obscured by erasing techniques.</p><p><strong>Conclusion: </strong>Our findings underscore the potential of AGE, a dataset- and task-specific augmentation strategy powered by self-supervised learning, to enhance the downstream classification performance of DL models, particularly involving ViTs, in medical imaging.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
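A hedged sketch of the augmentation idea: threshold an attention map to keep a weakly localized ROI and stochastically erase everything else during transfer learning. The quantile threshold, erase probability, and fill value below are assumptions; the paper's exact DINO-attention post-processing is not given in the abstract.

```python
import numpy as np

def attention_guided_erase(image, attn, keep_quantile=0.6, p=0.5,
                           fill=0.0, rng=None):
    """With probability p, set pixels whose attention falls below the
    given quantile to `fill`, keeping the attention-derived ROI intact.

    image: (H, W, C) float image; attn: (H, W) map, higher = more salient
    """
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return image  # augmentation is applied only stochastically
    out = image.copy()
    out[attn < np.quantile(attn, keep_quantile)] = fill
    return out

# Toy example: a Gaussian blob of "attention" in the image centre.
H = W = 32
ys, xs = np.mgrid[0:H, 0:W]
attn = np.exp(-((ys - 16) ** 2 + (xs - 16) ** 2) / 50.0)
img = np.random.default_rng(3).random((H, W, 1))
aug = attention_guided_erase(img, attn, rng=np.random.default_rng(4))
print(aug.shape, float((aug == 0).mean()))  # fraction of erased pixels
```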
Shape-matching-based fracture reduction aid concept exemplified on the proximal humerus-a pilot study.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-01-14 DOI: 10.1007/s11548-024-03318-5
Karen Mys, Luke Visscher, Sara Lindenmann, Torsten Pastor, Paolo Antonacci, Matthias Knobe, Martin Jaeger, Simon Lambert, Peter Varga
{"title":"Shape-matching-based fracture reduction aid concept exemplified on the proximal humerus-a pilot study.","authors":"Karen Mys, Luke Visscher, Sara Lindenmann, Torsten Pastor, Paolo Antonacci, Matthias Knobe, Martin Jaeger, Simon Lambert, Peter Varga","doi":"10.1007/s11548-024-03318-5","DOIUrl":"https://doi.org/10.1007/s11548-024-03318-5","url":null,"abstract":"<p><strong>Purpose: </strong>Optimizing fracture reduction quality is key to achieve successful osteosynthesis, especially for epimetaphyseal regions such as the proximal humerus (PH), but can be challenging, partly due to the lack of a clear endpoint. We aimed to develop the prototype for a novel intraoperative C-arm-based aid to facilitate true anatomical reduction of fractures of the PH.</p><p><strong>Methods: </strong>The proposed method designates the reduced endpoint position of fragments by superimposing the outer boundary of the premorbid bone shape on intraoperative C-arm images, taking the mirrored intact contralateral PH from the preoperative CT scan as a surrogate. The accuracy of the algorithm was tested on 60 synthetic C-arm images created from the preoperative CT images of 20 complex PH fracture cases (Dataset A) and on 12 real C-arm images of a prefractured human anatomical specimen (Dataset B). The predicted outer boundary shape was compared with the known exact solution by (1) a calculated matching error and (2) two experienced shoulder trauma surgeons.</p><p><strong>Results: </strong>A prediction accuracy of 88% (with 73% 'good') was achieved according to the calculation method and an 87% accuracy (68% 'good') by surgeon assessment in Dataset A. Accuracy was 100% by both assessments for Dataset B.</p><p><strong>Conclusion: </strong>By seamlessly integrating into the standard perioperative workflow and imaging, the intuitive shape-matching-based aid, once developed as a medical device, has the potential to optimize the accuracy of the reduction of PH fractures while reducing the number of X-rays and surgery time. Further studies are required to demonstrate the applicability and efficacy of this method in optimizing fracture reduction quality.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
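The abstract does not define the "calculated matching error" between the predicted outer boundary and the known exact solution; a symmetric mean nearest-neighbour distance between the two contours is one plausible stand-in, sketched below on toy 2D point sets.

```python
import numpy as np

def contour_matching_error(pred_pts, true_pts):
    """Symmetric mean nearest-neighbour distance between two contours,
    each given as an (N, 2) array of 2D points (e.g., pixels)."""
    d = np.linalg.norm(pred_pts[:, None, :] - true_pts[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy contours: a circular boundary vs. a slightly shifted prediction.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
true_c = np.column_stack([100 + 40 * np.cos(t), 100 + 40 * np.sin(t)])
pred_c = true_c + np.array([2.0, -1.0])  # small reduction misplacement
print(f"matching error: {contour_matching_error(pred_c, true_c):.2f} px")
```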
A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.
IF 2.3, CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-01-12 DOI: 10.1007/s11548-024-03306-9
Ketai Chen, D S V Bandara, Jumpei Arata
{"title":"A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.","authors":"Ketai Chen, D S V Bandara, Jumpei Arata","doi":"10.1007/s11548-024-03306-9","DOIUrl":"https://doi.org/10.1007/s11548-024-03306-9","url":null,"abstract":"<p><strong>Purpose: </strong>This paper presents a deep learning approach to recognize and predict surgical activity in robot-assisted minimally invasive surgery (RAMIS). Our primary objective is to deploy the developed model for implementing a real-time surgical risk monitoring system within the realm of RAMIS.</p><p><strong>Methods: </strong>We propose a modified Transformer model with the architecture comprising no positional encoding, 5 fully connected layers, 1 encoder, and 3 decoders. This model is specifically designed to address 3 primary tasks in surgical robotics: gesture recognition, prediction, and end-effector trajectory prediction. Notably, it operates solely on kinematic data obtained from the joints of robotic arm.</p><p><strong>Results: </strong>The model's performance was evaluated on JHU-ISI Gesture and Skill Assessment Working Set dataset, achieving highest accuracy of 94.4% for gesture recognition, 84.82% for gesture prediction, and significantly low distance error of 1.34 mm with a prediction of 1 s in advance. Notably, the computational time per iteration was minimal recorded at only 4.2 ms.</p><p><strong>Conclusion: </strong>The results demonstrated the excellence of our proposed model compared to previous studies highlighting its potential for integration in real-time systems. We firmly believe that our model could significantly elevate realms of surgical activity recognition and prediction within RAS and make a substantial and meaningful contribution to the healthcare sector.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142973200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
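As a rough stand-in for the described model, the PyTorch sketch below maps a window of joint kinematics to a gesture class and a 1-s-ahead end-effector position with, as in the paper, no positional encoding. It is deliberately simplified (encoder-only with two linear heads, whereas the paper uses one encoder and three decoders), and the input dimension and gesture count are placeholders rather than JIGSAWS specifics.

```python
import torch
import torch.nn as nn

class KinematicTransformer(nn.Module):
    """Sketch: gesture recognition plus trajectory prediction from a
    window of robot-arm joint kinematics (no positional encoding)."""

    def __init__(self, in_dim=19, d_model=64, n_gestures=15):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.gesture_head = nn.Linear(d_model, n_gestures)
        self.traj_head = nn.Linear(d_model, 3)  # xyz position, 1 s ahead

    def forward(self, x):                       # x: (batch, time, in_dim)
        h = self.encoder(self.embed(x)).mean(dim=1)  # pool over time
        return self.gesture_head(h), self.traj_head(h)

model = KinematicTransformer()
window = torch.randn(2, 50, 19)                 # 2 windows of 50 steps
logits, xyz = model(window)
print(logits.shape, xyz.shape)                  # (2, 15) and (2, 3)
```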