Quantitative in-vitro assessment of a novel robot-assisted system for cochlear implant electrode insertion.
Philipp Aebischer, Lukas Anschuetz, Marco Caversaccio, Georgios Mantokoudis, Stefan Weder
DOI: 10.1007/s11548-024-03276-y. International Journal of Computer Assisted Radiology and Surgery, pages 323-332, published 2025-02-01. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11807918/pdf/

Purpose: As an increasing number of cochlear implant candidates exhibit residual inner ear function, hearing preservation strategies during implant insertion are gaining importance. Manual implantation is known to induce traumatic force and pressure peaks. In this study, we use a validated in-vitro model to comprehensively evaluate a novel surgical tool that addresses these challenges through motorized movement of a forceps.

Methods: Using lateral wall electrodes, we examined two subgroups of insertions: 30 insertions were performed manually by experienced surgeons, and another 30 insertions were conducted with a robot-assisted system under the same surgeons' supervision. We utilized a realistic, validated model of the temporal bone. This model accurately reproduces intracochlear frictional conditions and allows for the synchronous recording of forces on intracochlear structures, intracochlear pressure, and the position and deformation of the electrode array within the scala tympani.

Results: We identified a significant reduction in force variation during robot-assisted insertions compared to the conventional procedure, with average values of 12 mN/s and 32 mN/s, respectively. Robotic assistance was also associated with a significant reduction of strong pressure peaks and a 17 dB reduction in intracochlear pressure levels. Furthermore, our study highlights that the release of the insertion tool represents a critical phase requiring surgical training.

Conclusion: Robotic assistance demonstrated more consistent insertion speeds compared to manual techniques. Its use can significantly reduce factors associated with intracochlear trauma, highlighting its potential for improved hearing preservation. Finally, the system does not mitigate the impact of subsequent surgical steps like electrode cable routing and cochlear access sealing, pointing to areas in need of further research.

A virtual patient authoring tool for transcatheter aortic valve replacement.
Seyedsina Razavizadeh, Markus Kofler, Matthias Kunz, Joerg Kempfert, Ruediger Braun-Dullaeus, Janine Weidling, Bernhard Preim, Christian Hansen
DOI: 10.1007/s11548-024-03293-x. International Journal of Computer Assisted Radiology and Surgery, pages 379-389, published 2025-02-01. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11807921/pdf/

Purpose: Computer-based medical training scenarios derived from patient records often lack variability, modifiability, and availability. Furthermore, generating image datasets and creating scenarios is resource-intensive. Authoring tools for rapid, dataset-independent creation of virtual patients (VPs) are therefore a pressing need.

Methods: An authoring tool and a virtual catheterization laboratory environment (vCathLab) were developed. The tool allows customised VP generation through a real-time morphable heart model and EuroSCORE parameters. The generated VP can be examined inside the vCathLab using a fluoroscopy and a monitoring device, both on desktop and in immersive virtual reality. Seven board-certified experts evaluated the proposed method from three aspects: the System Usability Scale, qualitative feedback, and its performance in VR.

Results: All participants agreed that this method could provide the necessary information and is anatomically correct within an educational context. Its modifiability, variability, and simplicity were well recognised. The prototype achieved an excellent usability score and considerable performance results.

Conclusion: We present a highly variable VP authoring tool that enhances variability in medical training scenarios. Although this work does not aim to explore didactic aspects, the potential of using this approach in an educational context has been confirmed in our study. Accordingly, these aspects can benefit from a thorough investigation in the future. In addition, our tool can be improved to provide more realistic parameter ranges for procedure-specific cases.

DenseSeg: joint learning for semantic segmentation and landmark detection using dense image-to-shape representation.
Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich
DOI: 10.1007/s11548-024-03315-8. International Journal of Computer Assisted Radiology and Surgery, published 2025-01-23.

Purpose: Semantic segmentation and landmark detection are fundamental tasks of medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new state of the art for segmentation, it falls short in landmark detection, a strength of shape-based approaches.

Methods: In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation by employing a fully convolutional architecture. Our method intuitively allows the extraction of arbitrary landmarks due to its representation of anatomical correspondences. We benchmark our method against the state of the art for semantic segmentation (nnUNet), a shape-based approach employing geometric deep learning, and a convolutional neural network-based method for landmark detection.

Results: We evaluate our method on two medical datasets: one common benchmark featuring the lungs, heart, and clavicle from thorax X-rays, and another with 17 different bones in the paediatric wrist. While our method is on par with the landmark detection baseline in the thorax setting (error of 2.6 ± 0.9 mm vs. 2.7 ± 0.9 mm), it substantially surpasses it in the more complex wrist setting (1.1 ± 0.6 mm vs. 1.9 ± 0.5 mm).

Conclusion: We demonstrate that a dense geometric shape representation is beneficial for challenging landmark detection tasks and outperforms the previous state of the art using heatmap regression. Moreover, it does not require explicit training on the landmarks themselves, allowing new landmarks to be added without retraining.

{"title":"Volume and quality of the gluteal muscles are associated with early physical function after total hip arthroplasty.","authors":"Makoto Iwasa, Keisuke Uemura, Mazen Soufi, Yoshito Otake, Tomofumi Kinoshita, Tatsuhiko Kutsuna, Kazuma Takashima, Hidetoshi Hamada, Yoshinobu Sato, Nobuhiko Sugano, Seiji Okada, Masaki Takao","doi":"10.1007/s11548-025-03321-4","DOIUrl":"https://doi.org/10.1007/s11548-025-03321-4","url":null,"abstract":"<p><strong>Purpose: </strong>Identifying muscles linked to postoperative physical function can guide protocols to enhance early recovery following total hip arthroplasty (THA). This study aimed to evaluate the association of preoperative pelvic and thigh muscle volume and quality with early physical function after THA in patients with unilateral hip osteoarthritis (HOA).</p><p><strong>Methods: </strong>Preoperative Computed tomography (CT) images of 61 patients (eight males and 53 females) with HOA were analyzed. Six muscle groups were segmented from CT images, and muscle volume and quality were calculated on the healthy and affected sides. Muscle quality was quantified using the mean CT values (Hounsfield units [HU]). Early postoperative physical function was evaluated using the Timed Up & Go test (TUG) at three weeks after THA. The effect of preoperative muscle volume and quality of both sides on early postoperative physical function was assessed.</p><p><strong>Results: </strong>On the healthy and affected sides, mean muscle mass was 9.7 cm<sup>3</sup>/kg and 8.1 cm<sup>3</sup>/kg, and mean muscle HU values were 46.0 HU and 39.1 HU, respectively. Significant differences in muscle volume and quality were observed between the affected and healthy sides. On analyzing the function of various muscle groups, the TUG score showed a significant association with the gluteus maximum volume and the gluteus medius/minimus quality on the affected side.</p><p><strong>Conclusion: </strong>Patients with HOA showed significant muscle atrophy and fatty degeneration in the affected pelvic and thigh regions. The gluteus maximum volume and gluteus medius/minimus quality were associated with early postoperative physical function. Preoperative rehabilitation targeting the gluteal muscles on the affected side could potentially enhance recovery of physical function in the early postoperative period.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perfusion estimation from dynamic non-contrast computed tomography using self-supervised learning and a physics-inspired U-net transformer architecture.
Yi-Kuan Liu, Jorge Cisneros, Girish Nair, Craig Stevens, Richard Castillo, Yevgeniy Vinogradskiy, Edward Castillo
DOI: 10.1007/s11548-025-03323-2. International Journal of Computer Assisted Radiology and Surgery, published 2025-01-20.

Purpose: Pulmonary perfusion imaging is a key lung health indicator with clinical utility as a diagnostic and treatment planning tool. However, current nuclear medicine modalities face challenges such as low spatial resolution and long acquisition times, which limit clinical utility to non-emergency settings and often place an extra financial burden on the patient. This study introduces a novel deep learning approach to predict perfusion imaging from non-contrast inhale and exhale computed tomography scans (IE-CT).

Methods: We developed a U-Net Transformer architecture modified for Siamese IE-CT inputs, integrating insights from physical models and utilizing a self-supervised learning strategy tailored for lung function prediction. We aggregated 523 IE-CT images from nine different 4DCT imaging datasets for self-supervised training, aiming to learn a low-dimensional IE-CT feature space by reconstructing image volumes from random data augmentations. Supervised training for perfusion prediction used this feature space and transfer learning on a cohort of 44 patients who had both IE-CT and single-photon emission CT (SPECT/CT) perfusion scans.

Results: Testing with random bootstrapping, we estimated the mean and standard deviation of the spatial Spearman correlation between our predictions and the ground truth (SPECT perfusion) to be 0.742 ± 0.037, with a mean median correlation of 0.792 ± 0.036. These results represent a new state-of-the-art accuracy for predicting perfusion imaging from non-contrast CT.

Conclusion: Our approach combines low-dimensional feature representations of both inhale and exhale images in a deep learning model, aligning with previous physical modeling methods for characterizing perfusion from IE-CT. This likely contributes to the high spatial correlation with the ground truth. With further development, our method could provide faster and more accurate lung function imaging, potentially expanding its clinical applications beyond what is currently possible with nuclear medicine.

{"title":"Attention-guided erasing for enhanced transfer learning in breast abnormality classification.","authors":"Adarsh Bhandary Panambur, Sheethal Bhat, Hui Yu, Prathmesh Madhu, Siming Bayer, Andreas Maier","doi":"10.1007/s11548-024-03317-6","DOIUrl":"https://doi.org/10.1007/s11548-024-03317-6","url":null,"abstract":"<p><strong>Purpose: </strong>Breast cancer remains one of the most prevalent cancers globally, necessitating effective early screening and diagnosis. This study investigates the effectiveness and generalizability of our recently proposed data augmentation technique, attention-guided erasing (AGE), across various transfer learning classification tasks for breast abnormality classification in mammography.</p><p><strong>Methods: </strong>AGE utilizes attention head visualizations from DINO self-supervised pretraining to weakly localize regions of interest (ROI) in images. These localizations are then used to stochastically erase non-essential background information from training images during transfer learning. Our research evaluates AGE across two image-level and three patch-level classification tasks. The image-level tasks involve breast density categorization in digital mammography (DM) and malignancy classification in contrast-enhanced mammography (CEM). Patch-level tasks include classifying calcifications and masses in scanned film mammography (SFM), as well as malignancy classification of ROIs in CEM.</p><p><strong>Results: </strong>AGE significantly boosts classification performance with statistically significant improvements in mean F1-scores across four tasks compared to baselines. Specifically, for image-level classification of breast density in DM and malignancy in CEM, we achieve gains of 2% and 1.5%, respectively. Additionally, for patch-level classification of calcifications in SFM and CEM ROIs, gains of 0.4% and 0.6% are observed, respectively. However, marginal improvement is noted in the mass classification task, indicating the necessity for further optimization in tasks where critical features may be obscured by erasing techniques.</p><p><strong>Conclusion: </strong>Our findings underscore the potential of AGE, a dataset- and task-specific augmentation strategy powered by self-supervised learning, to enhance the downstream classification performance of DL models, particularly involving ViTs, in medical imaging.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shape-matching-based fracture reduction aid concept exemplified on the proximal humerus-a pilot study.
Karen Mys, Luke Visscher, Sara Lindenmann, Torsten Pastor, Paolo Antonacci, Matthias Knobe, Martin Jaeger, Simon Lambert, Peter Varga
DOI: 10.1007/s11548-024-03318-5. International Journal of Computer Assisted Radiology and Surgery, published 2025-01-14.

Purpose: Optimizing fracture reduction quality is key to achieving successful osteosynthesis, especially for epimetaphyseal regions such as the proximal humerus (PH), but can be challenging, partly due to the lack of a clear endpoint. We aimed to develop a prototype for a novel intraoperative C-arm-based aid to facilitate true anatomical reduction of fractures of the PH.

Methods: The proposed method designates the reduced endpoint position of fragments by superimposing the outer boundary of the premorbid bone shape on intraoperative C-arm images, taking the mirrored intact contralateral PH from the preoperative CT scan as a surrogate. The accuracy of the algorithm was tested on 60 synthetic C-arm images created from the preoperative CT images of 20 complex PH fracture cases (Dataset A) and on 12 real C-arm images of a prefractured human anatomical specimen (Dataset B). The predicted outer boundary shape was compared with the known exact solution by (1) a calculated matching error and (2) two experienced shoulder trauma surgeons.

Results: A prediction accuracy of 88% (with 73% 'good') was achieved according to the calculation method and an 87% accuracy (68% 'good') by surgeon assessment in Dataset A. Accuracy was 100% by both assessments for Dataset B.

Conclusion: By seamlessly integrating into the standard perioperative workflow and imaging, the intuitive shape-matching-based aid, once developed as a medical device, has the potential to optimize the accuracy of the reduction of PH fractures while reducing the number of X-rays and surgery time. Further studies are required to demonstrate the applicability and efficacy of this method in optimizing fracture reduction quality.

{"title":"A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.","authors":"Ketai Chen, D S V Bandara, Jumpei Arata","doi":"10.1007/s11548-024-03306-9","DOIUrl":"https://doi.org/10.1007/s11548-024-03306-9","url":null,"abstract":"<p><strong>Purpose: </strong>This paper presents a deep learning approach to recognize and predict surgical activity in robot-assisted minimally invasive surgery (RAMIS). Our primary objective is to deploy the developed model for implementing a real-time surgical risk monitoring system within the realm of RAMIS.</p><p><strong>Methods: </strong>We propose a modified Transformer model with the architecture comprising no positional encoding, 5 fully connected layers, 1 encoder, and 3 decoders. This model is specifically designed to address 3 primary tasks in surgical robotics: gesture recognition, prediction, and end-effector trajectory prediction. Notably, it operates solely on kinematic data obtained from the joints of robotic arm.</p><p><strong>Results: </strong>The model's performance was evaluated on JHU-ISI Gesture and Skill Assessment Working Set dataset, achieving highest accuracy of 94.4% for gesture recognition, 84.82% for gesture prediction, and significantly low distance error of 1.34 mm with a prediction of 1 s in advance. Notably, the computational time per iteration was minimal recorded at only 4.2 ms.</p><p><strong>Conclusion: </strong>The results demonstrated the excellence of our proposed model compared to previous studies highlighting its potential for integration in real-time systems. We firmly believe that our model could significantly elevate realms of surgical activity recognition and prediction within RAS and make a substantial and meaningful contribution to the healthcare sector.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142973200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acknowledgement to reviewers.","authors":"","doi":"10.1007/s11548-024-03320-x","DOIUrl":"https://doi.org/10.1007/s11548-024-03320-x","url":null,"abstract":"","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of active learning algorithms in classifying head computed tomography reports using bidirectional encoder representations from transformers.","authors":"Tomohiro Wataya, Azusa Miura, Takahisa Sakisuka, Masahiro Fujiwara, Hisashi Tanaka, Yu Hiraoka, Junya Sato, Miyuki Tomiyama, Daiki Nishigaki, Kosuke Kita, Yuki Suzuki, Shoji Kido, Noriyuki Tomiyama","doi":"10.1007/s11548-024-03316-7","DOIUrl":"https://doi.org/10.1007/s11548-024-03316-7","url":null,"abstract":"<p><strong>Purpose: </strong>Systems equipped with natural language (NLP) processing can reduce missed radiological findings by physicians, but the annotation costs are burden in the development. This study aimed to compare the effects of active learning (AL) algorithms in NLP for estimating the significance of head computed tomography (CT) reports using bidirectional encoder representations from transformers (BERT).</p><p><strong>Methods: </strong>A total of 3728 head CT reports annotated with five categories of importance were used and UTH-BERT was adopted as the pre-trained BERT model. We assumed that 64% (2385 reports) of the data were initially in the unlabeled data pool (UDP), while the labeled data set (LD) used to train the model was empty. Twenty-five reports were repeatedly selected from the UDP and added to the LD, based on seven metrices: random sampling (RS: control), four uncertainty sampling (US) methods (least confidence (LC), margin sampling (MS), ratio of confidence (RC), and entropy sampling (ES)), and two distance-based sampling (DS) methods (cosine distance (CD) and Euclidian distance (ED)). The transition of accuracy of the model was evaluated using the test dataset.</p><p><strong>Results: </strong>The accuracy of the models with US was significantly higher than RS when reports in LD were < 1800, whereas DS methods were significantly lower than RS. Among the US methods, MS and RC were even better than the others. With the US methods, the required labeled data decreased by 15.4-40.5%, and most efficient in RC. In addition, in the US methods, data for minor categories tended to be added to LD earlier than RS and DS.</p><p><strong>Conclusions: </strong>In the classification task for the importance of head CT reports, US methods, especially RC and MS can lead to the effective fine-tuning of BERT models and reduce the imbalance of categories. AL can contribute to other studies on larger datasets by providing effective annotation.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}