International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Computer-aided design and fabrication of nasal prostheses: a semi-automated algorithm using statistical shape modeling.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-06-06 DOI: 10.1007/s11548-024-03206-y
T Bannink, M de Ridder, S Bouman, M J A van Alphen, R L P van Veen, M W M van den Brekel, M B Karakullukçu
{"title":"Computer-aided design and fabrication of nasal prostheses: a semi-automated algorithm using statistical shape modeling.","authors":"T Bannink, M de Ridder, S Bouman, M J A van Alphen, R L P van Veen, M W M van den Brekel, M B Karakullukçu","doi":"10.1007/s11548-024-03206-y","DOIUrl":"10.1007/s11548-024-03206-y","url":null,"abstract":"<p><strong>Purpose: </strong>This research aimed to develop an innovative method for designing and fabricating nasal prostheses that reduces anaplastologist expertise dependency while maintaining quality and appearance, allowing patients to regain their normal facial appearance.</p><p><strong>Methods: </strong>The method involved statistical shape modeling using a morphable face model and 3D data acquired through optical scanning or CT. An automated design process generated patient-specific fits and appearances using regular prosthesis materials and 3D printing of molds. Manual input was required for specific case-related details.</p><p><strong>Results: </strong>The developed method met all predefined requirements, replacing analog impression-making and offering compatibility with various data acquisition methods. Prostheses created through this method exhibited equivalent aesthetics to conventionally fabricated ones while reducing the skill dependency typically associated with prosthetic design and fabrication.</p><p><strong>Conclusions: </strong>This method provides a promising approach for both temporary and definitive nasal prostheses, with the potential for remote prosthesis fabrication in areas lacking anaplastology care. While new skills are required for data acquisition and algorithm control, these technologies are increasingly accessible. Further clinical studies will help validate its effectiveness, and ongoing technological advancements may lead to even more advanced and skill-independent prosthesis fabrication methods in the future.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2279-2285"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
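A morphable-model completion like the one described can be pictured as projecting partially observed face geometry onto a PCA basis. The sketch below is a minimal illustration under assumed array shapes, mode count, and a ridge weight of our choosing; it is not the authors' algorithm.

```python
# Minimal sketch: fit a PCA-based statistical shape model to partially
# observed surface coordinates via ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_modes = 500, 10                     # toy vertex and mode counts

mean_shape = rng.normal(size=n_points * 3)             # flattened (x, y, z) mean
components = rng.normal(size=(n_points * 3, n_modes))  # PCA basis of face shapes

observed = rng.choice(n_points * 3, size=900, replace=False)  # known coordinates
target = mean_shape[observed] + rng.normal(scale=0.1, size=observed.size)

# Ridge regularization keeps coefficients near the mean shape, so the
# completed region stays within the span of plausible faces.
A = components[observed]
b = target - mean_shape[observed]
lam = 1.0
coeffs = np.linalg.solve(A.T @ A + lam * np.eye(n_modes), A.T @ b)

full_shape = mean_shape + components @ coeffs   # completed shape estimate
print("fitted coefficients:", np.round(coeffs, 3))
```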
Preliminary study of substantia nigra analysis by tensorial feature extraction.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-06-27 DOI: 10.1007/s11548-024-03175-2
Hayato Itoh, Masahiro Oda, Shinji Saiki, Koji Kamagata, Wataru Sako, Kei-Ichi Ishikawa, Nobutaka Hattori, Shigeki Aoki, Kensaku Mori
{"title":"Preliminary study of substantia nigra analysis by tensorial feature extraction.","authors":"Hayato Itoh, Masahiro Oda, Shinji Saiki, Koji Kamagata, Wataru Sako, Kei-Ichi Ishikawa, Nobutaka Hattori, Shigeki Aoki, Kensaku Mori","doi":"10.1007/s11548-024-03175-2","DOIUrl":"10.1007/s11548-024-03175-2","url":null,"abstract":"<p><strong>Purpose: </strong>Parkinson disease (PD) is a common progressive neurodegenerative disorder in our ageing society. Early-stage PD biomarkers are desired for timely clinical intervention and understanding of pathophysiology. Since one of the characteristics of PD is the progressive loss of dopaminergic neurons in the substantia nigra pars compacta, we propose a feature extraction method for analysing the differences in the substantia nigra between PD and non-PD patients.</p><p><strong>Method: </strong>We propose a feature-extraction method for volumetric images based on a rank-1 tensor decomposition. Furthermore, we apply a feature selection method that excludes common features between PD and non-PD. We collect neuromelanin images of 263 patients: 124 PD and 139 non-PD patients and divide them into training and testing datasets for experiments. We then experimentally evaluate the classification accuracy of the substantia nigra between PD and non-PD patients using the proposed feature extraction method and linear discriminant analysis.</p><p><strong>Results: </strong>The proposed method achieves a sensitivity of 0.72 and a specificity of 0.64 for our testing dataset of 66 non-PD and 42 PD patients. Furthermore, we visualise the important patterns in the substantia nigra by a linear combination of rank-1 tensors with selected features. The visualised patterns include the ventrolateral tier, where the severe loss of neurons can be observed in PD.</p><p><strong>Conclusions: </strong>We develop a new feature-extraction method for the analysis of the substantia nigra towards PD diagnosis. In the experiments, even though the classification accuracy with the proposed feature extraction method and linear discriminant analysis is lower than that of expert physicians, the results suggest the potential of tensorial feature extraction.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2133-2142"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141460645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
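The rank-1 tensor decomposition at the heart of this feature extraction can be sketched with an alternating (higher-order) power iteration over a toy volume. Array sizes and the iteration count below are illustrative assumptions; the paper's exact decomposition and feature-selection steps are not reproduced.

```python
# Minimal sketch: best rank-1 approximation of a 3D volume by alternating
# power iteration, yielding one scalar tensorial feature per volume.
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(16, 16, 8))     # toy substantia-nigra ROI volume

u = rng.normal(size=16)
v = rng.normal(size=16)
w = rng.normal(size=8)
for _ in range(50):                  # alternate updates of the rank-1 factors
    u = np.einsum('ijk,j,k->i', V, v, w); u /= np.linalg.norm(u)
    v = np.einsum('ijk,i,k->j', V, u, w); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', V, u, v); w /= np.linalg.norm(w)

# Projection of the volume onto the rank-1 tensor u x v x w.
sigma = np.einsum('ijk,i,j,k->', V, u, v, w)
print("rank-1 feature value:", round(float(sigma), 4))
```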
Aortic roadmapping during EVAR: a combined FEM-EM tracking feasibility study.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-06-02 DOI: 10.1007/s11548-024-03187-y
Monica Emendi, Geir A Tangen, Pierluigi Di Giovanni, Håvard Ulsaker, Reidar Brekken, Frode Manstad-Hulaas, Victorien Prot, Aline Bel-Brunon, Karen H Støverud
{"title":"Aortic roadmapping during EVAR: a combined FEM-EM tracking feasibility study.","authors":"Monica Emendi, Geir A Tangen, Pierluigi Di Giovanni, Håvard Ulsaker, Reidar Brekken, Frode Manstad-Hulaas, Victorien Prot, Aline Bel-Brunon, Karen H Støverud","doi":"10.1007/s11548-024-03187-y","DOIUrl":"10.1007/s11548-024-03187-y","url":null,"abstract":"<p><strong>Purpose: </strong>Currently, the intra-operative visualization of vessels during endovascular aneurysm repair (EVAR) relies on contrast-based imaging modalities. Moreover, traditional image fusion techniques lack a continuous and automatic update of the vessel configuration, which changes due to the insertion of stiff guidewires. The purpose of this work is to develop and evaluate a novel approach to improve image fusion, that takes into account the deformations, combining electromagnetic (EM) tracking technology and finite element modeling (FEM).</p><p><strong>Methods: </strong>To assess whether EM tracking can improve the prediction of the numerical simulations, a patient-specific model of abdominal aorta was segmented and manufactured. A database of simulations with different insertion angles was created. Then, an ad hoc sensorized tool with three embedded EM sensors was designed, enabling tracking of the sensors' positions during the insertion phase. Finally, the corresponding cone beam computed tomography (CBCT) images were acquired and processed to obtain the ground truth aortic deformations of the manufactured model.</p><p><strong>Results: </strong>Among the simulations in the database, the one minimizing the in silico versus in vitro discrepancy in terms of sensors' positions gave the most accurate aortic displacement results.</p><p><strong>Conclusions: </strong>The proposed approach suggests that the EM tracking technology could be used not only to follow the tool, but also to minimize the error in the predicted aortic roadmap, thus paving the way for a safer EVAR navigation.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2239-2247"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541383/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141186916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
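The results describe a database lookup: among precomputed FEM simulations, select the one whose predicted sensor positions best match the EM measurements. A minimal sketch of that selection step follows, with a random toy database standing in for real FEM output.

```python
# Minimal sketch: pick the precomputed simulation whose predicted EM-sensor
# positions minimize the discrepancy with the measured positions.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_sensors = 20, 3
# Each simulation predicts (x, y, z) positions for the 3 embedded sensors.
sim_sensor_pos = rng.normal(size=(n_sims, n_sensors, 3))
# Toy "measurement": simulation 7 plus noise.
measured = sim_sensor_pos[7] + rng.normal(scale=0.05, size=(n_sensors, 3))

# Mean Euclidean discrepancy per simulation; the argmin selects the roadmap.
errors = np.linalg.norm(sim_sensor_pos - measured, axis=2).mean(axis=1)
best = int(np.argmin(errors))
print(f"best-matching simulation: #{best}, mean error {errors[best]:.3f}")
```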
An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-02-27 DOI: 10.1007/s11548-024-03074-6
Hisako Tomita, Naoto Ienaga, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto
{"title":"An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos.","authors":"Hisako Tomita, Naoto Ienaga, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto","doi":"10.1007/s11548-024-03074-6","DOIUrl":"10.1007/s11548-024-03074-6","url":null,"abstract":"<p><strong>Purpose: </strong>Analysis of operative fields is expected to aid in estimating procedural workflow and evaluating surgeons' procedural skills by considering the temporal transitions during the progression of the surgery. This study aims to propose an automatic recognition system for the procedural workflow by employing machine learning techniques to identify and distinguish elements in the operative field, including body tissues such as fat, muscle, and dermis, along with surgical tools.</p><p><strong>Methods: </strong>We conducted annotations on approximately 908 first-person-view images of breast surgery to facilitate segmentation. The annotated images were used to train a pixel-level classifier based on Mask R-CNN. To assess the impact on procedural workflow recognition, we annotated an additional 43,007 images. The network, structured on the Transformer architecture, was then trained with surgical images incorporating masks for body tissues and surgical tools.</p><p><strong>Results: </strong>The instance segmentation of each body tissue in the segmentation phase provided insights into the trend of area transitions for each tissue. Simultaneously, the spatial features of the surgical tools were effectively captured. In regard to the accuracy of procedural workflow recognition, accounting for body tissues led to an average improvement of 3 % over the baseline. Furthermore, the inclusion of surgical tools yielded an additional increase in accuracy by 4 % compared to the baseline.</p><p><strong>Conclusion: </strong>In this study, we revealed the contribution of the temporal transition of the body tissues and surgical tools spatial features to recognize procedural workflow in first-person-view surgical videos. Body tissues, especially in open surgery, can be a crucial element. This study suggests that further improvements can be achieved by accurately identifying surgical tools specific to each procedural workflow step.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2195-2202"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541397/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139974449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
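The "trend of area transitions for each tissue" can be derived directly from per-frame segmentation masks. The sketch below uses toy label maps and class IDs of our own choosing; it only illustrates how tissue-area signals might be computed as temporal workflow features.

```python
# Minimal sketch: per-frame tissue-area fractions from segmentation label
# maps, giving a (frames x classes) temporal feature for workflow models.
import numpy as np

rng = np.random.default_rng(0)
FAT, MUSCLE, DERMIS = 1, 2, 3                      # assumed class ids
frames = rng.integers(0, 4, size=(100, 64, 64))    # 100 toy label maps

# Fraction of pixels per tissue class in each frame.
area = {name: (frames == cid).mean(axis=(1, 2))
        for name, cid in [("fat", FAT), ("muscle", MUSCLE), ("dermis", DERMIS)]}
features = np.stack(list(area.values()), axis=1)   # shape (100, 3)
print("per-frame tissue fractions:\n", features[:3])
```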
Background removal for debiasing computer-aided cytological diagnosis.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-06-25 DOI: 10.1007/s11548-024-03169-0
Keita Takeda, Tomoya Sakai, Eiji Mitate
{"title":"Background removal for debiasing computer-aided cytological diagnosis.","authors":"Keita Takeda, Tomoya Sakai, Eiji Mitate","doi":"10.1007/s11548-024-03169-0","DOIUrl":"10.1007/s11548-024-03169-0","url":null,"abstract":"<p><p>To address the background-bias problem in computer-aided cytology caused by microscopic slide deterioration, this article proposes a deep learning approach for cell segmentation and background removal without requiring cell annotation. A U-Net-based model was trained to separate cells from the background in an unsupervised manner by leveraging the redundancy of the background and the sparsity of cells in liquid-based cytology (LBC) images. The experimental results demonstrate that the U-Net-based model trained on a small set of cytology images can exclude background features and accurately segment cells. This capability is beneficial for debiasing in the detection and classification of the cells of interest in oral LBC. Slide deterioration can significantly affect deep learning-based cell classification. Our proposed method effectively removes background features at no cost of cell annotation, thereby enabling accurate cytological diagnosis through the deep learning of microscopic slide images.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2165-2174"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541310/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141452132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
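The redundancy-plus-sparsity idea can be illustrated with a tiny self-supervised objective: explain each image as a shared background plus a sparse cell mask. In the paper a U-Net produces the mask; in this sketch a free per-pixel variable stands in, and the loss form and weights are our assumptions, not the authors' training scheme.

```python
# Minimal sketch: unsupervised background/cell separation by fitting a
# per-pixel mask under a reconstruction loss plus a sparsity penalty.
import torch

torch.manual_seed(0)
imgs = torch.rand(8, 1, 32, 32)                      # toy LBC image batch
mask = torch.rand(8, 1, 32, 32, requires_grad=True)  # stand-in for U-Net output

opt = torch.optim.Adam([mask], lr=0.1)
background = imgs.mean(dim=0, keepdim=True)          # redundancy: shared background
for _ in range(100):
    opt.zero_grad()
    m = mask.sigmoid()
    # Explain each image as background where m ~ 0 and cells where m ~ 1.
    recon = (1 - m) * background + m * imgs
    loss = ((recon - imgs) ** 2).mean() + 0.01 * m.mean()  # fit + sparsity
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```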
Hybrid representation-enhanced sampling for Bayesian active learning in musculoskeletal segmentation of lower extremities.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-01-29 DOI: 10.1007/s11548-024-03065-7
Ganping Li, Yoshito Otake, Mazen Soufi, Masashi Taniguchi, Masahide Yagi, Noriaki Ichihashi, Keisuke Uemura, Masaki Takao, Nobuhiko Sugano, Yoshinobu Sato
{"title":"Hybrid representation-enhanced sampling for Bayesian active learning in musculoskeletal segmentation of lower extremities.","authors":"Ganping Li, Yoshito Otake, Mazen Soufi, Masashi Taniguchi, Masahide Yagi, Noriaki Ichihashi, Keisuke Uemura, Masaki Takao, Nobuhiko Sugano, Yoshinobu Sato","doi":"10.1007/s11548-024-03065-7","DOIUrl":"10.1007/s11548-024-03065-7","url":null,"abstract":"<p><strong>Purpose: </strong>Manual annotations for training deep learning models in auto-segmentation are time-intensive. This study introduces a hybrid representation-enhanced sampling strategy that integrates both density and diversity criteria within an uncertainty-based Bayesian active learning (BAL) framework to reduce annotation efforts by selecting the most informative training samples.</p><p><strong>Methods: </strong>The experiments are performed on two lower extremity datasets of MRI and CT images, focusing on the segmentation of the femur, pelvis, sacrum, quadriceps femoris, hamstrings, adductors, sartorius, and iliopsoas, utilizing a U-net-based BAL framework. Our method selects uncertain samples with high density and diversity for manual revision, optimizing for maximal similarity to unlabeled instances and minimal similarity to existing training data. We assess the accuracy and efficiency using dice and a proposed metric called reduced annotation cost (RAC), respectively. We further evaluate the impact of various acquisition rules on BAL performance and design an ablation study for effectiveness estimation.</p><p><strong>Results: </strong>In MRI and CT datasets, our method was superior or comparable to existing ones, achieving a 0.8% dice and 1.0% RAC increase in CT (statistically significant), and a 0.8% dice and 1.1% RAC increase in MRI (not statistically significant) in volume-wise acquisition. Our ablation study indicates that combining density and diversity criteria enhances the efficiency of BAL in musculoskeletal segmentation compared to using either criterion alone.</p><p><strong>Conclusion: </strong>Our sampling method is proven efficient in reducing annotation costs in image segmentation tasks. The combination of the proposed method and our BAL framework provides a semi-automatic way for efficient annotation of medical image datasets.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2177-2186"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139571189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
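A hybrid acquisition rule of this kind can be sketched by combining per-sample uncertainty with density (similarity to the unlabeled pool) and diversity (dissimilarity to the labeled pool). The embeddings, weights, and scoring formula below are illustrative assumptions, not the paper's exact rule.

```python
# Minimal sketch: hybrid density + diversity sampling on top of an
# uncertainty score, selecting a batch of samples for manual revision.
import numpy as np

rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(200, 16))   # feature embeddings of unlabeled pool
labeled = rng.normal(size=(40, 16))      # embeddings of current training data
uncertainty = rng.random(200)            # e.g., MC-dropout variance per sample

def cos_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

density = cos_sim(unlabeled, unlabeled).mean(axis=1)     # high = representative
diversity = 1 - cos_sim(unlabeled, labeled).max(axis=1)  # high = novel
score = uncertainty * (0.5 * density + 0.5 * diversity)  # assumed weighting
query = np.argsort(score)[-10:]          # batch sent for manual annotation
print("selected indices:", query)
```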
Domain transformation using semi-supervised CycleGAN for improving performance of classifying thyroid tissue images.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-01-18 DOI: 10.1007/s11548-024-03061-x
Yoshihito Ichiuji, Shingo Mabu, Satomi Hatta, Kunihiro Inai, Shohei Higuchi, Shoji Kido
{"title":"Domain transformation using semi-supervised CycleGAN for improving performance of classifying thyroid tissue images.","authors":"Yoshihito Ichiuji, Shingo Mabu, Satomi Hatta, Kunihiro Inai, Shohei Higuchi, Shoji Kido","doi":"10.1007/s11548-024-03061-x","DOIUrl":"10.1007/s11548-024-03061-x","url":null,"abstract":"<p><strong>Purpose: </strong>A large number of research has been conducted on the classification of medical images using deep learning. The thyroid tissue images can be also classified by cancer types. Deep learning requires a large amount of data, but every medical institution cannot collect sufficient number of data for deep learning. In that case, we can consider a case where a classifier trained at a certain medical institution that has a sufficient number of data is reused at other institutions. However, when using data from multiple institutions, it is necessary to unify the feature distribution because the feature of the data differs due to differences in data acquisition conditions.</p><p><strong>Methods: </strong>To unify the feature distribution, the data from Institution T are transformed to have the closer distribution to that from Institution S by applying a domain transformation using semi-supervised CycleGAN. The proposed method enhances CycleGAN considering the feature distribution of classes for making appropriate domain transformation for classification. In addition, to address the problem of imbalanced data with different numbers of data for each cancer type, several methods dealing with imbalanced data are applied to semi-supervised CycleGAN.</p><p><strong>Results: </strong>The experimental results showed that the classification performance was enhanced when the dataset from Institution S was used as training data and the testing dataset from Institution T was classified after applying domain transformation. In addition, focal loss contributed to improving the mean F1 score the best as a method that addresses the class imbalance.</p><p><strong>Conclusion: </strong>The proposed method achieved the domain transformation of thyroid tissue images between two domains, where it retained the important features related to the classes across domains and showed the best F1 score with significant differences compared with other methods. In addition, the proposed method was further enhanced by addressing the class imbalance of the dataset.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2153-2163"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139492884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
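Focal loss, which the results single out for handling class imbalance, has a standard formulation that is easy to show: cross-entropy down-weighted for easy examples. The sketch below uses the usual gamma/alpha parameterization with illustrative values.

```python
# Minimal sketch: focal loss for class-imbalanced classification.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # Per-sample cross-entropy, down-weighted where the model is already
    # confident (high true-class probability), emphasizing hard examples.
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                 # probability of the true class
    return (alpha * (1 - p_t) ** gamma * ce).mean()

torch.manual_seed(0)
logits = torch.randn(8, 3)               # 8 samples, 3 cancer types (toy)
targets = torch.randint(0, 3, (8,))
print("focal loss:", float(focal_loss(logits, targets)))
```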
Deep learning-based automatic pipeline for 3D needle localization on intra-procedural 3D MRI.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-03-23 DOI: 10.1007/s11548-024-03077-3
Wenqi Zhou, Xinzhou Li, Fatemeh Zabihollahy, David S Lu, Holden H Wu
{"title":"Deep learning-based automatic pipeline for 3D needle localization on intra-procedural 3D MRI.","authors":"Wenqi Zhou, Xinzhou Li, Fatemeh Zabihollahy, David S Lu, Holden H Wu","doi":"10.1007/s11548-024-03077-3","DOIUrl":"10.1007/s11548-024-03077-3","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate and rapid needle localization on 3D magnetic resonance imaging (MRI) is critical for MRI-guided percutaneous interventions. The current workflow requires manual needle localization on 3D MRI, which is time-consuming and cumbersome. Automatic methods using 2D deep learning networks for needle segmentation require manual image plane localization, while 3D networks are challenged by the need for sufficient training datasets. This work aimed to develop an automatic deep learning-based pipeline for accurate and rapid 3D needle localization on in vivo intra-procedural 3D MRI using a limited training dataset.</p><p><strong>Methods: </strong>The proposed automatic pipeline adopted Shifted Window (Swin) Transformers and employed a coarse-to-fine segmentation strategy: (1) initial 3D needle feature segmentation with 3D Swin UNEt TRansfomer (UNETR); (2) generation of a 2D reformatted image containing the needle feature; (3) fine 2D needle feature segmentation with 2D Swin Transformer and calculation of 3D needle tip position and axis orientation. Pre-training and data augmentation were performed to improve network training. The pipeline was evaluated via cross-validation with 49 in vivo intra-procedural 3D MR images from preclinical pig experiments. The needle tip and axis localization errors were compared with human intra-reader variation using the Wilcoxon signed rank test, with p < 0.05 considered significant.</p><p><strong>Results: </strong>The average end-to-end computational time for the pipeline was 6 s per 3D volume. The median Dice scores of the 3D Swin UNETR and 2D Swin Transformer in the pipeline were 0.80 and 0.93, respectively. The median 3D needle tip and axis localization errors were 1.48 mm (1.09 pixels) and 0.98°, respectively. Needle tip localization errors were significantly smaller than human intra-reader variation (median 1.70 mm; p < 0.01).</p><p><strong>Conclusion: </strong>The proposed automatic pipeline achieved rapid pixel-level 3D needle localization on intra-procedural 3D MRI without requiring a large 3D training dataset and has the potential to assist MRI-guided percutaneous interventions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2227-2237"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140195078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
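Step (3), deriving the tip position and axis orientation from a segmentation, can be done by an SVD/PCA over the voxel coordinates of the needle mask. The sketch below uses a synthetic straight-needle mask and assumes the tip is the extreme point along the principal axis; the paper's exact calculation may differ.

```python
# Minimal sketch: needle axis and tip from a binary 3D segmentation mask.
import numpy as np

rng = np.random.default_rng(0)
mask = np.zeros((64, 64, 64), dtype=bool)
zz = np.arange(10, 50)
mask[zz, zz, 32] = True                     # toy straight "needle" segment

coords = np.argwhere(mask).astype(float)    # (N, 3) voxel coordinates
center = coords.mean(axis=0)
# Principal direction of the voxel cloud approximates the needle axis.
_, _, vt = np.linalg.svd(coords - center, full_matrices=False)
axis = vt[0]

proj = (coords - center) @ axis             # voxel positions along the axis
tip = coords[np.argmax(proj)]               # extreme point = assumed tip
print("axis:", np.round(axis, 3), "tip voxel:", tip)
```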
High-quality semi-supervised anomaly detection with generative adversarial networks.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2023-11-09 DOI: 10.1007/s11548-023-03031-9
Yuki Sato, Junya Sato, Noriyuki Tomiyama, Shoji Kido
{"title":"High-quality semi-supervised anomaly detection with generative adversarial networks.","authors":"Yuki Sato, Junya Sato, Noriyuki Tomiyama, Shoji Kido","doi":"10.1007/s11548-023-03031-9","DOIUrl":"10.1007/s11548-023-03031-9","url":null,"abstract":"<p><strong>Purpose: </strong>The visualization of an anomaly area is easier in anomaly detection methods that use generative models rather than classification models. However, achieving both anomaly detection accuracy and a clear visualization of anomalous areas is challenging. This study aimed to establish a method that combines both detection accuracy and clear visualization of anomalous areas using a generative adversarial network (GAN).</p><p><strong>Methods: </strong>In this study, StyleGAN2 with adaptive discriminator augmentation (StyleGAN2-ADA), which can generate high-resolution and high-quality images with limited number of datasets, was used as the image generation model, and pixel-to-style-to-pixel (pSp) encoder was used to convert images into intermediate latent variables. We combined existing methods for training and proposed a method for calculating anomaly scores using intermediate latent variables. The proposed method, which combines these two methods, is called high-quality anomaly GAN (HQ-AnoGAN).</p><p><strong>Results: </strong>The experimental results obtained using three datasets demonstrated that HQ-AnoGAN has equal or better detection accuracy than the existing methods. The results of the visualization of abnormal areas using the generated images showed that HQ-AnoGAN could generate more natural images than the existing methods and was qualitatively more accurate in the visualization of abnormal areas.</p><p><strong>Conclusion: </strong>In this study, HQ-AnoGAN comprising StyleGAN2-ADA and pSp encoder was proposed with an optimal anomaly score calculation method. The experimental results show that HQ-AnoGAN can achieve both high abnormality detection accuracy and clear visualization of abnormal areas; thus, HQ-AnoGAN demonstrates significant potential for application in medical imaging diagnosis cases where an explanation of diagnosis is required.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2121-2131"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71523347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
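A latent-based anomaly score of this style combines an image-space reconstruction residual with a latent-space distance after GAN inversion. In the sketch below the encoder and generator are toy linear maps, not StyleGAN2-ADA/pSp, and the score weights are our assumptions.

```python
# Minimal sketch: anomaly score = image reconstruction residual plus
# distance of the inverted latent to latents of normal training data.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(32, 256)) * 0.05        # toy encoder: image -> latent
G = rng.normal(size=(256, 32)) * 0.05        # toy generator: latent -> image
train_latents = rng.normal(size=(100, 32))   # latents of normal training images

def anomaly_score(x, w_img=1.0, w_lat=0.1):
    z = E @ x                                # invert the image to latent space
    recon = G @ z                            # regenerate a "normal-looking" image
    img_err = np.linalg.norm(x - recon)      # residual highlights anomalies
    lat_err = np.linalg.norm(train_latents - z, axis=1).min()
    return w_img * img_err + w_lat * lat_err

x = rng.normal(size=256)                     # toy query image (flattened)
print("anomaly score:", round(anomaly_score(x), 3))
```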
Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery.
IF 2.3 | CAS Tier 3 | Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date: 2024-11-01 Epub Date: 2024-05-18 DOI: 10.1007/s11548-024-03166-3
Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller-Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, Nicolas Padoy
{"title":"Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery.","authors":"Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller-Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, Nicolas Padoy","doi":"10.1007/s11548-024-03166-3","DOIUrl":"10.1007/s11548-024-03166-3","url":null,"abstract":"<p><strong>Purpose: </strong>Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers.</p><p><strong>Methods: </strong>In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers, i.e., the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability and benchmark different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) Training and evaluation on BernBypass70; (2) Training and evaluation on StrasBypass70; (3) Training and evaluation on the joint MultiBypass140 dataset; (4) Training on BernBypass70, evaluation on StrasBypass70; (5) Training on StrasBypass70, evaluation on BernBypass70; Training on MultiBypass140, (6) evaluation on BernBypass70 and (7) evaluation on StrasBypass70.</p><p><strong>Results: </strong>The model's performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5) confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)).</p><p><strong>Conclusion: </strong>MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2249-2257"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541311/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140959178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
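The seven experiments form a train/evaluate matrix over training sets (Bern, Strasbourg, joint) and evaluation centers. A minimal sketch of that evaluation loop follows, with toy labels and a placeholder "model" standing in for the paper's deep networks.

```python
# Minimal sketch: cross-center train/evaluate matrix for generalization
# experiments; a majority-label predictor stands in for a real model.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
centers = {"Bern": rng.integers(0, 2, 100), "Stras": rng.integers(0, 2, 100)}

def train(labels):            # toy "training": memorize the majority phase
    return int(np.bincount(labels).argmax())

def evaluate(model, labels):  # toy metric: accuracy of the majority label
    return float((labels == model).mean())

train_sets = {"Bern": ["Bern"], "Stras": ["Stras"], "Multi": ["Bern", "Stras"]}
for tr, ev in product(train_sets, centers):
    model = train(np.concatenate([centers[c] for c in train_sets[tr]]))
    print(f"train={tr:5s} eval={ev:5s} acc={evaluate(model, centers[ev]):.2f}")
```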