{"title":"High-quality semi-supervised anomaly detection with generative adversarial networks.","authors":"Yuki Sato, Junya Sato, Noriyuki Tomiyama, Shoji Kido","doi":"10.1007/s11548-023-03031-9","DOIUrl":"10.1007/s11548-023-03031-9","url":null,"abstract":"<p><strong>Purpose: </strong>The visualization of an anomaly area is easier in anomaly detection methods that use generative models rather than classification models. However, achieving both anomaly detection accuracy and a clear visualization of anomalous areas is challenging. This study aimed to establish a method that combines both detection accuracy and clear visualization of anomalous areas using a generative adversarial network (GAN).</p><p><strong>Methods: </strong>In this study, StyleGAN2 with adaptive discriminator augmentation (StyleGAN2-ADA), which can generate high-resolution and high-quality images with limited number of datasets, was used as the image generation model, and pixel-to-style-to-pixel (pSp) encoder was used to convert images into intermediate latent variables. We combined existing methods for training and proposed a method for calculating anomaly scores using intermediate latent variables. The proposed method, which combines these two methods, is called high-quality anomaly GAN (HQ-AnoGAN).</p><p><strong>Results: </strong>The experimental results obtained using three datasets demonstrated that HQ-AnoGAN has equal or better detection accuracy than the existing methods. The results of the visualization of abnormal areas using the generated images showed that HQ-AnoGAN could generate more natural images than the existing methods and was qualitatively more accurate in the visualization of abnormal areas.</p><p><strong>Conclusion: </strong>In this study, HQ-AnoGAN comprising StyleGAN2-ADA and pSp encoder was proposed with an optimal anomaly score calculation method. The experimental results show that HQ-AnoGAN can achieve both high abnormality detection accuracy and clear visualization of abnormal areas; thus, HQ-AnoGAN demonstrates significant potential for application in medical imaging diagnosis cases where an explanation of diagnosis is required.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2121-2131"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71523347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery.","authors":"Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller-Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, Nicolas Padoy","doi":"10.1007/s11548-024-03166-3","DOIUrl":"10.1007/s11548-024-03166-3","url":null,"abstract":"<p><strong>Purpose: </strong>Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers.</p><p><strong>Methods: </strong>In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers, i.e., the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability and benchmark different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) Training and evaluation on BernBypass70; (2) Training and evaluation on StrasBypass70; (3) Training and evaluation on the joint MultiBypass140 dataset; (4) Training on BernBypass70, evaluation on StrasBypass70; (5) Training on StrasBypass70, evaluation on BernBypass70; Training on MultiBypass140, (6) evaluation on BernBypass70 and (7) evaluation on StrasBypass70.</p><p><strong>Results: </strong>The model's performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5) confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)).</p><p><strong>Conclusion: </strong>MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2249-2257"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541311/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140959178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning-based osteochondritis dissecans detection in ultrasound images with humeral capitellum localization.","authors":"Kenta Sasaki, Daisuke Fujita, Kenta Takatsuji, Yoshihiro Kotoura, Masataka Minami, Yusuke Kobayashi, Tsuyoshi Sukenari, Yoshikazu Kida, Kenji Takahashi, Syoji Kobashi","doi":"10.1007/s11548-023-03040-8","DOIUrl":"10.1007/s11548-023-03040-8","url":null,"abstract":"<p><strong>Purpose: </strong>Osteochondritis dissecans (OCD) of the humeral capitellum is a common cause of elbow disorders, particularly among young throwing athletes. Conservative treatment is the preferred treatment for managing OCD, and early intervention significantly influences the possibility of complete disease resolution. The purpose of this study is to develop a deep learning-based classification model in ultrasound images for computer-aided diagnosis.</p><p><strong>Methods: </strong>This paper proposes a deep learning-based OCD classification method in ultrasound images. The proposed method first detects the humeral capitellum detection using YOLO and then estimates the OCD probability of the detected region probability using VGG16. We hypothesis that the performance will be improved by eliminating unnecessary regions. To validate the performance of the proposed method, it was applied to 158 subjects (OCD: 67, Normal: 91) using five-fold-cross-validation.</p><p><strong>Results: </strong>The study demonstrated that the humeral capitellum detection achieved a mean average precision (mAP) of over 0.95, while OCD probability estimation achieved an average accuracy of 0.890, precision of 0.888, recall of 0.927, F1 score of 0.894, and an area under the curve (AUC) of 0.962. On the other hand, when the classification model was constructed for the entire image, accuracy, precision, recall, F1 score, and AUC were 0.806, 0.806, 0.932, 0.843, and 0.928, respectively. The findings suggest the high-performance potential of the proposed model for OCD classification in ultrasonic images.</p><p><strong>Conclusion: </strong>This paper introduces a deep learning-based OCD classification method. The experimental results emphasize the effectiveness of focusing on the humeral capitellum for OCD classification in ultrasound images. Future work should involve evaluating the effectiveness of employing the proposed method by physicians during medical check-ups for OCD.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2143-2152"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541362/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139486838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill.","authors":"Marzieh Ershad Langroodi, Xi Liu, Mark R Tousignant, Anthony M Jarc","doi":"10.1007/s11548-024-03248-2","DOIUrl":"10.1007/s11548-024-03248-2","url":null,"abstract":"<p><strong>Purpose: </strong>Surgical skill evaluation that relies on subjective scoring of surgical videos can be time-consuming and inconsistent across raters. We demonstrate differentiated opportunities for objective evaluation to improve surgeon training and performance.</p><p><strong>Methods: </strong>Subjective evaluation was performed using the Global evaluative assessment of robotic skills (GEARS) from both expert and crowd raters; whereas, objective evaluation used objective performance indicators (OPIs) derived from da Vinci surgical systems. Classifiers were trained for each evaluation method to distinguish between surgical expertise levels. This study includes one clinical task from a case series of robotic-assisted sleeve gastrectomy procedures performed by a single surgeon, and two training tasks performed by novice and expert surgeons, i.e., surgeons with no experience in robotic-assisted surgery (RAS) and those with more than 500 RAS procedures.</p><p><strong>Results: </strong>When comparing expert and novice skill levels, OPI-based classifier showed significantly higher accuracy than GEARS-based classifier on the more complex dissection task (OPI 0.93 ± 0.08 vs. GEARS 0.67 ± 0.18; 95% CI, 0.16-0.37; p = 0.02), but no significant difference was shown on the simpler suturing task. For the single-surgeon case series, both classifiers performed well when differentiating between early and late group cases with smaller group sizes and larger intervals between groups (OPI 0.9 ± 0.08; GEARS 0.87 ± 0.12; 95% CI, 0.02-0.04; p = 0.67). When increasing the group size to include more cases, thereby having smaller intervals between groups, OPIs demonstrated significantly higher accuracy (OPI 0.97 ± 0.06; GEARS 0.76 ± 0.07; 95% CI, 0.12-0.28; p = 0.004) in differentiating between the early/late cases.</p><p><strong>Conclusions: </strong>Objective methods for skill evaluation in RAS outperform subjective methods when (1) differentiating expertise in a technically challenging training task, and (2) identifying more granular differences along early versus late phases of a surgeon learning curve within a clinical task. Objective methods offer an opportunity for more accessible and scalable skill evaluation in RAS.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2259-2267"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142332054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Requirement analysis for an AI-based AR assistance system for surgical tools in the operating room: stakeholder requirements and technical perspectives.","authors":"E Cramer, A B Kucharski, J Kreimeier, S Andreß, S Li, C Walk, F Merkl, J Högl, P Wucherer, P Stefan, R von Eisenhart-Rothe, P Enste, D Roth","doi":"10.1007/s11548-024-03193-0","DOIUrl":"10.1007/s11548-024-03193-0","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to investigate the integration of augmented reality (AR) within the context of increasingly complex surgical procedures and instrument handling toward the transition to smart operating rooms (OR). In contrast to cumbersome paper-based surgical instrument manuals still used in the OR, we wish to provide surgical staff with an AR head-mounted display that provides in-situ visualization and guidance throughout the assembly process of surgical instruments. Our requirement analysis supports the development and provides guidelines for its transfer into surgical practice.</p><p><strong>Methods: </strong>A three-phase user-centered design approach was applied with online interviews, an observational study, and a workshop with two focus groups with scrub nurses, circulating nurses, surgeons, manufacturers, clinic IT staff, and members of the sterilization department. The requirement analysis was based on key criteria for usability. The data were analyzed via structured content analysis.</p><p><strong>Results: </strong>We identified twelve main problems with the current use of paper manuals. Major issues included sterile users' inability to directly handle non-sterile manuals, missing details, and excessive text information, potentially delaying procedure performance. Major requirements for AR-driven guidance fall into the categories of design, practicability, control, and integration into the current workflow. Additionally, further recommendations for technical development could be obtained.</p><p><strong>Conclusion: </strong>In conclusion, our insights have outlined a comprehensive spectrum of requirements that are essential for the successful implementation of an AI- and AR-driven guidance for assembling surgical instruments. The consistently appreciative evaluation by stakeholders underscores the profound potential of AR and AI technology as valuable assistance and guidance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2287-2296"},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541324/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal human-computer interaction in interventional radiology and surgery: a systematic literature review.","authors":"Josefine Schreiter, Florian Heinrich, Benjamin Hatscher, Danny Schott, Christian Hansen","doi":"10.1007/s11548-024-03263-3","DOIUrl":"https://doi.org/10.1007/s11548-024-03263-3","url":null,"abstract":"<p><strong>Purpose: </strong>As technology advances, more research dedicated to medical interactive systems emphasizes the integration of touchless and multimodal interaction (MMI). Particularly in surgical and interventional settings, this approach is advantageous because it maintains sterility and promotes a natural interaction. Past reviews have focused on investigating MMI in terms of technology and interaction with robots. However, none has put particular emphasis on analyzing these kind of interactions for surgical and interventional scenarios.</p><p><strong>Methods: </strong>Two databases were included in the query to search for relevant publications within the past 10 years. After identification, two screening steps followed which included eligibility criteria. A forward/backward search was added to identify more relevant publications. The analysis incorporated the clustering of references in terms of addressed medical field, input and output modalities, and challenges regarding the development and evaluation.</p><p><strong>Results: </strong>A sample of 31 references was obtained (16 journal articles, 15 conference papers). MMI was predominantly developed for laparoscopy and radiology and interaction with image viewers. The majority implemented two input modalities, with voice-hand interaction being the most common combination-voice for discrete and hand for continuous navigation tasks. The application of gaze, body, and facial control is minimal, primarily because of ergonomic concerns. Feedback was included in 81% publications, of which visual cues were most often applied.</p><p><strong>Conclusion: </strong>This work systematically reviews MMI for surgical and interventional scenarios over the past decade. In future research endeavors, we propose an enhanced focus on conducting in-depth analyses of the considered use cases and the application of standardized evaluation methods. Moreover, insights from various sectors, including but not limited to the gaming sector, should be exploited.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration.","authors":"Bo Zhang, Kui Chen, Yuhang Yao, Bo Wu, Qiang Li, Zheming Zhang, Peihua Fan, Wei Wang, Manxia Lin, Masakatsu G Fujie","doi":"10.1007/s11548-024-03247-3","DOIUrl":"https://doi.org/10.1007/s11548-024-03247-3","url":null,"abstract":"<p><strong>Purpose: </strong>Traditional surgical puncture robot systems based on computed tomography (CT) and infrared camera guidance have natural disadvantages for puncture of deformable soft tissues such as the liver. Liver movement and deformation caused by breathing are difficult to accurately assess and compensate by current technical solutions. We propose a semi-automatic robotic puncture system based on real-time ultrasound images to solve this problem.</p><p><strong>Method: </strong>Real-time ultrasound images and their spatial position information can be obtained by robot in this system. By recognizing target tissue in these ultrasound images and using reconstruction algorithm, 3D real-time ultrasound tissue point cloud can be constructed. Point cloud of the target tissue in the CT image can be obtained by using developed software. Through the point cloud registration method based on feature points, two point clouds above are registered. The puncture target will be automatically positioned, then robot quickly carries the puncture guide mechanism to the puncture site and guides the puncture. It takes about just tens of seconds from the start of image acquisition to completion of needle insertion. Patient can be controlled by a ventilator to temporarily stop breathing, and patient's breathing state does not need to be the same as taking CT scan.</p><p><strong>Results: </strong>The average operation time of 24 phantom experiments is 64.5 s, and the average error between the needle tip and the target point after puncture is 0.8 mm. Two animal puncture surgeries were performed, and the results indicated that the puncture errors of these two experiments are 1.76 mm and 1.81 mm, respectively.</p><p><strong>Conclusion: </strong>Robot system can effectively carry out and implement liver tissue puncture surgery, and the success rate of phantom experiments and experiments is 100%. It also shows that the puncture robot system has high puncture accuracy, short operation time, and great clinical value.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142512650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time ultrasound AR 3D visualization toward better topological structure perception for hepatobiliary surgery.","authors":"Yuqi Ji, Tianqi Huang, Yutong Wu, Ruiyang Li, Pengfei Wang, Jiahong Dong, Honegen Liao","doi":"10.1007/s11548-024-03273-1","DOIUrl":"https://doi.org/10.1007/s11548-024-03273-1","url":null,"abstract":"<p><strong>Purpose: </strong>Ultrasound serves as a crucial intraoperative imaging tool for hepatobiliary surgeons, enabling the identification of complex anatomical structures like blood vessels, bile ducts, and lesions. However, the reliance on manual mental reconstruction of 3D topologies from 2D ultrasound images presents significant challenges, leading to a pressing need for tools to assist surgeons with real-time identification of 3D topological anatomy.</p><p><strong>Methods: </strong>We propose a real-time ultrasound AR 3D visualization method for intraoperative 2D ultrasound imaging. Our system leverages backward alpha blending to integrate multi-planar ultrasound data effectively. To ensure continuity between 2D ultrasound planes, we employ spatial smoothing techniques to interpolate the widely spaced ultrasound planes. A dynamic 3D transfer function is also developed to enhance spatial representation through color differentiation.</p><p><strong>Results: </strong>Comparative experiments involving our AR visualization of 3D ultrasound, alongside AR visualization of 2D ultrasound and 2D visualization of 3D ultrasound, demonstrated that the proposed method significantly reduced operational time(110.25 ± 27.83 s compared to 292 ± 146.63 s and 365.25 ± 131.62 s), improved depth perception and comprehension of complex topologies, contributing to reduced pressure and increased personal satisfaction among users.</p><p><strong>Conclusion: </strong>Quantitative experimental results and feedback from both novice and experienced physicians highlight our system's exceptional ability to enhance the understanding of complex topological anatomy. This improvement is crucial for accurate ultrasound diagnosis and informed surgical decision-making, underscoring the system's clinical applicability.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142480228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive infrared patterns for microscopic surface reconstructions.","authors":"Srdjan Milosavljevic, Zoltan Bardosi, Yusuf Oezbek, Wolfgang Freysinger","doi":"10.1007/s11548-024-03242-8","DOIUrl":"https://doi.org/10.1007/s11548-024-03242-8","url":null,"abstract":"<p><strong>Purpose: </strong>Multi-zoom microscopic surface reconstructions of operating sites, especially in ENT surgeries, would allow multimodal image fusion for determining the amount of resected tissue, for recognizing critical structures, and novel tools for intraoperative quality assurance. State-of-the-art three-dimensional model creation of the surgical scene is challenged by the surgical environment, illumination, and the homogeneous structures of skin, muscle, bones, etc., that lack invariant features for stereo reconstruction.</p><p><strong>Methods: </strong>An adaptive near-infrared pattern projector illuminates the surgical scene with optimized patterns to yield accurate dense multi-zoom stereoscopic surface reconstructions. The approach does not impact the clinical workflow. The new method is compared to state-of-the-art approaches and is validated by determining its reconstruction errors relative to a high-resolution 3D-reconstruction of CT data.</p><p><strong>Results: </strong>200 surface reconstructions were generated for 5 zoom levels with 10 reconstructions for each object illumination method (standard operating room light, microscope light, random pattern and adaptive NIR pattern). For the adaptive pattern, the surface reconstruction errors ranged from 0.5 to 0.7 mm, as compared to 1-1.9 mm for the other approaches. The local reconstruction differences are visualized in heat maps.</p><p><strong>Conclusion: </strong>Adaptive near-infrared (NIR) pattern projection in microscopic surgery allows dense and accurate microscopic surface reconstructions for variable zoom levels of small and homogeneous surfaces. This could potentially aid in microscopic interventions at the lateral skull base and potentially open up new possibilities for combining quantitative intraoperative surface reconstructions with preoperative radiologic imagery.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}