{"title":"A novel 3D image registration technique for augmented reality vision in minimally invasive thoracoscopic pulmonary segmentectomy.","authors":"J J Peek, X Zhang, K Hildebrandt, S A Max, A H Sadeghi, A J J C Bogers, E A F Mahtab","doi":"10.1007/s11548-024-03308-7","DOIUrl":"https://doi.org/10.1007/s11548-024-03308-7","url":null,"abstract":"<p><strong>Purpose: </strong>In this feasibility study, we aimed to create a dedicated pulmonary augmented reality (AR) workflow to enable a semi-automated intraoperative overlay of the pulmonary anatomy during video-assisted thoracoscopic surgery (VATS) or robot-assisted thoracoscopic surgery (RATS).</p><p><strong>Methods: </strong>Initially, the stereoscopic cameras were calibrated to obtain the intrinsic camera parameters. Intraoperatively, stereoscopic images were recorded and a 3D point cloud was generated from these images. By manually selecting the bifurcation key points, the 3D segmentation (from the diagnostic CT scan) was registered onto the intraoperative 3D point cloud.</p><p><strong>Results: </strong>Image reprojection errors were 0.34 and 0.22 pixels for the VATS and RATS cameras, respectively. We created disparity maps and point clouds for all eight patients. Time for creation of the 3D AR overlay was 5 min. Validation of the point clouds was performed, resulting in a median absolute error of 0.20 mm [IQR 0.10-0.54]. We were able to visualize the AR overlay and identify the arterial bifurcations adequately for five patients. In addition to creating AR overlays of the visible or invisible structures intraoperatively, we successfully visualized branch labels and altered the transparency of the overlays.</p><p><strong>Conclusion: </strong>An algorithm was developed transforming the operative field into a 3D point cloud surface. This allowed for an accurate registration and visualization of preoperative 3D models. 
Using this system, surgeons can navigate through the patient's anatomy intraoperatively, especially during crucial moments, by visualizing otherwise invisible structures. This proposed registration method lays the groundwork for automated intraoperative AR navigation during minimally invasive pulmonary resections.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
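The registration workflow above builds a 3D point cloud from calibrated stereoscopic images. Purely as an illustration of the underlying geometry — not the authors' implementation — the standard pinhole back-projection from disparity can be sketched as follows, where the focal length `f`, principal point `(cx, cy)`, and stereo baseline `b` are assumed calibration parameters:

```python
def disparity_to_point(u, v, d, f, cx, cy, b):
    """Back-project pixel (u, v) with stereo disparity d (pixels)
    into camera coordinates via the pinhole model:
    Z = f*b/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    z = f * b / d
    return ((u - cx) * z / f, (v - cy) * z / f, z)

def cloud_from_disparity(disp, f, cx, cy, b):
    """Convert a sparse disparity map {(u, v): disparity} into a list
    of 3D points, skipping invalid (non-positive) disparities."""
    return [disparity_to_point(u, v, d, f, cx, cy, b)
            for (u, v), d in disp.items() if d > 0]
```

A dense implementation would operate on full image arrays, but the per-pixel arithmetic is the same.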
{"title":"FEM simulation of breast deformation with semi-fluid representation.","authors":"Shota Takahashi, Hiroshi Fujimoto, Katsuhiro Nasu, Toshiya Nakaguchi, Naoto Ienaga, Yoshihiro Kuroda","doi":"10.1007/s11548-024-03288-8","DOIUrl":"https://doi.org/10.1007/s11548-024-03288-8","url":null,"abstract":"<p><strong>Purpose: </strong>In image-guided surgery for breast cancer, the representation of the breast deformation between planning and surgery plays a key role. The breast deforms significantly and behaves as a fluid with some constraints. Concretely, the deep fat layer in the breast deforms fluidly due to its incomplete fixation to the chest wall, while the anchoring structures formed by fascia prevent excessive deformation. In this study, we propose a method to simulate the semi-fluid deformation of the breast, considering the fluidic properties of the adipose tissue under the constraints of the anchoring structures.</p><p><strong>Methods: </strong>The proposed method prioritizes anatomical features of the breast, enhancing tissue mobility near the chest wall and modeling the anchoring structure of the fascia along the inframammary fold. To simulate semi-fluid deformation, a constraint force from the anchoring structure is applied to a prone-positioned breast model using a finite element method.</p><p><strong>Results: </strong>The results of the evaluation indicate a tumor center registration error of 11.87 ± 4.05 mm. Additionally, we verified how the semi-fluid representation affects the registration error. The tumor's Hausdorff distance decreased from 12.89 ± 6.24 mm to 11.50 ± 4.38 mm when semi-fluidity was considered.</p><p><strong>Conclusion: </strong>The results showed that the use of semi-fluid representation tends to reduce registration errors. 
These results suggest that the proposed method could improve the accuracy of breast posture conversion.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142830825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
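The abstract above reports the tumor's Hausdorff distance before and after adding the semi-fluid representation. For reference, the symmetric Hausdorff distance between two point sets can be computed as below — this is the generic textbook definition, not the authors' evaluation code:

```python
import math

def directed_hausdorff(a, b):
    """Directed Hausdorff distance: for each point in a, take the
    distance to its nearest neighbour in b, then take the maximum."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 3D point sets:
    the larger of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

For surface meshes sampled at thousands of vertices, a spatial index (e.g. a k-d tree) would replace the brute-force nearest-neighbour search.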
{"title":"Radiation source detection for the accurate location of lymph node metastases using robotic forceps-type coincidence radiation detector.","authors":"Kazuya Kawamura, Ayano Nakajima, Shigeki Ito, Miwako Takahashi, Taiga Yamaya","doi":"10.1007/s11548-024-03296-8","DOIUrl":"https://doi.org/10.1007/s11548-024-03296-8","url":null,"abstract":"<p><strong>Purpose: </strong>We have developed a forceps-type coincidence radiation detector for supporting lymph node dissection in esophageal cancer treatment. For precise detection, this study aims to measure the 2D point-spread function of the detector at three different tip angles and to devise a method to determine the position of a point source using this function.</p><p><strong>Method: </strong>The 2D sensitivity distribution on the surface of the detector was investigated to assess sensitivity variation caused by differences in the relative positions of the detector and radiation source. Based on the results, we identified the peak sensitivity value and proposed a detection method using this value. We evaluated the effectiveness of the proposed method by detecting the radiation source location using a simulated distribution.</p><p><strong>Result: </strong>From the radiation sensitivity distribution measurements, we observed a gradual decrease in radiation detection sensitivity from the center toward the edges of the detector surface. Additionally, we verified that the peak sensitivity value was attainable. Through the basic verification of the detection method, we confirmed that the radiation source location could be detected within a maximum error of 1.4 mm.</p><p><strong>Conclusion: </strong>We developed a peak value search method aimed at mitigating sensitivity variations by leveraging the sensitivity distribution across the detector surface. 
The proposed device is thought to be able to quantitatively evaluate the desired target, assuming that the field of view can be limited to the area clamped by the detector. As a next research step, more precise search methods should be verified in an environment resembling that of the intended clinical use.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
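The peak value search described above locates the source from a measured 2D sensitivity distribution. As a minimal sketch only — grid argmax with a sensitivity-weighted centroid refinement, on hypothetical sample data; the authors' actual search method is not detailed in the abstract — such a search might look like:

```python
def peak_search(sens):
    """Locate the grid cell with maximal counts in a 2D sensitivity
    map given as {(x_mm, y_mm): counts}, then refine the estimate
    with a counts-weighted centroid over the whole map."""
    peak = max(sens, key=sens.get)          # cell with highest sensitivity
    total = sum(sens.values())
    cx = sum(x * s for (x, y), s in sens.items()) / total
    cy = sum(y * s for (x, y), s in sens.items()) / total
    return peak, (cx, cy)
```

On real data the centroid would typically be restricted to a neighbourhood of the peak to suppress background counts.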
{"title":"Avoiding missed opportunities in AI for radiology.","authors":"Jonathan Scheiner, Leonard Berliner","doi":"10.1007/s11548-024-03295-9","DOIUrl":"10.1007/s11548-024-03295-9","url":null,"abstract":"<p><strong>Purpose: </strong>In the last decade, the development of Deep Learning and its variants, based on the application of artificial neural networks, has reinvigorated Artificial Intelligence (AI). As a result, many new applications of AI in medicine, especially Radiology, have been introduced. This resurgence in AI, and its diverse clinical and nonclinical applications throughout healthcare, requires a thorough understanding to reap the potential benefits and avoid the potential pitfalls.</p><p><strong>Methods: </strong>To realize the full potential of AI in medicine, a highly coordinated approach should be undertaken to select, support and finance more highly focused AI projects. By studying and understanding the successes and failures, and strengths and limitations, of AI in Radiology, it is possible to seek and develop the most clinically relevant AI algorithms. The authors have reviewed their clinical practice regarding the use of AI to determine applications in which AI can add both clinical and remunerative benefits.</p><p><strong>Results: </strong>Review of our policies and applications regarding AI in the Department of Radiology emphasized that, at the time of this writing, AI has been useful in the detection of specific clinical entities for which the AI algorithms have been designed. In addition to helping to reduce diagnostic errors, AI offers an important opportunity to prioritize positive cases, such as pulmonary embolism or intracranial hemorrhage. It has become apparent that the detection of certain conditions, such as incidental and unsuspected cerebral aneurysms can be used to initiate a variety of patient-oriented activities. 
Finding an unsuspected brain aneurysm is not only of clinical importance to the patient, but the required clinical workup and management of the patient can help generate reimbursement that helps defray the cost of AI implementations. A program for screening, clinical management, and follow-up, facilitated by the AI detection of incidental brain aneurysms, has been implemented at our multi-hospital healthcare system.</p><p><strong>Conclusion: </strong>We feel that it is possible to avoid missed opportunities for AI in Radiology and create AI tools to enhance medical wisdom and improve patient care, within a fiscally responsive environment.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2297-2300"},"PeriodicalIF":2.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142711736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a human machine interface for robotically assisted surgery optimized for laparoscopic workflows.","authors":"Luca Wegener, Dirk Wilhelm, Maximilian Berlet, Jonas Fuchtmann","doi":"10.1007/s11548-024-03239-3","DOIUrl":"10.1007/s11548-024-03239-3","url":null,"abstract":"<p><strong>Introduction: </strong>In robotic-assisted surgery (RAS), the input device is the primary site for the flow of information between the user and the robot. Most RAS systems remove the surgeon's console from the sterile surgical site. While beneficial for performing lengthy procedures with complex systems, this ultimately lacks the flexibility that comes with the surgeon remaining at the sterile site.</p><p><strong>Methods: </strong>A prototype of an input device for RAS is constructed. The focus lies on intuitive control for surgeons and seamless integration into the surgical workflow within the sterile environment. The kinematic design is translated from the kinematics of laparoscopic surgery. The device uses three degrees of freedom of a flexible instrument as input. In an evaluation, the prototype's performance is compared to that of a commercially available device, using metrics that quantify the surgeons' performance with each input device in a virtual environment implemented for this purpose.</p><p><strong>Results: </strong>The evaluation of the two input devices shows statistically significant differences in the performance metrics. With the proposed prototype, the surgeons perform the tasks faster, more precisely, and with fewer errors.</p><p><strong>Conclusion: </strong>The prototype is an efficient and intuitive input device for surgeons with laparoscopic experience. 
The placement in the sterile working area allows for seamless integration into the surgical workflow and can potentially enable new robotic approaches.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2301-2309"},"PeriodicalIF":2.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607037/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141914558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive infrared patterns for microscopic surface reconstructions.","authors":"Srdjan Milosavljevic, Zoltan Bardosi, Yusuf Oezbek, Wolfgang Freysinger","doi":"10.1007/s11548-024-03242-8","DOIUrl":"10.1007/s11548-024-03242-8","url":null,"abstract":"<p><strong>Purpose: </strong>Multi-zoom microscopic surface reconstructions of operating sites, especially in ENT surgeries, would allow multimodal image fusion for determining the amount of resected tissue, for recognizing critical structures, and novel tools for intraoperative quality assurance. State-of-the-art three-dimensional model creation of the surgical scene is challenged by the surgical environment, illumination, and the homogeneous structures of skin, muscle, bones, etc., that lack invariant features for stereo reconstruction.</p><p><strong>Methods: </strong>An adaptive near-infrared pattern projector illuminates the surgical scene with optimized patterns to yield accurate dense multi-zoom stereoscopic surface reconstructions. The approach does not impact the clinical workflow. The new method is compared to state-of-the-art approaches and is validated by determining its reconstruction errors relative to a high-resolution 3D-reconstruction of CT data.</p><p><strong>Results: </strong>200 surface reconstructions were generated for 5 zoom levels with 10 reconstructions for each object illumination method (standard operating room light, microscope light, random pattern and adaptive NIR pattern). For the adaptive pattern, the surface reconstruction errors ranged from 0.5 to 0.7 mm, as compared to 1-1.9 mm for the other approaches. The local reconstruction differences are visualized in heat maps.</p><p><strong>Conclusion: </strong>Adaptive near-infrared (NIR) pattern projection in microscopic surgery allows dense and accurate microscopic surface reconstructions for variable zoom levels of small and homogeneous surfaces. 
This could potentially aid in microscopic interventions at the lateral skull base and open up new possibilities for combining quantitative intraoperative surface reconstructions with preoperative radiologic imagery.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2311-2319"},"PeriodicalIF":2.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607032/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalisation capabilities of machine-learning algorithms for the detection of the subthalamic nucleus in micro-electrode recordings.","authors":"Thibault Martin, Pierre Jannin, John S H Baxter","doi":"10.1007/s11548-024-03202-2","DOIUrl":"10.1007/s11548-024-03202-2","url":null,"abstract":"<p><strong>Purpose: </strong>Micro-electrode recordings (MERs) are a key intra-operative modality used during deep brain stimulation (DBS) electrode implantation, which allow a trained neurophysiologist to infer the anatomy in which the electrode is placed. As DBS targets are small, such inference is necessary to confirm that the electrode is correctly positioned. Recently, machine learning techniques have been used to augment the neurophysiologist's capability. The goal of this paper is to investigate the generalisability of these methods with respect to different clinical centres and training paradigms.</p><p><strong>Methods: </strong>Five deep learning algorithms for binary classification of MER signals have been implemented. Three databases from two different clinical centres have also been collected, differing in size, acquisition hardware, and annotation protocol. Each algorithm has initially been trained on the largest database, then either directly tested or fine-tuned on the smaller databases in order to estimate its generalisability. As a reference, the algorithms have also been trained from scratch on the smaller databases in order to estimate the effect of the differing database sizes and annotation systems.</p><p><strong>Results: </strong>Each network shows significantly reduced performance (on the order of a 6.5% to 16.0% reduction in balanced accuracy) when applied out-of-distribution. This reduction can be ameliorated by fine-tuning the network on the new database via transfer learning. 
However, even for these small databases, retraining from scratch appears to offer performance equivalent to fine-tuning with transfer learning, albeit at the expense of significantly longer training times.</p><p><strong>Conclusion: </strong>Generalisability is an important criterion for the success of machine learning algorithms in the clinic. We have demonstrated that a variety of recent machine learning algorithms for MER classification are negatively affected by domain shift, but that this can be quickly ameliorated through simple transfer learning procedures that can be readily performed for new centres.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2445-2451"},"PeriodicalIF":2.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141477935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
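Balanced accuracy, the metric used above to quantify the out-of-distribution performance drop, can be computed for binary labels as follows — a generic definition, independent of the paper's networks:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy for binary labels: the mean of sensitivity
    (recall on the positive class) and specificity (recall on the
    negative class), which is robust to class imbalance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)
```

With imbalanced MER data (few in-target segments), plain accuracy would reward always predicting the majority class; balanced accuracy does not.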
{"title":"TraumaFlow-development of a workflow-based clinical decision support system for the management of severe trauma cases.","authors":"Juliane Neumann, Christoph Vogel, Lisa Kießling, Gunther Hempel, Christian Kleber, Georg Osterhoff, Thomas Neumuth","doi":"10.1007/s11548-024-03191-2","DOIUrl":"10.1007/s11548-024-03191-2","url":null,"abstract":"<p><strong>Purpose: </strong>The treatment of severely injured patients in the resuscitation room of an emergency department requires numerous critical decisions, often under immense time pressure, which places very high demands on the facility and the interdisciplinary team. Computer-based cognitive aids are a valuable tool, especially in education and training of medical professionals. For the management of polytrauma cases, TraumaFlow, a workflow management-based clinical decision support system, was developed. The system supports the registration and coordination of activities in the resuscitation room and actively recommends diagnosis and treatment actions.</p><p><strong>Methods: </strong>Based on medical guidelines, a resuscitation room algorithm was developed according to the cABCDE scheme. The algorithm was then modeled using the process description language BPMN 2.0 and implemented in a workflow management system. In addition, a web-based user interface that provides assistance functions was developed. An evaluation study was conducted with 11 final-year medical students and three residents to assess the applicability of TraumaFlow in a case-based training scenario.</p><p><strong>Results: </strong>TraumaFlow significantly improved guideline-based decision-making, provided more complete therapy, and reduced treatment errors. The system was shown to be beneficial not only for the education of low- and medium-experienced users but also for the training of highly experienced physicians. 
92% of the participants felt more confident with computer-aided decision support and considered TraumaFlow useful for the training of polytrauma treatment. In addition, 62% acknowledged a higher training effect.</p><p><strong>Conclusion: </strong>TraumaFlow enables real-time decision support for the treatment of polytrauma patients. It improves guideline-based decision-making in complex and critical situations and reduces treatment errors. Supporting functions, such as the automatic treatment documentation and the calculation of medical scores, enable the trauma team to focus on the primary task. TraumaFlow was developed to support the training of medical students and experienced professionals. Each training session is documented and can be objectively and qualitatively evaluated.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2399-2409"},"PeriodicalIF":2.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607099/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141181468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
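TraumaFlow models the resuscitation-room algorithm as an executable workflow that recommends the next action. Purely as an illustration of workflow-based step recommendation — the step names and the strictly linear order are hypothetical, not TraumaFlow's actual BPMN 2.0 model, which supports branching — a cABCDE-style checklist can be sketched as:

```python
# Hypothetical linear workflow in the spirit of the cABCDE scheme;
# real resuscitation algorithms branch on patient state.
STEPS = ["c: control critical bleeding", "A: airway", "B: breathing",
         "C: circulation", "D: disability", "E: exposure"]

class Workflow:
    def __init__(self, steps=STEPS):
        self.steps, self.done = list(steps), []

    def next_step(self):
        """Recommend the first step not yet documented as completed."""
        for s in self.steps:
            if s not in self.done:
                return s
        return None  # workflow complete

    def complete(self, step):
        """Document a step; reject out-of-order completion."""
        if step != self.next_step():
            raise ValueError(f"out-of-order step: {step}")
        self.done.append(step)
```

A workflow engine adds persistence, timers, and documentation on top of this core recommend/complete loop.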
{"title":"The sound of surgery-development of an acoustic trocar system enabling laparoscopic sound analysis.","authors":"Daniel Ostler-Mildner, Luca Wegener, Jonas Fuchtmann, Hubertus Feussner, Dirk Wilhelm, Nassir Navab","doi":"10.1007/s11548-024-03183-2","DOIUrl":"10.1007/s11548-024-03183-2","url":null,"abstract":"<p><strong>Purpose: </strong>Acoustic signals can contain valuable information in medicine and specifically in surgery. While laparoscopy depends mainly on visual information, our goal is to develop the means to capture and process acoustic information during laparoscopic surgery.</p><p><strong>Methods: </strong>To achieve this, we iteratively developed three prototypes that overcome the abdominal wall as a sound barrier and can be used with standard trocars. We evaluated them in terms of clinical applicability and sound transmission quality. Furthermore, the applicability of each prototype for sound classification based on machine learning was evaluated.</p><p><strong>Results: </strong>Our developed prototypes for recording airborne sound from the intraperitoneal cavity represent a promising solution suitable for real-world clinical usage. All three prototypes fulfill our set requirements in terms of clinical applicability (i.e., air-tightness, invasiveness, sterility) and show promising results regarding their acoustic characteristics and the associated results on ML-based sound classification.</p><p><strong>Conclusion: </strong>In summary, our prototypes for capturing acoustic information during laparoscopic surgeries integrate seamlessly with existing procedures and have the potential to augment the surgeon's perception. 
This advancement could change how surgeons interact with and understand the surgical field.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2389-2397"},"PeriodicalIF":2.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607030/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
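The abstract above mentions ML-based classification of the captured laparoscopic sound. Two frame-level audio features commonly used as inputs to such classifiers — RMS energy and zero-crossing rate — can be sketched as follows; the paper's actual feature set is not specified in the abstract:

```python
import math

def rms(frame):
    """Root-mean-square energy of an audio frame (a loudness proxy)."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign changes;
    a cheap spectral proxy often used in audio classification."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)
```

In practice, such features would be computed per short window (e.g. 20-50 ms) and fed to a classifier alongside spectral features such as MFCCs.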