Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems
Benjamin D Killeen, Jonas Winter, Wenhao Gu, Alejandro Martin-Gomez, Russell H Taylor, Greg Osgood, Mathias Unberath
{"title":"利用机器人 X 射线系统实现所需视角的混合现实界面","authors":"Benjamin D Killeen, Jonas Winter, Wenhao Gu, Alejandro Martin-Gomez, Russell H Taylor, Greg Osgood, Mathias Unberath","doi":"10.1080/21681163.2022.2154272","DOIUrl":null,"url":null,"abstract":"<p><p>Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system, however, what pose exactly corresponds to a desired view is challenging. Currently these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during \"fluoro hunting\" for the desired view or standard plane.</p>","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10406465/pdf/","citationCount":"3","resultStr":"{\"title\":\"Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems.\",\"authors\":\"Benjamin D Killeen, Jonas Winter, Wenhao Gu, Alejandro Martin-Gomez, Russell H Taylor, Greg Osgood, Mathias Unberath\",\"doi\":\"10.1080/21681163.2022.2154272\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system, however, what pose exactly corresponds to a desired view is challenging. Currently these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. 
Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during \\\"fluoro hunting\\\" for the desired view or standard plane.</p>\",\"PeriodicalId\":51800,\"journal\":{\"name\":\"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10406465/pdf/\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/21681163.2022.2154272\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2022/12/7 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/21681163.2022.2154272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/12/7 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 3
Abstract
Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. However, informing the system which pose corresponds to a desired view is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces that let the surgeon command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment that synchronously renders digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may substantially reduce the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
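The geometric core of paradigm (1) is mapping the pointer's pose to an imaging pose: the pointer's axis defines the principal ray, and the system places its source and detector along that ray. The sketch below illustrates one way this mapping could work; the function name, parameter names, and default distances are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def view_from_principal_ray(tip, direction,
                            source_to_point=700.0,
                            source_to_detector=1000.0):
    """Place an X-ray source and detector along a pointer-defined principal ray.

    tip       : 3-vector, point on the anatomy touched by the pointer (mm).
    direction : 3-vector along the pointer axis, pointing into the anatomy.
    Returns (source, detector_center, R), where the rows of R are the in-plane
    detector axes u, v and the principal-ray direction d.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                  # unit principal-ray direction

    # Source sits "behind" the touched point; detector lies on the far side.
    source = np.asarray(tip, dtype=float) - source_to_point * d
    detector_center = source + source_to_detector * d

    # Orthonormal in-plane detector axes, perpendicular to the principal ray.
    up = np.array([0.0, 0.0, 1.0])
    if abs(up @ d) > 0.99:                     # near-vertical ray: swap up vector
        up = np.array([0.0, 1.0, 0.0])
    u = np.cross(up, d)
    u = u / np.linalg.norm(u)
    v = np.cross(d, u)
    return source, detector_center, np.stack([u, v, d])
```

From such a pose, paradigm (2) could synthesize a digitally reconstructed radiograph by casting rays from the source through a preoperative CT volume toward detector-plane points of the form detector_center + x*u + y*v, while paradigm (3) instead lets the user position the virtual source directly in the mixed reality scene.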
About the journal:
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization is an international journal whose main goals are to promote solutions of excellence for both imaging and visualization of biomedical data, and to establish links among researchers, clinicians, the medical technology sector, and end-users. The journal provides a comprehensive forum for discussion of the current state of the art in the scientific fields related to imaging and visualization, including, but not limited to:
- Applications of Imaging and Visualization
- Computational Bio-imaging and Visualization
- Computer Aided Diagnosis, Surgery, Therapy and Treatment
- Data Processing and Analysis
- Devices for Imaging and Visualization
- Grid and High Performance Computing for Imaging and Visualization
- Human Perception in Imaging and Visualization
- Image Processing and Analysis
- Image-based Geometric Modelling
- Imaging and Visualization in Biomechanics
- Imaging and Visualization in Biomedical Engineering
- Medical Clinics
- Medical Imaging and Visualization
- Multi-modal Imaging and Visualization
- Multiscale Imaging and Visualization
- Scientific Visualization
- Software Development for Imaging and Visualization
- Telemedicine Systems and Applications
- Virtual Reality
- Visual Data Mining and Knowledge Discovery