David G. Black, Y. Yazdi, Amir Hossein Hadi Hosseinabadi, S. Salcudean
{"title":"人类远程操作-触觉启用的远程超声混合现实系统","authors":"David G. Black, Y. Yazdi, Amir Hossein Hadi Hosseinabadi, S. Salcudean","doi":"10.36227/techrxiv.15175869","DOIUrl":null,"url":null,"abstract":"\n \n \n Current teleguidance methods include verbal guidance and robotic teleoperation, which present tradeoffs between\nprecision and latency versus flexibility and cost. We present\na novel concept of \"human teleoperation\" which bridges the\ngap between these two methods. A prototype teleultrasound\nsystem was implemented which shows the concept’s efficacy. An\nexpert remotely \"teloperates\" a person (the follower) wearing a\nmixed reality headset by controlling a virtual ultrasound probe\nprojected into the person’s scene. The follower matches the pose\nand force of the virtual device with a real probe. The pose,\nforce, video, ultrasound images, and 3-dimensional mesh of the\nscene are fed back to the expert. In this control framework,\nthe input and the actuation are carried out by people, but with\nnear robot-like latency and precision. This allows teleguidance\nthat is more precise and fast than verbal guidance, yet more\nflexible and inexpensive than robotic teleoperation. The system\nwas subjected to tests that show its effectiveness, including mean\nteleoperation latencies of 0.27 seconds and errors of 7 mm and\n6◦ in pose tracking. The system was also tested with an expert\nultrasonographer and four patients and was found to improve\nthe precision and speed of two teleultrasound procedures.\n\n \n \n","PeriodicalId":56306,"journal":{"name":"Human-Computer Interaction","volume":"15 1","pages":""},"PeriodicalIF":4.5000,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Human Teleoperation - A Haptically Enabled Mixed Reality System for Teleultrasound\",\"authors\":\"David G. Black, Y. Yazdi, Amir Hossein Hadi Hosseinabadi, S. Salcudean\",\"doi\":\"10.36227/techrxiv.15175869\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n \\n \\n Current teleguidance methods include verbal guidance and robotic teleoperation, which present tradeoffs between\\nprecision and latency versus flexibility and cost. We present\\na novel concept of \\\"human teleoperation\\\" which bridges the\\ngap between these two methods. A prototype teleultrasound\\nsystem was implemented which shows the concept’s efficacy. An\\nexpert remotely \\\"teloperates\\\" a person (the follower) wearing a\\nmixed reality headset by controlling a virtual ultrasound probe\\nprojected into the person’s scene. The follower matches the pose\\nand force of the virtual device with a real probe. The pose,\\nforce, video, ultrasound images, and 3-dimensional mesh of the\\nscene are fed back to the expert. In this control framework,\\nthe input and the actuation are carried out by people, but with\\nnear robot-like latency and precision. This allows teleguidance\\nthat is more precise and fast than verbal guidance, yet more\\nflexible and inexpensive than robotic teleoperation. The system\\nwas subjected to tests that show its effectiveness, including mean\\nteleoperation latencies of 0.27 seconds and errors of 7 mm and\\n6◦ in pose tracking. 
The system was also tested with an expert\\nultrasonographer and four patients and was found to improve\\nthe precision and speed of two teleultrasound procedures.\\n\\n \\n \\n\",\"PeriodicalId\":56306,\"journal\":{\"name\":\"Human-Computer Interaction\",\"volume\":\"15 1\",\"pages\":\"\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2021-08-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human-Computer Interaction\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.36227/techrxiv.15175869\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human-Computer Interaction","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.36227/techrxiv.15175869","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Human Teleoperation - A Haptically Enabled Mixed Reality System for Teleultrasound
Current teleguidance methods include verbal guidance and robotic teleoperation, which trade off precision and latency against flexibility and cost. We present a novel concept of "human teleoperation" that bridges the gap between these two methods. A prototype teleultrasound system was implemented that demonstrates the concept's efficacy. An expert remotely "teleoperates" a person (the follower) wearing a mixed reality headset by controlling a virtual ultrasound probe projected into the person's scene. The follower matches the pose and force of the virtual device with a real probe. The pose, force, video, ultrasound images, and 3-dimensional mesh of the scene are fed back to the expert. In this control framework, the input and the actuation are carried out by people, but with near robot-like latency and precision. This allows teleguidance that is more precise and faster than verbal guidance, yet more flexible and less expensive than robotic teleoperation. The system was subjected to tests that show its effectiveness, including mean teleoperation latencies of 0.27 seconds and pose-tracking errors of 7 mm and 6°. The system was also tested with an expert ultrasonographer and four patients and was found to improve the precision and speed of two teleultrasound procedures.
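To make the control framework in the abstract concrete, the sketch below illustrates one way a follower-side guidance loop could compute the tracking error between the expert's commanded virtual probe and the follower's real probe, which the mixed reality display would then render as guidance. This is a minimal illustration only; all class names, fields, and thresholds are hypothetical and do not reproduce the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): compare the expert's
# commanded probe state against the follower's tracked real probe and report
# the residual pose and force error that the headset could display.
from dataclasses import dataclass
import numpy as np


@dataclass
class ProbeState:
    position: np.ndarray      # 3-vector, metres
    orientation: np.ndarray   # 3x3 rotation matrix
    force: float              # contact force along the probe axis, newtons


def rotation_error_deg(r_cmd: np.ndarray, r_real: np.ndarray) -> float:
    """Angle (degrees) of the relative rotation between commanded and real probe."""
    r_rel = r_cmd @ r_real.T
    cos_theta = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))


def guidance_error(cmd: ProbeState, real: ProbeState) -> dict:
    """Tracking error between the virtual (commanded) and real probe."""
    return {
        "position_mm": float(np.linalg.norm(cmd.position - real.position) * 1000.0),
        "orientation_deg": rotation_error_deg(cmd.orientation, real.orientation),
        "force_N": cmd.force - real.force,
    }


# Example: a real probe about 7 mm and 6 degrees away from the commanded pose,
# roughly the mean tracking errors reported in the abstract.
theta = np.radians(6.0)
cmd = ProbeState(np.array([0.10, 0.20, 0.05]), np.eye(3), 5.0)
real = ProbeState(np.array([0.10, 0.207, 0.05]),
                  np.array([[np.cos(theta), -np.sin(theta), 0.0],
                            [np.sin(theta),  np.cos(theta), 0.0],
                            [0.0, 0.0, 1.0]]),
                  4.5)
print(guidance_error(cmd, real))
```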
Journal description:
Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting
on fundamental research in human-computer interaction. The goal of HCI is to be a journal
of the highest quality that combines the best research and design work to extend our
understanding of human-computer interaction. The target audience is the research
community with an interest in both the scientific implications and practical relevance of
how interactive computer systems should be designed and how they are actually used. HCI is
concerned with the theoretical, empirical, and methodological issues of interaction science
and system design as it affects the user.