{"title":"基于视觉伺服的机器人肾脏超声成像图像搜索策略","authors":"Takumi Fujibayashi, Norihiro Koizumi, Yu Nishiyama, Jiayi Zhou, Hiroyuki Tsukihara, Kiyoshi Yoshinaka, Ryosuke Tsumura","doi":"10.20965/jrm.2023.p1281","DOIUrl":null,"url":null,"abstract":"Ultrasound (US) imaging is beneficial for kidney diagnosis; however, it involves sophisticated tasks that must be performed by physicians to obtain the target image. We propose a target-image search strategy combining visual servoing and deep learning-based image evaluation for robotic kidney US imaging. The search strategy is designed by mimicking physicians’ motion axis of the US probe. By controlling the position of the US probe along each of the motion axes while evaluating the obtained US images based on an anatomical feature extraction method via instance segmentation with YOLACT++, we are able to search for an optimal target image. The proposed approach was validated through phantom studies. The results showed that the proposed approach could find the target kidney images with error rates of 2.88±1.76 mm and 2.75±3.36°. Thus, the proposed method enables the accurate identification of the target image, which highlights its potential for application in autonomous kidney US imaging.","PeriodicalId":51661,"journal":{"name":"Journal of Robotics and Mechatronics","volume":"68 10","pages":"0"},"PeriodicalIF":0.9000,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Image Search Strategy via Visual Servoing for Robotic Kidney Ultrasound Imaging\",\"authors\":\"Takumi Fujibayashi, Norihiro Koizumi, Yu Nishiyama, Jiayi Zhou, Hiroyuki Tsukihara, Kiyoshi Yoshinaka, Ryosuke Tsumura\",\"doi\":\"10.20965/jrm.2023.p1281\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Ultrasound (US) imaging is beneficial for kidney diagnosis; however, it involves sophisticated tasks that must be performed by physicians to obtain the target image. We propose a target-image search strategy combining visual servoing and deep learning-based image evaluation for robotic kidney US imaging. The search strategy is designed by mimicking physicians’ motion axis of the US probe. By controlling the position of the US probe along each of the motion axes while evaluating the obtained US images based on an anatomical feature extraction method via instance segmentation with YOLACT++, we are able to search for an optimal target image. The proposed approach was validated through phantom studies. The results showed that the proposed approach could find the target kidney images with error rates of 2.88±1.76 mm and 2.75±3.36°. 
Thus, the proposed method enables the accurate identification of the target image, which highlights its potential for application in autonomous kidney US imaging.\",\"PeriodicalId\":51661,\"journal\":{\"name\":\"Journal of Robotics and Mechatronics\",\"volume\":\"68 10\",\"pages\":\"0\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2023-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Robotics and Mechatronics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.20965/jrm.2023.p1281\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Robotics and Mechatronics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20965/jrm.2023.p1281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ROBOTICS","Score":null,"Total":0}
Image Search Strategy via Visual Servoing for Robotic Kidney Ultrasound Imaging
Ultrasound (US) imaging is beneficial for kidney diagnosis; however, obtaining the target image involves sophisticated tasks that must be performed by physicians. We propose a target-image search strategy that combines visual servoing with deep learning-based image evaluation for robotic kidney US imaging. The search strategy is designed by mimicking the motion axes along which physicians manipulate the US probe. By controlling the position of the US probe along each motion axis while evaluating the acquired US images with an anatomical feature extraction method based on instance segmentation with YOLACT++, the system searches for the optimal target image. The proposed approach was validated through phantom studies. The results showed that it could find the target kidney images with positional and angular errors of 2.88±1.76 mm and 2.75±3.36°, respectively. Thus, the proposed method enables accurate identification of the target image, which highlights its potential for application in autonomous kidney US imaging.
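To make the described search strategy concrete, below is a minimal Python sketch of a single-axis image search loop: the probe is stepped along one motion axis, each captured frame is scored by a segmentation-based evaluation, and the best-scoring position is kept. This is not the authors' implementation; the `capture_at` callback, the `segment_kidney` placeholder, and the area-and-centering score are illustrative assumptions standing in for the paper's YOLACT++-based anatomical feature extraction.

```python
# Minimal, illustrative sketch of a one-axis target-image search; NOT the authors' implementation.
# Assumptions (hypothetical): `capture_at` maps a probe position to a US frame, and
# `segment_kidney` stands in for the paper's YOLACT++-based anatomical feature extraction.

import numpy as np


def segment_kidney(image: np.ndarray) -> np.ndarray:
    """Placeholder for instance segmentation (YOLACT++ in the paper).

    Here we simply threshold the image so the sketch runs end to end;
    a real system would return the predicted kidney mask.
    """
    return (image > image.mean()).astype(np.uint8)


def image_score(image: np.ndarray) -> float:
    """Score a US frame by how large and well-centered the segmented kidney is."""
    mask = segment_kidney(image)
    area = mask.sum()
    if area == 0:
        return 0.0
    ys, xs = np.nonzero(mask)
    center_y, center_x = image.shape[0] / 2.0, image.shape[1] / 2.0
    offset = np.linalg.norm([ys.mean() - center_y, xs.mean() - center_x])
    return float(area) / (1.0 + offset)  # favor large, centered kidney regions


def search_axis(capture_at, positions):
    """Sweep one probe motion axis and return the position with the best image.

    capture_at: callable mapping a probe position (mm or deg) to a US frame.
    positions:  iterable of candidate positions along this motion axis.
    """
    best_pos, best_score = None, -np.inf
    for pos in positions:
        frame = capture_at(pos)
        score = image_score(frame)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos, best_score


if __name__ == "__main__":
    # Synthetic stand-in for a probe: the "kidney" appears largest near position 12 mm.
    rng = np.random.default_rng(0)

    def fake_capture(pos: float) -> np.ndarray:
        img = rng.normal(0.1, 0.05, size=(64, 64))
        radius = max(2, int(20 - abs(pos - 12.0)))
        yy, xx = np.ogrid[:64, :64]
        img[(yy - 32) ** 2 + (xx - 32) ** 2 < radius ** 2] += 1.0
        return img

    # Coarse sweep along one translational axis, mimicking one step of the strategy.
    pos, score = search_axis(fake_capture, np.arange(0.0, 25.0, 1.0))
    print(f"best position along axis: {pos:.1f} mm (score {score:.1f})")
```

In the paper's setting, a sweep like this would be repeated along each of the probe motion axes (translations and rotations), with the evaluation driven by the YOLACT++ segmentation of anatomical features rather than the toy score used here.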
About the Journal:
First published in 1989, the Journal of Robotics and Mechatronics (JRM) has the longest publication history in the world in this field, having published over 2,000 works exclusively on robotics and mechatronics since its first issue. The journal publishes academic papers, development reports, reviews, letters, notes, and discussions. JRM is a peer-reviewed journal in fields such as robotics, mechatronics, automation, and system integration, and its editorial board includes well-established researchers and engineers from around the world. The scope of the journal covers any and all topics in robotics and mechatronics. Key technologies within this scope include actuator design, motion control, sensor design, sensor fusion, sensor networks, robot vision, audition, mechanism design, robot kinematics and dynamics, mobile robots, path planning, navigation, SLAM, robot hands, manipulators, nano/micro robots, humanoids, service and home robots, universal design, middleware, human-robot interaction, human interfaces, networked robotics, telerobotics, ubiquitous robots, learning, and intelligence. The scope also includes applications of robotics and automation and system integration in manufacturing, construction, underwater, space, agriculture, sustainability, energy conservation, ecology, rescue, hazardous environments, safety and security, dependability, medicine, and welfare.