Chenhao Yang, Donghui Zhao, Junyou Yang, Qianlong Wang, Ruoqian Wang
{"title":"基于视觉的喂食机器人喂食意图识别","authors":"Chenhao Yang, Donghui Zhao, Junyou Yang, Qianlong Wang, Ruoqian Wang","doi":"10.1117/12.3030487","DOIUrl":null,"url":null,"abstract":"With the arrival of population aging, the number of disabled people has increased, and existing populations with physical impairments. The dining problem is one of the most important problems they must solve. The feeding robot system has been introduced into the auxiliary nursing scene to reduce the burden of nursing staff. Multiple types of feeding robots have been developed. However most existing feeding robot systems still suffer from issues related to insufficient intelligence and convenience, with limited attention to user intention. To address this issue, we propose a vision-based algorithm for the interaction between the robot and users. This method effectively identifies user intentions for dining, menu selection, and chewing dynamics during meals. It enables the robot to operate more intelligently by the user’s intention without additional wearable devices, significantly enhancing user comfort and convenience. We conducted a series of experiments on dining intentions, selection menu intentions, and chewing dynamics during meals. The experimental results show that the average recognition rate of users’ dining intention is 98%, and the average recognition rate of chewing dynamics is 86.53%. This contribution presents an interactive approach for individuals without mobility, enhancing the intelligence of the feeding robot. 
It holds promise for future applications in nursing scenarios.","PeriodicalId":198425,"journal":{"name":"Other Conferences","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visual-based feeding intention recognition for feeding robots\",\"authors\":\"Chenhao Yang, Donghui Zhao, Junyou Yang, Qianlong Wang, Ruoqian Wang\",\"doi\":\"10.1117/12.3030487\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the arrival of population aging, the number of disabled people has increased, and existing populations with physical impairments. The dining problem is one of the most important problems they must solve. The feeding robot system has been introduced into the auxiliary nursing scene to reduce the burden of nursing staff. Multiple types of feeding robots have been developed. However most existing feeding robot systems still suffer from issues related to insufficient intelligence and convenience, with limited attention to user intention. To address this issue, we propose a vision-based algorithm for the interaction between the robot and users. This method effectively identifies user intentions for dining, menu selection, and chewing dynamics during meals. It enables the robot to operate more intelligently by the user’s intention without additional wearable devices, significantly enhancing user comfort and convenience. We conducted a series of experiments on dining intentions, selection menu intentions, and chewing dynamics during meals. The experimental results show that the average recognition rate of users’ dining intention is 98%, and the average recognition rate of chewing dynamics is 86.53%. This contribution presents an interactive approach for individuals without mobility, enhancing the intelligence of the feeding robot. 
It holds promise for future applications in nursing scenarios.\",\"PeriodicalId\":198425,\"journal\":{\"name\":\"Other Conferences\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Other Conferences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.3030487\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Other Conferences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3030487","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Visual-based feeding intention recognition for feeding robots
With the aging of the population, the number of people with disabilities has increased, adding to existing populations with physical impairments. Dining is one of the most important problems they must solve. Feeding robot systems have been introduced into assistive nursing settings to reduce the burden on nursing staff, and multiple types of feeding robots have been developed. However, most existing feeding robot systems still suffer from insufficient intelligence and convenience, paying limited attention to user intention. To address this issue, we propose a vision-based algorithm for interaction between the robot and the user. The method effectively identifies the user's intentions for dining and menu selection, as well as chewing dynamics during meals. It enables the robot to operate more intelligently according to the user's intention without additional wearable devices, significantly enhancing user comfort and convenience. We conducted a series of experiments on dining intention, menu-selection intention, and chewing dynamics during meals. The results show that the average recognition rate of users' dining intention is 98%, and the average recognition rate of chewing dynamics is 86.53%. This contribution presents an interactive approach for individuals with limited mobility, enhancing the intelligence of the feeding robot. It holds promise for future applications in nursing scenarios.
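The abstract does not describe the underlying vision algorithm. One common way chewing dynamics are tracked in vision-based systems is via a mouth aspect ratio (MAR) computed from facial landmarks, counting open-close cycles over frames. The sketch below is an illustrative assumption along those lines, not the authors' method; the landmark names, MAR formula, and threshold are all hypothetical.

```python
# Illustrative sketch of chewing-dynamics detection from facial landmarks.
# Not the paper's algorithm: landmark keys, the MAR formula, and the 0.5
# threshold are assumptions for demonstration only.
from math import dist


def mouth_aspect_ratio(landmarks):
    """Ratio of vertical mouth opening to horizontal mouth width.

    `landmarks` maps 'top', 'bottom', 'left', 'right' to (x, y) points on
    the inner-lip contour. A higher MAR means the mouth is more open.
    """
    vertical = dist(landmarks["top"], landmarks["bottom"])
    horizontal = dist(landmarks["left"], landmarks["right"])
    return vertical / horizontal


def count_chews(mar_sequence, open_threshold=0.5):
    """Count open-to-close transitions in a per-frame MAR sequence.

    Each crossing from above to below the threshold is treated as one
    chewing cycle (a simplifying assumption).
    """
    chews = 0
    was_open = False
    for mar in mar_sequence:
        is_open = mar > open_threshold
        if was_open and not is_open:
            chews += 1
        was_open = is_open
    return chews
```

In practice the per-frame landmarks would come from a face-landmark detector; here the functions are kept pure so the counting logic is easy to verify, e.g. `count_chews([0.2, 0.6, 0.7, 0.3, 0.6, 0.2])` detects two chewing cycles.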