A comparison of human skeleton extractors for real-time human-robot interaction

Wanchen Li, R. Passama, Vincent Bonnet, A. Cherubini

2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)
Published: 2023-06-05
DOI: 10.1109/ARSO56563.2023.10187411
Citations: 1
Abstract
Modern industrial manufacturing procedures increasingly integrate physical Human-Robot Interaction (pHRI) scenarios. This requires robots to understand human intentions for effective and safe cooperation. Vision is the most commonly used sensor modality for robots to perceive human behavior. In this paper, we compare several vision-based human skeleton extraction frameworks to provide guidance for the design of human-robot interaction applications. We run each skeleton extractor on a video of a person working with the help of a dual-arm collaborative robot, in a scenario simulating a typical human-robot workspace. By comparing the extractors' outcomes, we justify our choices according to pHRI constraints.
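To make the comparison step concrete, below is a minimal sketch of one common way to compare two skeleton extractors' outputs on the same frame: the mean per-joint position error (MPJPE), i.e. the average Euclidean distance between corresponding joint estimates. The joint layout and the coordinate values are illustrative assumptions for this sketch, not data or a metric taken from the paper itself.

```python
# Hedged sketch: comparing two skeleton extractors' 2D joint estimates for one
# video frame via mean per-joint position error (MPJPE). All coordinates below
# are made-up illustrative values, not results from the paper.
import math

def mpjpe(skel_a, skel_b):
    """Mean Euclidean distance (in pixels) between corresponding joints."""
    assert len(skel_a) == len(skel_b), "skeletons must share the same joint set"
    total = 0.0
    for (xa, ya), (xb, yb) in zip(skel_a, skel_b):
        total += math.hypot(xa - xb, ya - yb)
    return total / len(skel_a)

# Illustrative 2D estimates (e.g., head, shoulders, elbows) from two extractors
# run on the same frame.
extractor_1 = [(320, 100), (280, 160), (360, 160), (250, 230), (390, 230)]
extractor_2 = [(322, 104), (283, 158), (357, 163), (254, 227), (386, 233)]

print(round(mpjpe(extractor_1, extractor_2), 2))  # → 4.46
```

In practice such a per-frame metric would be averaged over the whole video, and each extractor compared against a common reference (e.g., a motion-capture ground truth) rather than against one another directly.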