{"title":"人机协同操作的软材料:使机器人手动引导使用深度图反馈","authors":"G. Nicola, E. Villagrossi, N. Pedrocchi","doi":"10.1109/RO-MAN53752.2022.9900710","DOIUrl":null,"url":null,"abstract":"Human-robot co-manipulation of large but lightweight elements made by soft materials, such as fabrics, composites, sheets of paper/cardboard, is a challenging operation that presents several relevant industrial applications. As the primary limit, the force applied on the material must be unidirectional (i.e., the user can only pull the element). Its magnitude needs to be limited to avoid damages to the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. Thanks to a Convolutional Neural Network (CNN), the acquired depth image is processed to estimate the element deformation. The output of the CNN is the feedback for the robot controller to track a given set-point of deformation. The set-point tracking will avoid excessive material deformation, enabling a vision-based robot manual guidance.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback\",\"authors\":\"G. Nicola, E. Villagrossi, N. Pedrocchi\",\"doi\":\"10.1109/RO-MAN53752.2022.9900710\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human-robot co-manipulation of large but lightweight elements made by soft materials, such as fabrics, composites, sheets of paper/cardboard, is a challenging operation that presents several relevant industrial applications. As the primary limit, the force applied on the material must be unidirectional (i.e., the user can only pull the element). Its magnitude needs to be limited to avoid damages to the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. Thanks to a Convolutional Neural Network (CNN), the acquired depth image is processed to estimate the element deformation. The output of the CNN is the feedback for the robot controller to track a given set-point of deformation. 
The set-point tracking will avoid excessive material deformation, enabling a vision-based robot manual guidance.\",\"PeriodicalId\":250997,\"journal\":{\"name\":\"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RO-MAN53752.2022.9900710\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN53752.2022.9900710","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback
Abstract: Human-robot co-manipulation of large but lightweight elements made of soft materials, such as fabrics, composites, and sheets of paper or cardboard, is a challenging operation with several relevant industrial applications. The primary constraint is that the force applied to the material must be unidirectional (i.e., the user can only pull the element), and its magnitude must be limited to avoid damaging the material itself. This paper proposes using a 3D camera to track the deformation of soft materials during human-robot co-manipulation. The acquired depth image is processed by a Convolutional Neural Network (CNN) to estimate the element's deformation. The CNN output serves as feedback for the robot controller, which tracks a given deformation set-point. Tracking the set-point avoids excessive material deformation and enables vision-based robot manual guidance.
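The paper itself does not include code, but the feedback loop described in the abstract can be illustrated with a minimal sketch: a CNN regresses a scalar deformation value from a depth image, and a simple proportional law converts the error with respect to the deformation set-point into a guidance command for the robot. The network layout, the `DeformationCNN` and `guidance_step` names, the image resolution, and the gain are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' implementation): a CNN estimates a
# scalar deformation from a depth map; a proportional controller turns the
# set-point error into a velocity command along the pulling direction.
import torch
import torch.nn as nn


class DeformationCNN(nn.Module):
    """Regress a scalar deformation estimate from a single-channel depth map."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, H, W) depth image
        x = self.features(depth).flatten(1)
        return self.head(x).squeeze(-1)  # (batch,) deformation estimate


def guidance_step(model: nn.Module, depth: torch.Tensor,
                  setpoint: float, kp: float = 0.5) -> float:
    """One control cycle: estimate the deformation and return a command
    proportional to the set-point error (hypothetical controller)."""
    with torch.no_grad():
        deformation = model(depth.unsqueeze(0)).item()
    error = setpoint - deformation
    return kp * error


if __name__ == "__main__":
    model = DeformationCNN().eval()
    fake_depth = torch.rand(1, 120, 160)  # stand-in for one depth frame
    cmd = guidance_step(model, fake_depth, setpoint=0.05)
    print(f"velocity command: {cmd:.4f}")
```

In the real system the depth frames would come from the 3D camera and the command would feed the robot's motion controller; the random tensor above only keeps the sketch self-contained and runnable.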