Learning Robotic Grasp using Visual-Tactile Model

Shamin Varkey, Chikku Achy
DOI: 10.1109/ICCSDET.2018.8821091
Published in: 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET)
Publication date: 2018-12-01
Citations: 2

Abstract

Grasping an object, for humans, depends greatly on feedback from tactile receptors. Nevertheless, recent work on robotic grasping has been built only from visual input, in which case the feedback available after initiating contact cannot easily be exploited. This paper presents a survey of how tactile information can be used by a robot to learn to adjust its grasp proficiently. Additionally, an action-conditional model that learns grasping strategies from raw visual-tactile data is presented. The model iteratively selects the most favorable actions to execute the grasp. The approach requires neither analytical modeling of contact forces nor calibration of the tactile sensors, thereby reducing the engineering effort needed to obtain a competent grasp strategy. The model, deployed on a two-finger gripper with a high-resolution tactile sensor on each finger, was trained with data from various grasping trials. After rigorous testing, the approach was found to have learned useful and interpretable grasping behaviors. In conclusion, the action selections made by the model were studied, and the model was found to have learned suitable and apt grasping behaviors.
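The core loop the abstract describes, an action-conditional model that scores candidate actions from raw visual-tactile input and iteratively picks the most favorable one, can be illustrated with a minimal sketch. This is not the paper's implementation: the scoring function below is a toy stand-in for the trained network, and all names, shapes, and the sampling-based action selection are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_grasp_success(visual, tactile, action):
    """Stand-in for the learned action-conditional model: given the current
    visual and tactile observations plus a candidate action, return a
    predicted probability that the grasp succeeds. A toy heuristic replaces
    the trained network here: favor contact (mean tactile activation)
    while penalizing large gripper motions."""
    contact = tactile.mean()
    effort = np.linalg.norm(action)
    return 1.0 / (1.0 + np.exp(-(contact - 0.5 * effort)))

def select_action(visual, tactile, n_candidates=64):
    """Iteratively select the most favorable action: sample candidate
    end-effector adjustments and keep the one the model scores highest."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, 3))
    scores = np.array([predict_grasp_success(visual, tactile, a)
                       for a in candidates])
    return candidates[int(np.argmax(scores))]

# Example with placeholder sensor data: one camera frame and two
# high-resolution tactile pads (one per finger), shapes chosen arbitrarily.
visual = rng.random((64, 64))
tactile = rng.random((2, 16, 16))
action = select_action(visual, tactile)
print("chosen action:", action)
```

In the paper's setting the scoring function would be learned from the grasping-trial data rather than hand-coded, but the select-best-scored-action structure is the same: no contact-force model or sensor calibration enters the loop, only raw observations and candidate actions.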