Learning Tactile Models for Factor Graph-based Estimation

Paloma Sodhi, M. Kaess, Mustafa Mukadam, Stuart Anderson
{"title":"Learning Tactile Models for Factor Graph-based Estimation","authors":"Paloma Sodhi, M. Kaess, Mustafa Mukadam, Stuart Anderson","doi":"10.1109/ICRA48506.2021.9561011","DOIUrl":null,"url":null,"abstract":"We’re interested in the problem of estimating object states from touch during manipulation under occlusions. In this work, we address the problem of estimating object poses from touch during planar pushing. Vision-based tactile sensors provide rich, local image measurements at the point of contact. A single such measurement, however, contains limited information and multiple measurements are needed to infer latent object state. We solve this inference problem using a factor graph. In order to incorporate tactile measurements in the graph, we need local observation models that can map highdimensional tactile images onto a low-dimensional state space. Prior work has used low-dimensional force measurements or engineered functions to interpret tactile measurements. These methods, however, can be brittle and difficult to scale across objects and sensors. Our key insight is to directly learn tactile observation models that predict the relative pose of the sensor given a pair of tactile images. These relative poses can then be incorporated as factors within a factor graph. We propose a two-stage approach: first we learn local tactile observation models supervised with ground truth data, and then integrate these models along with physics and geometric factors within a factor graph optimizer. We demonstrate reliable object tracking using only tactile feedback for ~150 real-world planar pushing sequences with varying trajectories across three object shapes.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA48506.2021.9561011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 21

Abstract

We’re interested in the problem of estimating object states from touch during manipulation under occlusions. In this work, we address the problem of estimating object poses from touch during planar pushing. Vision-based tactile sensors provide rich, local image measurements at the point of contact. A single such measurement, however, contains limited information, and multiple measurements are needed to infer latent object state. We solve this inference problem using a factor graph. In order to incorporate tactile measurements in the graph, we need local observation models that can map high-dimensional tactile images onto a low-dimensional state space. Prior work has used low-dimensional force measurements or engineered functions to interpret tactile measurements. These methods, however, can be brittle and difficult to scale across objects and sensors. Our key insight is to directly learn tactile observation models that predict the relative pose of the sensor given a pair of tactile images. These relative poses can then be incorporated as factors within a factor graph. We propose a two-stage approach: first we learn local tactile observation models supervised with ground truth data, and then integrate these models along with physics and geometric factors within a factor graph optimizer. We demonstrate reliable object tracking using only tactile feedback for ~150 real-world planar pushing sequences with varying trajectories across three object shapes.
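The abstract outlines a two-stage pipeline: (1) learn an observation model that maps a pair of tactile images to the relative sensor pose, and (2) fuse those predictions as factors in a graph. Below is a minimal, hypothetical PyTorch sketch of stage 1; the architecture, layer sizes, and image resolution are assumptions for illustration, not the authors' actual network.

```python
# Hypothetical tactile observation model (stage 1): predicts the relative
# SE(2) sensor pose (dx, dy, dtheta) from a pair of tactile images.
import torch
import torch.nn as nn

class TactileRelPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Image pair is stacked along the channel axis: 2 x 3 = 6 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 3)  # (dx, dy, dtheta)

    def forward(self, img_t, img_tp1):
        x = torch.cat([img_t, img_tp1], dim=1)  # (B, 6, H, W)
        return self.head(self.encoder(x).flatten(1))

# Supervised with ground-truth relative poses, e.g. a simple MSE loss:
model = TactileRelPoseNet()
pred = model(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
loss = nn.functional.mse_loss(pred, torch.zeros(8, 3))
```

For stage 2, each predicted relative pose can enter the graph as a pairwise constraint between successive sensor poses. A minimal sketch using GTSAM's Python bindings follows; the paper's physics and geometric factors are omitted here, and the noise values and placeholder predictions are made up for illustration.

```python
# Hypothetical stage-2 sketch: fuse predicted relative poses in a factor
# graph (GTSAM). The paper's physics and geometric factors are omitted.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
meas_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.05]))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))

# Anchor the first sensor pose with a prior.
graph.add(gtsam.PriorFactorPose2(gtsam.symbol('x', 0),
                                 gtsam.Pose2(0, 0, 0), prior_noise))

# Placeholder network outputs (dx, dy, dtheta) for successive image pairs.
rel_poses = [(0.010, 0.000, 0.020), (0.012, -0.001, 0.015)]
for i, (dx, dy, dth) in enumerate(rel_poses):
    graph.add(gtsam.BetweenFactorPose2(gtsam.symbol('x', i),
                                       gtsam.symbol('x', i + 1),
                                       gtsam.Pose2(dx, dy, dth), meas_noise))

# Initialize all poses at the origin and optimize.
initial = gtsam.Values()
for i in range(len(rel_poses) + 1):
    initial.insert(gtsam.symbol('x', i), gtsam.Pose2(0, 0, 0))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(gtsam.symbol('x', len(rel_poses))))
```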