Annual Meeting of the IEEE Industry Applications Society — Latest Publications

Uncertainty Estimation for Safe Human-Robot Collaboration using Conservation Measures
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2022-09-01 DOI: 10.48550/arXiv.2209.00467
W.-J. Baek, C. Ledermann, T. Kröger
Abstract: We present an online, data-driven uncertainty quantification method to enable the development of safe human-robot collaboration applications. Safety and risk assessment of systems are strongly correlated with the accuracy of measurements: distinctive parameters are often not directly accessible via known models and must therefore be measured. However, measurements generally suffer from uncertainties due to the limited performance of sensors, unknown environmental disturbances, or humans. In this work, we quantify these measurement uncertainties by making use of conservation measures, which are quantitative, system-specific properties that are constant over time, space, or other state-space dimensions. The key idea of our method lies in the immediate evaluation of incoming data during run-time against conservation equations. In particular, we estimate violations of a-priori known, domain-specific conservation properties and consider them the consequence of measurement uncertainties. We validate our method on a use case in the context of human-robot collaboration, highlighting the importance of our contribution for the successful development of safe robot systems under real-world conditions, e.g., in industrial environments. In addition, we show how the obtained uncertainty values can be mapped directly onto arbitrary safety limits (e.g., ISO 13849), which allows compliance with safety standards to be monitored during run-time.
Citations: 3
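The core idea above — treating run-time violations of a known conserved quantity as a proxy for measurement uncertainty — can be illustrated with a minimal sketch. This is not the authors' implementation; the conserved quantity (pendulum energy), the signal model, and the residual-spread score are all illustrative assumptions.

```python
import numpy as np

def energy(theta, omega, m=1.0, l=1.0, g=9.81):
    """Conserved quantity: total mechanical energy of a frictionless pendulum,
    computed from measured angle theta and angular velocity omega."""
    return 0.5 * m * (l * omega) ** 2 + m * g * l * (1.0 - np.cos(theta))

def uncertainty_from_conservation(theta_meas, omega_meas):
    """Score measurement uncertainty as the spread of the conservation residual:
    for perfect measurements the energy is constant and the residual vanishes."""
    e = energy(theta_meas, omega_meas)
    residual = e - e.mean()          # violations of the conservation property
    return residual.std()            # scalar uncertainty score

# Small-angle pendulum trajectory as the "true" signal.
rng = np.random.default_rng(0)
w = np.sqrt(9.81)
t = np.linspace(0.0, 10.0, 500)
theta = 0.1 * np.cos(w * t)
omega = -0.1 * w * np.sin(w * t)

clean = uncertainty_from_conservation(theta, omega)
noisy = uncertainty_from_conservation(theta + rng.normal(0, 0.01, t.size),
                                      omega + rng.normal(0, 0.01, t.size))
assert noisy > clean   # noisier sensors -> larger conservation violations
```

The resulting scalar could then be compared against a fixed safety limit at run-time, which is the monitoring step the abstract describes.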
Gestural and Touchscreen Interaction for Human-Robot Collaboration: a Comparative Study
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2022-07-08 DOI: 10.48550/arXiv.2207.03783
Antonino Bongiovanni, A. Luca, Luna Gava, Lucrezia Grassi, Marta Lagomarsino, M. Lapolla, Antonio Marino, Patrick Roncagliolo, Simone Macciò, A. Carfì, F. Mastrogiovanni
Abstract: Close human-robot interaction (HRI), especially in industrial scenarios, has been extensively investigated for the advantages of combining human and robot skills. For effective HRI, the validity of currently available human-machine communication media and tools should be questioned, and new communication modalities should be explored. This article proposes a modular architecture that allows human operators to interact with robots through different modalities. In particular, we implemented the architecture to handle gestural and touchscreen input, using a smartwatch and a tablet, respectively. Finally, we performed a comparative user-experience study of these two modalities.
Citations: 3
Real2Sim or Sim2Real: Robotics Visual Insertion using Deep Reinforcement Learning and Real2Sim Policy Adaptation
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2022-06-06 DOI: 10.48550/arXiv.2206.02679
Yiwen Chen, Xue-Yong Li, Sheng Guo, Xiang Yao Ng, Marcelo H ANG Jr
Abstract: Reinforcement learning is widely used in robotics tasks such as insertion and grasping. However, without a practical sim2real strategy, a policy trained in simulation can fail on the real task. Sim2real strategies have also been widely researched, but most of these methods rely on heavy image rendering, domain-randomization training, or tuning. In this work, we solve the insertion task using a purely visual reinforcement learning solution with minimal infrastructure requirements. We also propose a novel sim2real strategy, Real2Sim, which provides a novel and easier route to policy adaptation, and we discuss the advantages of Real2Sim compared with Sim2Real.
Citations: 3
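The inversion the title refers to can be sketched as a deployment pipeline: instead of making the simulator look real (sim2real), real observations are mapped into the simulation domain before being fed to the unchanged sim-trained policy. Everything below is a hypothetical stand-in — the adapter in the paper would be learned, and the policy and observation shapes are invented for illustration.

```python
import numpy as np

def sim_policy(obs):
    """Stand-in for a policy trained purely in simulation; it only ever
    expects sim-domain observations."""
    return float(np.clip(obs.mean(), -1.0, 1.0))  # toy scalar action

def real2sim_adapter(real_obs, gain=0.5, bias=-0.2):
    """Hypothetical real -> sim mapping (in practice a learned model, e.g. an
    image translator); here a fixed affine transform for illustration."""
    return gain * real_obs + bias

def act(real_obs):
    # Deployment loop: adapt the observation first, then reuse the sim policy
    # without any retraining on real data.
    return sim_policy(real2sim_adapter(real_obs))

real_obs = np.ones(4)        # pretend feature vector from the real robot
action = act(real_obs)
assert -1.0 <= action <= 1.0
```

The design appeal the abstract claims follows from this structure: adaptation happens on the observation side at inference time, so the policy itself never needs fine-tuning on the real system.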
YOLOPose: Transformer-based Multi-Object 6D Pose Estimation using Keypoint Regression
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2022-05-05 DOI: 10.48550/arXiv.2205.02536
A. Amini, Arul Selvam Periyasamy, Sven Behnke
Abstract: 6D object pose estimation is a crucial prerequisite for autonomous robot manipulation applications. The state-of-the-art models for pose estimation are convolutional neural network (CNN)-based. Lately, Transformers, an architecture originally proposed for natural language processing, have been achieving state-of-the-art results in many computer vision tasks as well. Equipped with the multi-head self-attention mechanism, Transformers enable simple single-stage end-to-end architectures for jointly learning object detection and 6D object pose estimation. In this work, we propose YOLOPose (short for You Only Look Once Pose estimation), a Transformer-based multi-object 6D pose estimation method based on keypoint regression. In contrast to the standard heatmap approach to predicting keypoints in an image, we regress the keypoints directly. Additionally, we employ a learnable orientation estimation module to predict the orientation from the keypoints. Together with a separate translation estimation module, our model is end-to-end differentiable. Our method is suitable for real-time applications and achieves results comparable to state-of-the-art methods.
Citations: 14
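The distinction the abstract draws — direct keypoint regression versus the standard heatmap decoding — can be shown in a few lines. This is a conceptual sketch, not the paper's architecture: heatmap decoding snaps a keypoint to the argmax grid cell, while a regression head outputs continuous (sub-pixel) coordinates; the linear head and all shapes here are invented for illustration.

```python
import numpy as np

def decode_heatmap(hm):
    """Keypoint = argmax cell of a predicted heatmap -> integer pixel coords."""
    idx = np.unravel_index(np.argmax(hm), hm.shape)
    return np.array(idx, dtype=float)

def direct_regression(features, W, b):
    """Keypoint = regression head on features -> continuous coordinates."""
    return features @ W + b

true_kp = np.array([12.7, 40.3])   # sub-pixel ground-truth location (row, col)

# Heatmap path: an ideal Gaussian bump around the true keypoint on a 64x64 grid.
yy, xx = np.mgrid[0:64, 0:64]
hm = np.exp(-((yy - true_kp[0]) ** 2 + (xx - true_kp[1]) ** 2) / 8.0)
kp_hm = decode_heatmap(hm)          # quantized to the nearest cell: [13., 40.]

# Regression path: an ideal head can output the sub-pixel target exactly.
feats = np.array([1.0, 0.5])
W = np.array([[12.7, 40.3], [0.0, 0.0]])
kp_reg = direct_regression(feats, W, np.zeros(2))

assert np.allclose(kp_reg, true_kp)            # continuous output, exact
assert np.linalg.norm(kp_hm - true_kp) > 0.0   # grid quantization error remains
```

Avoiding the heatmap stage is also what keeps the pipeline end-to-end differentiable, since the argmax in heatmap decoding is not.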
Learning Sequential Latent Variable Models from Multimodal Time Series Data
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2022-04-21 DOI: 10.1007/978-3-031-22216-0_35
Oliver Limoyo, T. Ablett, Jonathan Kelly
Citations: 2
On the Evaluation of RGB-D-based Categorical Pose and Shape Estimation
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2022-02-21 DOI: 10.1007/978-3-031-22216-0_25
Leonard Bruns, P. Jensfelt
Citations: 3
Sensor-Based Navigation Using Hierarchical Reinforcement Learning
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2021-08-30 DOI: 10.1007/978-3-031-22216-0_37
Christoph Gebauer, Nils Dengler, Maren Bennewitz
Citations: 1
Automatic Grasp Pose Generation for Parallel Jaw Grippers
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2021-04-23 DOI: 10.1007/978-3-030-95892-3_45
Kilian Kleeberger, Florian Roth, Richard Bormann, Marco F. Huber
Citations: 3
Robotic Cooking Through Pose Extraction from Human Natural Cooking Using OpenPose
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2021-04-06 DOI: 10.17863/CAM.66627
Dylan Danno, Simon Hauser, F. Iida
Citations: 5
Probabilistic Collision Constraint for Motion Planning in Dynamic Environments
Annual Meeting of the IEEE Industry Applications Society Pub Date : 2021-04-04 DOI: 10.1007/978-3-030-95892-3_11
Antony Thomas, F. Mastrogiovanni, M. Baglietto
Citations: 4