Dynamic scenario-enhanced diverse human motion prediction network for proactive human–robot collaboration in customized assembly tasks

Impact Factor 5.9 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Region 2 (Engineering & Technology)
Pengfei Ding, Jie Zhang, Pai Zheng, Peng Zhang, Bo Fei, Ziqi Xu
Journal: Journal of Intelligent Manufacturing
DOI: 10.1007/s10845-024-02462-8
Published: 2024-07-22 (Journal Article)
Citations: 0

Abstract

Human motion prediction is crucial for facilitating human–robot collaboration in customized assembly tasks. However, existing research primarily focuses on predicting a limited set of human motions from static global information, which fails to capture the highly stochastic nature of customized assembly operations within a given region. To address this, we propose a dynamic scenario-enhanced diverse human motion prediction network that extracts dynamic collaborative features to predict highly stochastic customized assembly operations. In this paper, we present a multi-level feature adaptation network that generates information about dynamically manipulated objects. This is accomplished by extracting multi-attribute features at different levels, including multi-channel gaze tracking, multi-scale object affordance detection, and multi-modal 6-degree-of-freedom object pose estimation. Notably, we employ gaze tracking to locate the collaborative space accurately. Furthermore, we introduce a multi-step feedback-refined diffusion sampling network specifically designed for predicting highly stochastic customized assembly operations. This network refines the outcomes of our proposed multi-weight diffusion sampling strategy to better align them with the target distribution. Additionally, we develop a feedback regulatory mechanism that incorporates ground-truth information at each prediction step to ensure the reliability of the results. Finally, the effectiveness of the proposed method is demonstrated through comparative experiments and validation on assembly tasks in a laboratory environment.
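To make the sampling idea concrete, the sketch below illustrates one plausible reading of a feedback-refined diffusion sampling loop: a standard DDPM-style reverse process whose intermediate samples are nudged toward a reference motion at each step. This is a minimal illustration under stated assumptions, not the paper's method — the `denoiser` placeholder, the linear noise schedule, and the `feedback_refine` gain rule are all hypothetical stand-ins for the trained network, learned schedule, and feedback regulatory mechanism described in the abstract.

```python
import numpy as np

def make_schedule(T, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule (an illustrative assumption).
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def denoiser(x, t):
    # Placeholder noise predictor; a real system would use a trained network
    # conditioned on scene features (gaze, affordance, object pose).
    return 0.1 * x

def feedback_refine(x, reference, gain=0.2):
    # Stand-in for the feedback regulatory mechanism: pull the intermediate
    # sample toward reference (ground-truth-like) motion information.
    return x + gain * (reference - x)

def sample(T=50, dim=16, reference=None, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(dim)          # start from pure noise
    for t in reversed(range(T)):
        eps = denoiser(x, t)              # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])   # DDPM reverse mean
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(dim)
        if reference is not None:
            x = feedback_refine(x, reference)       # feedback at every step
    return x
```

With a non-zero `gain`, the feedback term acts as a per-step correction that keeps the sample trajectory near the reference distribution while the diffusion prior supplies diversity; setting `reference=None` recovers plain ancestral sampling.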


Source journal: Journal of Intelligent Manufacturing (Engineering & Technology — Manufacturing)
CiteScore: 19.30 · Self-citation rate: 9.60% · Annual articles: 171 · Review time: 5.2 months