Safe and Flexible Collaborative Assembly Processes Using Behavior Trees and Computer Vision

Minh Trinh, David Kötter, Ariane Chu, Mohamed H. Behery, G. Lakemeyer, O. Petrovic, C. Brecher
{"title":"Safe and Flexible Collaborative Assembly Processes Using Behavior Trees\n and Computer Vision","authors":"Minh Trinh, David Kötter, Ariane Chu, Mohamed H. Behery, G. Lakemeyer, O. Petrovic, C. Brecher","doi":"10.54941/ahfe1002912","DOIUrl":null,"url":null,"abstract":"Human-robot-collaboration combines the strengths of humans, such as\n flexibility and dexterity, as well as the precision and efficiency of the\n cobot. However, small and medium-sized businesses (SMBs) often lack the\n expertise to plan and execute e.g. collaborative assembly processes, which\n still highly depend on manual work. This paper introduces a framework using\n behavior trees (BTs) and computer vision (CV) to simplify this process while\n complying with safety standards. In this way, SMBs are able to benefit from\n automation and become more resilient to global competition. BTs organize the\n behavior of a system in a tree structure [1], [2]. They are modular since\n nodes can be easily added or removed. Condition nodes check if a certain\n condition holds before an action node is executed, which leads to the\n reactivity of the trees. Finally, BTs are intuitive and human-understandable\n and can therefore be used by non-experts [3]. In preliminary works, BTs have\n been implemented for planning and execution of a collaborative assembly\n process [4]. Furthermore, an extension for an efficient task sharing and\n communication between human and cobots was developed in [5] using the Human\n Action Nodes (H-nodes). The H-node is crucial for BTs to handle\n collaborative tasks and reducing idle times. This node requires the use of\n CV for the cobot to recognize, whether the human has finished her sub-task\n and continue with the next one. In order to do so, the algorithm must be\n able to detect different assembly states and map them to the corresponding\n tree nodes. A further use of CV is the detection of assembly parts such as\n screws. This enables the cobot to autonomously recognize and handle specific\n components. Collaboration is the highest level of interaction between humans\n and cobots [4] due to a shared workspace and task. Therefore, it requires\n strict safety standards that are determined in the DIN EN ISO 10218 and DIN\n ISO/TS 15066 [6], [7], which e.g. regulate speed limits for cobots. The\n internal safety functions of cobots have been successfully extended with\n sensors, cameras, and CV algorithms [8]–[10] to avoid collisions with the\n human. The latter approach uses the object detection library OpenCV [11],\n for instance. OpenCV offers a hand detection algorithm, which is pretrained\n with more than 30.000 images of hands. In addition, it allows for a high\n frame rate, which is essential for real-time safety.In this paper, CV is\n used to enhance the CoboTrees (cobots and BTs) demonstrator within the\n Cluster of Excellence ’Internet of Production’ [12]. The demonstrator\n consists of a six degree-of-freedom Doosan M1013 cobot, which is controlled\n by the Robot Operating System (ROS) and two Intel RealSense D435 depth\n cameras. The BTs are modeled using the PyTrees library [13]. Using OpenCV,\n an object and assembly state detection algorithm is implemented e.g. for use\n in the H-nodes. Since the majority of accidents between robots and humans\n occur due to clamping or crushing of the human hand [14], a hand detector is\n implemented. It is evaluated regarding its compliance with existing safety\n standards. 
The resulting safety subtree integration in ROS is shown in Fig.\n 1.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002912","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Human-robot collaboration combines the strengths of humans, such as flexibility and dexterity, with the precision and efficiency of the cobot. However, small and medium-sized businesses (SMBs) often lack the expertise to plan and execute collaborative assembly processes, which still depend heavily on manual work. This paper introduces a framework using behavior trees (BTs) and computer vision (CV) to simplify this process while complying with safety standards. In this way, SMBs are able to benefit from automation and become more resilient to global competition.

BTs organize the behavior of a system in a tree structure [1], [2]. They are modular, since nodes can easily be added or removed. Condition nodes check whether a certain condition holds before an action node is executed, which makes the trees reactive. Finally, BTs are intuitive and human-understandable and can therefore be used by non-experts [3]. In preliminary work, BTs were implemented for the planning and execution of a collaborative assembly process [4]. Furthermore, an extension for efficient task sharing and communication between humans and cobots was developed in [5] using Human Action Nodes (H-nodes). The H-node is crucial for BTs to handle collaborative tasks and reduce idle times. It requires CV so that the cobot can recognize whether the human has finished her sub-task and can continue with the next one. To do so, the algorithm must be able to detect different assembly states and map them to the corresponding tree nodes. A further use of CV is the detection of assembly parts such as screws, which enables the cobot to autonomously recognize and handle specific components.

Collaboration is the highest level of interaction between humans and cobots [4], since workspace and task are shared. It therefore requires strict safety standards, defined in DIN EN ISO 10218 and DIN ISO/TS 15066 [6], [7], which, for example, regulate speed limits for cobots. The internal safety functions of cobots have been successfully extended with sensors, cameras, and CV algorithms [8]–[10] to avoid collisions with the human. The latter approach, for instance, uses the computer vision library OpenCV [11]. OpenCV offers a hand detection algorithm pretrained on more than 30,000 images of hands. In addition, it allows for a high frame rate, which is essential for real-time safety.

In this paper, CV is used to enhance the CoboTrees (cobots and BTs) demonstrator within the Cluster of Excellence 'Internet of Production' [12]. The demonstrator consists of a six-degree-of-freedom Doosan M1013 cobot, controlled via the Robot Operating System (ROS), and two Intel RealSense D435 depth cameras. The BTs are modeled using the PyTrees library [13]. Using OpenCV, an object and assembly-state detection algorithm is implemented, e.g., for use in the H-nodes. Since the majority of accidents between robots and humans occur due to clamping or crushing of the human hand [14], a hand detector is implemented and evaluated with regard to its compliance with existing safety standards. The resulting integration of the safety subtree in ROS is shown in Fig. 1.
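To make the H-node mechanism concrete, the following is a minimal PyTrees sketch of how such a node could be written. The class name, the state strings, and the `detect_state` callback are illustrative assumptions, not the paper's actual implementation from [5]:

```python
import py_trees


class HumanActionNode(py_trees.behaviour.Behaviour):
    """Sketch of an H-node: RUNNING until CV reports the human sub-task done."""

    def __init__(self, name, expected_state, detect_state):
        super().__init__(name)
        self.expected_state = expected_state  # assembly state that marks completion
        self.detect_state = detect_state      # CV callback (assumed interface)

    def update(self):
        if self.detect_state() == self.expected_state:
            # Human finished her sub-task; the cobot may continue.
            return py_trees.common.Status.SUCCESS
        # Otherwise keep waiting; the tree stays reactive instead of blocking.
        return py_trees.common.Status.RUNNING


def build_step(detect_state):
    # Condition-then-action pattern: the H-node gates the cobot's next action.
    wait_for_human = HumanActionNode("wait_for_base_plate", "base_plate_mounted", detect_state)
    cobot_action = py_trees.behaviours.Success(name="cobot_screws_cover")  # placeholder action
    step = py_trees.composites.Sequence(name="assembly_step", memory=False)  # memory kwarg: py_trees >= 2.2
    step.add_children([wait_for_human, cobot_action])
    return step


if __name__ == "__main__":
    root = build_step(lambda: "base_plate_mounted")  # stub CV callback for testing
    root.tick_once()
    print(py_trees.display.unicode_tree(root, show_status=True))
```

Because the H-node returns RUNNING rather than blocking, the rest of the tree keeps being ticked, which is what allows idle times to be reduced: the cobot can pursue other branches while the human works.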
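The hand detector is not specified beyond its use of OpenCV; the sketch below shows one way such a check could look using OpenCV's cascade-classifier machinery. The cascade file `hand.xml`, the danger-zone rectangle, and the stop hook are assumptions, since OpenCV itself does not bundle a pretrained hand model:

```python
import time

import cv2

# Assumption: a pretrained hand cascade ("hand.xml") obtained separately;
# OpenCV supplies CascadeClassifier, not the hand model itself.
hand_cascade = cv2.CascadeClassifier("hand.xml")
cap = cv2.VideoCapture(0)  # stand-in for the RealSense D435 color stream

DANGER_ZONE = (200, 100, 250, 250)  # x, y, w, h around the clamping area (assumed)


def overlaps(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


while cap.isOpened():
    start = time.perf_counter()
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hands = hand_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if any(overlaps(tuple(hand), DANGER_ZONE) for hand in hands):
        # Hook for the safety subtree: request a protective stop.
        print("hand in danger zone -> protective stop")
    fps = 1.0 / (time.perf_counter() - start)
    # A sustained high fps is essential for real-time safety; monitor it here.

cap.release()
```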
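Fig. 1 is not reproduced here, but a safety subtree of the kind described could be composed in PyTrees roughly as follows. The fallback structure, node names, and `hand_in_zone` callback are assumptions inferred from the abstract, not the paper's actual tree:

```python
import py_trees


class HandClear(py_trees.behaviour.Behaviour):
    """Condition node: SUCCESS while no hand is detected in the danger zone."""

    def __init__(self, name, hand_in_zone):
        super().__init__(name)
        self.hand_in_zone = hand_in_zone  # CV callback (assumed interface)

    def update(self):
        if self.hand_in_zone():
            return py_trees.common.Status.FAILURE  # guard fails -> fallback fires
        return py_trees.common.Status.SUCCESS


def build_safety_subtree(hand_in_zone, motion):
    # Guarded motion: the condition is re-checked on every tick before moving.
    guarded = py_trees.composites.Sequence(name="guarded_motion", memory=False)
    guarded.add_children([HandClear("hand_clear", hand_in_zone), motion])
    # Fallback: if the guard fails, hold a protective stop instead of moving.
    stop = py_trees.behaviours.Running(name="protective_stop")  # placeholder stop action
    root = py_trees.composites.Selector(name="safety_subtree", memory=False)
    root.add_children([guarded, stop])
    return root
```

The selector-over-guarded-sequence pattern keeps the safety check reactive: every tick re-evaluates the hand condition, so a detected hand immediately preempts the motion branch.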