Soft objects grasping evaluation using a novel VCFN-YOLOv8 framework

IF 5.4
Guoshun Cui, Shiwei Su, Hanyu Gao, Kai Zhuo, Kun Yang, Hang Wu
{"title":"基于VCFN-YOLOv8框架的软目标抓取评估","authors":"Guoshun Cui ,&nbsp;Shiwei Su ,&nbsp;Hanyu Gao ,&nbsp;Kai Zhuo ,&nbsp;Kun Yang ,&nbsp;Hang Wu","doi":"10.1016/j.birob.2025.100232","DOIUrl":null,"url":null,"abstract":"<div><div>Humans can quickly perform adaptive grasping of soft objects by using visual perception and judgment of the grasping angle, which helps prevent the objects from sliding or deforming excessively. However, this easy task remains a challenge for robots. The grasping states of soft objects can be categorized into four types: sliding, appropriate, excessive and extreme. Effective recognition of different states is crucial for achieving adaptive grasping of soft objects. To address this problem, a novel visual-curvature fusion network based on YOLOv8 (VCFN-YOLOv8) is proposed to evaluate the grasping state of various soft objects. In this framework, the robotic arm equipped with the wrist camera and the curvature sensor is established to perform generalization grasping and lifting experiments on 11 different objects. Meanwhile, the dataset is built for training and testing the proposed method. The results show a classification accuracy of 99.51% on four different grasping states. A series of grasping evaluation experiments is conducted based on the proposed framework, along with tests for the model’s generality. The experiment results demonstrate that VCFN-YOLOv8 is accurate and efficient in evaluating the grasping state of soft objects and shows a certain degree of generalization for non-soft objects. It can be widely applied in fields such as automatic control, adaptive grasping and surgical robot.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100232"},"PeriodicalIF":5.4000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Soft objects grasping evaluation using a novel VCFN-YOLOv8 framework\",\"authors\":\"Guoshun Cui ,&nbsp;Shiwei Su ,&nbsp;Hanyu Gao ,&nbsp;Kai Zhuo ,&nbsp;Kun Yang ,&nbsp;Hang Wu\",\"doi\":\"10.1016/j.birob.2025.100232\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Humans can quickly perform adaptive grasping of soft objects by using visual perception and judgment of the grasping angle, which helps prevent the objects from sliding or deforming excessively. However, this easy task remains a challenge for robots. The grasping states of soft objects can be categorized into four types: sliding, appropriate, excessive and extreme. Effective recognition of different states is crucial for achieving adaptive grasping of soft objects. To address this problem, a novel visual-curvature fusion network based on YOLOv8 (VCFN-YOLOv8) is proposed to evaluate the grasping state of various soft objects. In this framework, the robotic arm equipped with the wrist camera and the curvature sensor is established to perform generalization grasping and lifting experiments on 11 different objects. Meanwhile, the dataset is built for training and testing the proposed method. The results show a classification accuracy of 99.51% on four different grasping states. A series of grasping evaluation experiments is conducted based on the proposed framework, along with tests for the model’s generality. 
The experiment results demonstrate that VCFN-YOLOv8 is accurate and efficient in evaluating the grasping state of soft objects and shows a certain degree of generalization for non-soft objects. It can be widely applied in fields such as automatic control, adaptive grasping and surgical robot.</div></div>\",\"PeriodicalId\":100184,\"journal\":{\"name\":\"Biomimetic Intelligence and Robotics\",\"volume\":\"5 3\",\"pages\":\"Article 100232\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2025-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomimetic Intelligence and Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667379725000233\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetic Intelligence and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667379725000233","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Humans can quickly perform adaptive grasping of soft objects by using visual perception and judgment of the grasping angle, which helps prevent the objects from sliding or deforming excessively. However, this easy task remains a challenge for robots. The grasping states of soft objects can be categorized into four types: sliding, appropriate, excessive and extreme. Effective recognition of different states is crucial for achieving adaptive grasping of soft objects. To address this problem, a novel visual-curvature fusion network based on YOLOv8 (VCFN-YOLOv8) is proposed to evaluate the grasping state of various soft objects. In this framework, the robotic arm equipped with the wrist camera and the curvature sensor is established to perform generalization grasping and lifting experiments on 11 different objects. Meanwhile, the dataset is built for training and testing the proposed method. The results show a classification accuracy of 99.51% on four different grasping states. A series of grasping evaluation experiments is conducted based on the proposed framework, along with tests for the model’s generality. The experiment results demonstrate that VCFN-YOLOv8 is accurate and efficient in evaluating the grasping state of soft objects and shows a certain degree of generalization for non-soft objects. It can be widely applied in fields such as automatic control, adaptive grasping and surgical robot.
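The abstract does not detail the internals of VCFN-YOLOv8, but its core idea, fusing a wrist-camera image with a curvature-sensor reading and classifying the grasp into one of the four states (sliding, appropriate, excessive, extreme), can be sketched as below. This is a minimal illustrative sketch: the stand-in CNN backbone, layer sizes, and input shapes are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a visual-curvature fusion classifier. The layer choices
# and the plain CNN used in place of the YOLOv8 backbone are illustrative
# assumptions; the paper's actual VCFN-YOLOv8 architecture may differ.
import torch
import torch.nn as nn

GRASP_STATES = ["sliding", "appropriate", "excessive", "extreme"]

class VisualCurvatureFusionClassifier(nn.Module):
    def __init__(self, curvature_dim: int = 1, num_states: int = len(GRASP_STATES)):
        super().__init__()
        # Image branch: small CNN standing in for the visual (YOLOv8-style) backbone.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Curvature branch: small MLP over the flexible-sensor reading(s).
        self.curvature_branch = nn.Sequential(
            nn.Linear(curvature_dim, 16), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings and predict one of four grasp states.
        self.head = nn.Linear(32 + 16, num_states)

    def forward(self, image: torch.Tensor, curvature: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_branch(image), self.curvature_branch(curvature)], dim=1
        )
        return self.head(fused)

# Usage: one wrist-camera frame plus one curvature reading -> a grasp-state label.
model = VisualCurvatureFusionClassifier()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1))
print(GRASP_STATES[logits.argmax(dim=1).item()])
```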