Cross-pollination of knowledge for object detection in domain adaptation for industrial automation

IF 2.1 Q3 ROBOTICS
Anwar Ur Rehman, Ignazio Gallo
{"title":"工业自动化领域适应性对象检测知识的交叉渗透","authors":"Anwar Ur Rehman, Ignazio Gallo","doi":"10.1007/s41315-024-00372-9","DOIUrl":null,"url":null,"abstract":"<p>Artificial Intelligence is revolutionizing industries by enhancing efficiency through real-time Object Detection (OD) applications. Utilizing advanced computer vision techniques, OD systems automate processes, analyze complex visual data, and facilitate data-driven decisions, thus increasing productivity. Domain Adaptation for OD has recently gained prominence for its ability to recognize target objects without annotations. Innovative approaches that merge traditional cross-disciplinary domain modeling with cutting-edge deep learning have become essential in addressing complex AI challenges in real-time scenarios. Unlike traditional methods, this study proposes a novel, effective Cross-Pollination of Knowledge (CPK) strategy for domain adaptation inspired by botanical processes. The CPK approach involves merging target samples with source samples at the input stage. By incorporating a random and unique selection of a few target samples, the merging process enhances object detection results efficiently in domain adaptation, supporting detectors in aligning and generalizing features with the source domain. Additionally, this work presents the new Planeat digit recognition dataset, which includes 231 images. To ensure robust comparison, we employ a self-supervised Domain Adaptation (UDA) method that simultaneously trains target and source domains using unsupervised techniques. UDA method leverages target data to identify high-confidence regions, which are then cropped and augmented, adapting UDA for effective OD. The proposed CPK approach significantly outperforms existing UDA techniques, improving mean Average Precision (mAP) by 10.9% through rigorous testing on five diverse datasets across different conditions- cross-weather, cross-camera, and synthetic-to-real. Our code is publicly available https://github.com/anwaar0/CPK-Object-Detection</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":"4 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-pollination of knowledge for object detection in domain adaptation for industrial automation\",\"authors\":\"Anwar Ur Rehman, Ignazio Gallo\",\"doi\":\"10.1007/s41315-024-00372-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Artificial Intelligence is revolutionizing industries by enhancing efficiency through real-time Object Detection (OD) applications. Utilizing advanced computer vision techniques, OD systems automate processes, analyze complex visual data, and facilitate data-driven decisions, thus increasing productivity. Domain Adaptation for OD has recently gained prominence for its ability to recognize target objects without annotations. Innovative approaches that merge traditional cross-disciplinary domain modeling with cutting-edge deep learning have become essential in addressing complex AI challenges in real-time scenarios. Unlike traditional methods, this study proposes a novel, effective Cross-Pollination of Knowledge (CPK) strategy for domain adaptation inspired by botanical processes. The CPK approach involves merging target samples with source samples at the input stage. 
By incorporating a random and unique selection of a few target samples, the merging process enhances object detection results efficiently in domain adaptation, supporting detectors in aligning and generalizing features with the source domain. Additionally, this work presents the new Planeat digit recognition dataset, which includes 231 images. To ensure robust comparison, we employ a self-supervised Domain Adaptation (UDA) method that simultaneously trains target and source domains using unsupervised techniques. UDA method leverages target data to identify high-confidence regions, which are then cropped and augmented, adapting UDA for effective OD. The proposed CPK approach significantly outperforms existing UDA techniques, improving mean Average Precision (mAP) by 10.9% through rigorous testing on five diverse datasets across different conditions- cross-weather, cross-camera, and synthetic-to-real. Our code is publicly available https://github.com/anwaar0/CPK-Object-Detection</p>\",\"PeriodicalId\":44563,\"journal\":{\"name\":\"International Journal of Intelligent Robotics and Applications\",\"volume\":\"4 1\",\"pages\":\"\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Robotics and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s41315-024-00372-9\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Robotics and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41315-024-00372-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ROBOTICS","Score":null,"Total":0}
Cited by: 0

Abstract

Artificial Intelligence is revolutionizing industries by enhancing efficiency through real-time Object Detection (OD) applications. Utilizing advanced computer vision techniques, OD systems automate processes, analyze complex visual data, and facilitate data-driven decisions, thus increasing productivity. Domain Adaptation for OD has recently gained prominence for its ability to recognize target objects without annotations. Innovative approaches that merge traditional cross-disciplinary domain modeling with cutting-edge deep learning have become essential in addressing complex AI challenges in real-time scenarios. Unlike traditional methods, this study proposes a novel, effective Cross-Pollination of Knowledge (CPK) strategy for domain adaptation inspired by botanical processes. The CPK approach merges target samples with source samples at the input stage. By incorporating a random, unique selection of a few target samples, the merging process efficiently improves object detection results in domain adaptation, helping detectors align and generalize features with the source domain. Additionally, this work presents the new Planeat digit recognition dataset, which includes 231 images. To ensure a robust comparison, we employ an unsupervised Domain Adaptation (UDA) method that simultaneously trains on the target and source domains using unsupervised techniques. The UDA method leverages target data to identify high-confidence regions, which are then cropped and augmented, adapting UDA for effective OD. The proposed CPK approach significantly outperforms existing UDA techniques, improving mean Average Precision (mAP) by 10.9% in rigorous testing on five diverse datasets under different conditions: cross-weather, cross-camera, and synthetic-to-real. Our code is publicly available at https://github.com/anwaar0/CPK-Object-Detection.
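
The abstract only sketches the CPK mechanism in prose, so a minimal illustration may help make the input-stage merging concrete. The snippet below is a hedged sketch, not the authors' implementation (their code is in the linked repository): the function name cross_pollinate, the list-based dataset representation, the empty-annotation placeholder, and the default of eight target samples are all assumptions made for illustration.

```python
"""Minimal sketch of the Cross-Pollination of Knowledge (CPK) idea described
above: merge a small, randomly chosen subset of unlabeled target-domain images
into the labeled source-domain training pool at the input stage.
All names and defaults here are illustrative assumptions, not the paper's code."""
import random
from typing import Dict, List, Tuple


def cross_pollinate(source_samples: List[Tuple[str, Dict]],
                    target_images: List[str],
                    num_target: int = 8,
                    seed: int = 0) -> List[Tuple[str, Dict]]:
    """Return the source training set with a few target images mixed in.

    source_samples: (image_path, annotation) pairs from the labeled source domain.
    target_images:  image paths from the unlabeled target domain.
    num_target:     how many target samples to "pollinate" into the source pool.
    """
    rng = random.Random(seed)
    # Random, unique selection of a few target samples (no repeats).
    chosen = rng.sample(target_images, k=min(num_target, len(target_images)))
    # Target images carry no ground-truth boxes; an empty annotation is used as
    # a placeholder here -- in practice pseudo-labels or unsupervised losses
    # would handle these samples downstream.
    merged = list(source_samples) + [(path, {"boxes": [], "labels": []})
                                     for path in chosen]
    rng.shuffle(merged)  # interleave domains so each batch can see both
    return merged
```

Under this reading of the abstract, the merged pool would simply replace the source-only training set in the detector's data loader, with the few pollinated target images nudging feature alignment toward the target domain.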

Source journal
CiteScore: 3.80
Self-citation rate: 5.90%
Annual publications: 50
Journal introduction: The International Journal of Intelligent Robotics and Applications (IJIRA) fosters the dissemination of new discoveries and novel technologies that advance developments in robotics and their broad applications. This journal provides a publication and communication platform for all robotics topics, from the theoretical fundamentals and technological advances to various applications including manufacturing, space vehicles, biomedical systems and automobiles, data-storage devices, healthcare systems, home appliances, and intelligent highways. IJIRA welcomes contributions from researchers, professionals and industrial practitioners. It publishes original, high-quality and previously unpublished research papers, brief reports, and critical reviews. Specific areas of interest include, but are not limited to:

- Advanced actuators and sensors
- Collective and social robots
- Computing, communication and control
- Design, modeling and prototyping
- Human and robot interaction
- Machine learning and intelligence
- Mobile robots and intelligent autonomous systems
- Multi-sensor fusion and perception
- Planning, navigation and localization
- Robot intelligence, learning and linguistics
- Robotic vision, recognition and reconstruction
- Bio-mechatronics and robotics
- Cloud and swarm robotics
- Cognitive and neuro robotics
- Exploration and security robotics
- Healthcare, medical and assistive robotics
- Robotics for intelligent manufacturing
- Service, social and entertainment robotics
- Space and underwater robots
- Novel and emerging applications