{"title":"Cross-pollination of knowledge for object detection in domain adaptation for industrial automation","authors":"Anwar Ur Rehman, Ignazio Gallo","doi":"10.1007/s41315-024-00372-9","DOIUrl":null,"url":null,"abstract":"<p>Artificial Intelligence is revolutionizing industries by enhancing efficiency through real-time Object Detection (OD) applications. Utilizing advanced computer vision techniques, OD systems automate processes, analyze complex visual data, and facilitate data-driven decisions, thus increasing productivity. Domain Adaptation for OD has recently gained prominence for its ability to recognize target objects without annotations. Innovative approaches that merge traditional cross-disciplinary domain modeling with cutting-edge deep learning have become essential in addressing complex AI challenges in real-time scenarios. Unlike traditional methods, this study proposes a novel, effective Cross-Pollination of Knowledge (CPK) strategy for domain adaptation inspired by botanical processes. The CPK approach involves merging target samples with source samples at the input stage. By incorporating a random and unique selection of a few target samples, the merging process enhances object detection results efficiently in domain adaptation, supporting detectors in aligning and generalizing features with the source domain. Additionally, this work presents the new Planeat digit recognition dataset, which includes 231 images. To ensure robust comparison, we employ a self-supervised Domain Adaptation (UDA) method that simultaneously trains target and source domains using unsupervised techniques. UDA method leverages target data to identify high-confidence regions, which are then cropped and augmented, adapting UDA for effective OD. The proposed CPK approach significantly outperforms existing UDA techniques, improving mean Average Precision (mAP) by 10.9% through rigorous testing on five diverse datasets across different conditions- cross-weather, cross-camera, and synthetic-to-real. Our code is publicly available https://github.com/anwaar0/CPK-Object-Detection</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":"4 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Robotics and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41315-024-00372-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0
Abstract
Artificial Intelligence is revolutionizing industries by enhancing efficiency through real-time Object Detection (OD) applications. Utilizing advanced computer vision techniques, OD systems automate processes, analyze complex visual data, and facilitate data-driven decisions, thus increasing productivity. Domain Adaptation for OD has recently gained prominence for its ability to recognize target objects without annotations. Innovative approaches that merge traditional cross-disciplinary domain modeling with cutting-edge deep learning have become essential in addressing complex AI challenges in real-time scenarios. Unlike traditional methods, this study proposes a novel, effective Cross-Pollination of Knowledge (CPK) strategy for domain adaptation inspired by botanical processes. The CPK approach merges target samples with source samples at the input stage. By incorporating a random, non-repeating selection of a few target samples, this merging efficiently improves object detection results in domain adaptation, helping detectors align and generalize features with respect to the source domain. Additionally, this work presents the new Planeat digit recognition dataset, which includes 231 images. To ensure a robust comparison, we employ an Unsupervised Domain Adaptation (UDA) method that trains on the target and source domains simultaneously using unsupervised techniques. The UDA method leverages target data to identify high-confidence regions, which are then cropped and augmented, adapting UDA for effective OD. The proposed CPK approach significantly outperforms existing UDA techniques, improving mean Average Precision (mAP) by 10.9% in rigorous testing on five diverse datasets under different conditions: cross-weather, cross-camera, and synthetic-to-real. Our code is publicly available at https://github.com/anwaar0/CPK-Object-Detection.
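The abstract describes the CPK merging step only at a high level, so the following is a minimal Python sketch of one plausible reading: a small, randomly chosen, non-repeating subset of unlabeled target-domain images is mixed into each source-domain training batch at the input stage. The class name `CrossPollinatedBatches`, the `batch_size` and `target_ratio` parameters, and the batching logic are illustrative assumptions, not details taken from the paper or its repository.

```python
# Sketch of the input-stage "cross-pollination" idea from the abstract:
# each source batch is augmented with a few randomly picked target images.
# Names and the target_ratio hyper-parameter are assumptions for illustration.
import random
from typing import Any, Iterator, List


class CrossPollinatedBatches:
    """Yields source-domain batches mixed with a few unlabeled target images."""

    def __init__(self, source_images: List[Any], target_images: List[Any],
                 batch_size: int = 8, target_ratio: float = 0.25, seed: int = 0):
        self.source_images = source_images
        self.target_images = target_images
        self.batch_size = batch_size
        # Number of target samples merged into each batch (assumed hyper-parameter).
        self.n_target = max(1, int(batch_size * target_ratio))
        self.n_source = max(1, batch_size - self.n_target)
        self.rng = random.Random(seed)

    def __iter__(self) -> Iterator[List[Any]]:
        source = self.source_images[:]
        self.rng.shuffle(source)
        for start in range(0, len(source), self.n_source):
            src_part = source[start:start + self.n_source]
            # Random, non-repeating pick of a few target samples for this batch.
            tgt_part = self.rng.sample(
                self.target_images,
                min(self.n_target, len(self.target_images)))
            batch = src_part + tgt_part
            self.rng.shuffle(batch)
            yield batch


# Usage: every batch exposes the detector to target-domain appearance
# statistics while it is trained on annotated source images.
# batches = CrossPollinatedBatches(source_imgs, target_imgs, batch_size=8)
# for batch in batches:
#     train_step(batch)
```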
Journal Introduction:
The International Journal of Intelligent Robotics and Applications (IJIRA) fosters the dissemination of new discoveries and novel technologies that advance developments in robotics and their broad applications. This journal provides a publication and communication platform for all robotics topics, from the theoretical fundamentals and technological advances to various applications including manufacturing, space vehicles, biomedical systems and automobiles, data-storage devices, healthcare systems, home appliances, and intelligent highways. IJIRA welcomes contributions from researchers, professionals and industrial practitioners. It publishes original, high-quality and previously unpublished research papers, brief reports, and critical reviews. Specific areas of interest include, but are not limited to:
- Advanced actuators and sensors
- Collective and social robots
- Computing, communication and control
- Design, modeling and prototyping
- Human and robot interaction
- Machine learning and intelligence
- Mobile robots and intelligent autonomous systems
- Multi-sensor fusion and perception
- Planning, navigation and localization
- Robot intelligence, learning and linguistics
- Robotic vision, recognition and reconstruction
- Bio-mechatronics and robotics
- Cloud and swarm robotics
- Cognitive and neuro robotics
- Exploration and security robotics
- Healthcare, medical and assistive robotics
- Robotics for intelligent manufacturing
- Service, social and entertainment robotics
- Space and underwater robots
- Novel and emerging applications