Taegeon Kim, Wei-Chih Chern, Seokhwan Kim, Vijayan K. Asari, Hongjo Kim
Journal: Computers in Industry, Volume 171, Article 104335
DOI: 10.1016/j.compind.2025.104335
Publication date: 2025-07-09
Journal rank: JCR Q1, Computer Science, Interdisciplinary Applications (Impact Factor 9.1)
URL: https://www.sciencedirect.com/science/article/pii/S0166361525001009
Moving-feature-driven label propagation for training data generation from target domains
Because of their sensitivity to data distribution shifts, deep learning models often suffer performance degradation when applied to construction sites that differ from the source domain. Although methods such as transfer learning, domain adaptation, and synthetic data generation have been explored to improve generalization, collecting and annotating data from new target domains remains a labor-intensive bottleneck. This study presents a self-training-based framework that generates training data for construction object detection in unlabeled target domains. The method identifies moving objects using optical flow estimation, propagates class labels through iterative self-training, and synthesizes realistic training images via image inpainting and copy-paste augmentation. Experimental results from four visually distinct construction scenes demonstrate that the proposed method significantly improves detection performance without relying on manually labeled target data. These findings advance automated, scalable domain adaptation techniques for vision-based construction monitoring.
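Two of the pipeline's building blocks can be illustrated with a minimal numpy sketch. Note the simplifications: the paper uses optical flow estimation to find moving objects, whereas the stand-in below uses plain frame differencing, and `motion_mask` and `copy_paste` are hypothetical helper names, not the authors' code.

```python
import numpy as np

def motion_mask(prev_frame, next_frame, threshold=25):
    """Binary mask of pixels that changed between two grayscale frames.

    Simplified stand-in for the paper's optical-flow step: frame
    differencing flags pixels whose intensity changed noticeably.
    """
    # Cast to a signed type so uint8 subtraction does not wrap around.
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def copy_paste(background, patch, mask, top, left):
    """Paste the masked pixels of `patch` onto `background` at (top, left).

    Mimics copy-paste augmentation: object pixels (mask == True) overwrite
    the background; everything else is left untouched.
    """
    out = background.copy()
    h, w = patch.shape[:2]
    region = out[top:top + h, left:left + w]   # view into the copy
    region[mask] = patch[mask]                 # in-place masked paste
    return out

# Toy example: a bright 2x2 "object" appears between two 6x6 frames.
prev_f = np.zeros((6, 6), dtype=np.uint8)
next_f = np.zeros((6, 6), dtype=np.uint8)
next_f[2:4, 2:4] = 200                         # moving object in frame 2

mask = motion_mask(prev_f, next_f)             # True only where it moved
augmented = copy_paste(np.full((6, 6), 50, dtype=np.uint8),
                       next_f[2:4, 2:4], mask[2:4, 2:4], top=0, left=0)
```

In the full method, the motion mask would seed class labels that iterative self-training then propagates, and the extracted object crops would be pasted into inpainted backgrounds to synthesize new training images.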
About the journal:
The objective of Computers in Industry is to present original, high-quality, application-oriented research papers that:
• Illuminate emerging trends and possibilities in the utilization of Information and Communication Technology in industry;
• Establish connections or integrations across various technology domains within the expansive realm of computer applications for industry;
• Foster connections or integrations across diverse application areas of ICT in industry.