KeyBoxGAN: enhancing 2D object detection through annotated and editable image synthesis
Yashuo Bai, Yong Song, Fei Dong, Xu Li, Ya Zhou, Yizhao Liao, Jinxiang Huang, Xin Yang
Complex & Intelligent Systems (published 2025-02-28). DOI: 10.1007/s40747-025-01817-9
Abstract
Sample augmentation, and sample generation in particular, helps address the challenge of training robust deep-learning-based image and video object detection models. However, existing methods lack sample editing capability and require substantial annotation work. This paper proposes an image sample generation method based on key box point detection and a generative adversarial network (GAN), named KeyBoxGAN, which makes generated image samples both labeled and editable. KeyBoxGAN first predefines the positions and embeddings of key box points, which control the objects' positions; the corresponding masks are then generated from Mahalanobis–Gaussian heatmaps and a Swin Transformer–SPADE generator to control the objects' generation regions as well as the background generation. This adaptive, precisely supervised image generation method disentangles object position from appearance, making the generated images editable and self-labeled. Experiments show that KeyBoxGAN surpasses DCGAN, StyleGAN2, and DDPM in objective assessments, including Fréchet Inception Distance (FID), Inception Score (IS), and Multi-Scale Structural Similarity Index (MS-SSIM), as well as in subjective evaluations, where it shows better visual quality. Moreover, its editable and self-labeled image generation makes it a valuable tool for addressing challenges such as occlusion, deformation, and varying environmental conditions in 2D object detection.
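To make the pipeline in the abstract concrete, the sketch below illustrates one plausible way a Mahalanobis–Gaussian heatmap and its binary generation-region mask could be derived from a predefined key box. This is not the authors' code: the box parameterization (center, width, height, rotation), the covariance scaling, and the binarization threshold are assumptions introduced purely for illustration.

```python
# Minimal sketch (assumed details, not the paper's implementation) of building a
# Mahalanobis-Gaussian heatmap for one predefined key box and thresholding it
# into a mask that could serve as spatial conditioning for a SPADE-style generator.
import numpy as np

def mahalanobis_gaussian_heatmap(h, w, cx, cy, bw, bh, theta=0.0):
    """Heatmap H(x, y) = exp(-0.5 * d_M(x, y)^2), where d_M is the Mahalanobis
    distance to the box center (cx, cy) under a covariance derived from the
    box size (bw, bh) and rotation theta (all choices here are assumptions)."""
    # Covariance aligned with the box axes; the half-extent scaling is assumed.
    sx, sy = bw / 2.0, bh / 2.0
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    cov = R @ np.diag([sx**2, sy**2]) @ R.T
    cov_inv = np.linalg.inv(cov)

    ys, xs = np.mgrid[0:h, 0:w]
    d = np.stack([xs - cx, ys - cy], axis=-1).astype(np.float64)  # (h, w, 2)
    # Squared Mahalanobis distance d^T * cov_inv * d at every pixel.
    m2 = np.einsum("hwi,ij,hwj->hw", d, cov_inv, d)
    return np.exp(-0.5 * m2)

def heatmap_to_mask(heatmap, thresh=0.5):
    """Binarize the heatmap into an object generation-region mask (threshold assumed)."""
    return (heatmap >= thresh).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical 128x128 canvas with one box centered at (64, 64), 40x24 px.
    hm = mahalanobis_gaussian_heatmap(128, 128, cx=64, cy=64, bw=40, bh=24)
    mask = heatmap_to_mask(hm)
    print(hm.shape, int(mask.sum()))  # heatmap size and number of masked pixels
```

In such a scheme, moving or resizing a key box changes only the heatmap and mask, so the object's position can be edited independently of its appearance, and the box itself doubles as the detection label; how KeyBoxGAN combines these masks with point embeddings inside the Swin Transformer–SPADE generator is described in the paper itself.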
About the journal
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques intended to foster cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research on which the journal focuses will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.