From multi-scale grids to dynamic regions: Dual-relation enhanced transformer for image captioning
Wei Zhou, Chuanle Song, Dihu Chen, Tao Su, Haifeng Hu, Chun Shan
Knowledge-Based Systems, Volume 311, Article 113127. Published 2025-02-08.
DOI: 10.1016/j.knosys.2025.113127
URL: https://www.sciencedirect.com/science/article/pii/S0950705125001741
Citations: 0
Abstract
Image captioning aims to describe the visual content of an image in an accurate and natural sentence. Some previous methods adopt convolutional networks to encode grid-level features, whereas others use an object detector to extract region-level features. However, the spatial resolution of high-level grid features is typically low, making it difficult for such models to capture small-scale objects. In addition, most region-based methods assign the same fixed number of regions to every image, failing to account for varying scene complexities; the resulting redundant regions introduce noise into region relationship modeling and disrupt sentence reasoning. To address these issues, we propose a novel Dual-Relation Enhanced Transformer (DRET) model that combines the complementary advantages of multi-scale grid and dynamic region features. In the encoding phase, we first apply multiple sampling strategies to generate multi-scale grid features, then design a novel multi-scale grid attention (MGA) encoder that learns the relationships between features at different scales. Meanwhile, a new dynamic region selection (DRS) encoder dynamically selects an appropriate number of regions according to the scene complexity of each input image, effectively pruning redundant regions and strengthening the correlations between the selected regions. In the decoding stage, we combine the advantages of grid and region features through a cross-modal adaptive gating (CAG) decoder that automatically determines the gate weights of the two visual feature types at each time step. Extensive experiments on MS-COCO and Flickr30K show that our model outperforms current methods.
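The abstract describes two mechanisms that can be sketched concretely: dynamic region selection (keep only regions whose relevance warrants it, so complex scenes retain more regions) and cross-modal adaptive gating (a learned scalar gate that blends grid and region features per decoding step). The following is a minimal, hedged sketch of these ideas in plain Python; the function names, the thresholding rule, and the scalar-gate formulation are illustrative assumptions, not the paper's actual implementation, which operates on learned tensors inside a Transformer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def select_regions(region_scores, threshold=0.5, min_keep=1):
    """Dynamic region selection (sketch): keep the regions whose relevance
    score clears a threshold, so images with complex scenes naturally
    retain more regions than simple ones. Thresholding is an assumed
    stand-in for the paper's learned selection mechanism."""
    kept = [i for i, s in enumerate(region_scores) if s >= threshold]
    if not kept:
        # Fallback: always keep at least the top-scoring region(s).
        kept = sorted(range(len(region_scores)),
                      key=lambda i: region_scores[i], reverse=True)[:min_keep]
    return kept

def adaptive_gate_fusion(grid_feat, region_feat, w_g, w_r, b):
    """Cross-modal adaptive gating (sketch): a scalar gate in (0, 1),
    computed from both feature vectors, decides how much weight the
    decoder gives to grid vs. region features at one time step."""
    gate = sigmoid(sum(g * wg for g, wg in zip(grid_feat, w_g))
                   + sum(r * wr for r, wr in zip(region_feat, w_r)) + b)
    # Convex combination: gate weights the grid features,
    # (1 - gate) weights the region features.
    return [gate * g + (1.0 - gate) * r
            for g, r in zip(grid_feat, region_feat)]
```

With zero weights and zero bias the gate is sigmoid(0) = 0.5, so the fusion is an even average of the two feature vectors; in the actual model these parameters are learned so the gate shifts toward whichever modality is more informative at each decoding step.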
Journal description:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.