Title: Injecting object pose relationships into image captioning via attention capsule networks
Authors: Hong Yu, Yuanqiu Liu, Hui Li, Xin Han, Han Liu
DOI: 10.1016/j.asoc.2025.113310
Journal: Applied Soft Computing, Volume 179, Article 113310 (Impact Factor 7.2; JCR Q1, Computer Science, Artificial Intelligence)
Publication date: 2025-05-31 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1568494625006210
Citations: 0
Abstract
Injecting object pose relationships into image captioning via attention capsule networks
Image captioning is a fundamental bridge linking computer vision and natural language processing. State-of-the-art methods mainly focus on improving the learning of image features through visual attention mechanisms. However, they are limited by fixed attention parameters and cannot adequately capture the spatial relationships among salient objects in an image. To fill this gap, we propose an Attentive Capsule Network (ACN) for image captioning that exploits the spatial information, especially the positional relationships, conveyed in an image to generate more accurate and detailed descriptions. The proposed ACN model is composed of a channel-wise bilinear attention block and an attentive capsule block. The channel-wise bilinear attention block obtains second-order correlations for each feature channel, while the attentive capsule block treats region-level image features as capsules to further capture hierarchical pose relationships via transformation matrices. To the best of our knowledge, this is the first work to explore the image captioning task using capsule networks. Extensive experiments show that our ACN model achieves remarkable performance, with a competitive CIDEr score of 133.7% on the MS-COCO Karpathy test split.
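The abstract names two components but gives no formulas, so the following is only an illustrative sketch, not the authors' actual design: a Gram-matrix approximation of second-order channel correlations for the bilinear attention idea, and a generic capsule projection with routing-by-agreement (in the style of standard capsule networks) for the pose-relationship idea. All shapes, the `squash` nonlinearity, and the three routing iterations are assumptions chosen for a small runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 36 region features of dimension 8
# (real detectors produce ~36 regions of ~2048 dims).
num_regions, feat_dim, num_out_caps, pose_dim = 36, 8, 4, 4
features = rng.standard_normal((num_regions, feat_dim))

# --- Channel-wise bilinear attention (sketch) ---
# The Gram matrix of channel activations holds pairwise channel
# correlations (second-order statistics); a softmax over its row
# sums yields one weight per channel.
gram = features.T @ features / num_regions            # (feat_dim, feat_dim)
channel_weights = np.exp(gram.sum(axis=1))
channel_weights /= channel_weights.sum()
attended = features * channel_weights                 # re-weighted features

# --- Attentive capsule block (sketch) ---
# Each region feature acts as a low-level capsule; transformation
# matrices W project it into "votes" for higher-level pose capsules.
W = rng.standard_normal((num_out_caps, feat_dim, pose_dim)) * 0.1
votes = np.einsum('kdp,nd->nkp', W, attended)         # (regions, caps, pose)

def squash(v, axis=-1, eps=1e-8):
    # Standard capsule nonlinearity: keeps vector norms below 1.
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return norm2 / (1.0 + norm2) * v / np.sqrt(norm2 + eps)

# Routing-by-agreement: regions whose votes agree with a pose capsule
# get larger coupling coefficients on the next iteration.
logits = np.zeros((num_regions, num_out_caps))
for _ in range(3):
    coupling = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    out = squash((coupling[..., None] * votes).sum(axis=0))  # (caps, pose)
    logits = logits + np.einsum('nkp,kp->nk', votes, out)

print(out.shape)  # pose capsules summarizing region-level relationships
```

The capsule norms stay below 1 by construction of `squash`, so a capsule's length can be read as the confidence that the corresponding pose pattern is present, which is the usual motivation for this family of models.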
Journal description:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is on publishing the highest-quality research on the application and convergence of fuzzy logic, neural networks, evolutionary computing, rough sets, and similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.