Shixin Peng, Can Xiong, Leyuan Liu, Laurence T. Yang, Jingying Chen
{"title":"GRPIC:使用三种视觉特征的端到端图像字幕模型","authors":"Shixin Peng, Can Xiong, Leyuan Liu, Laurence T. Yang, Jingying Chen","doi":"10.1007/s13042-024-02352-8","DOIUrl":null,"url":null,"abstract":"<p>lmage captioning is a multimodal task involving both computer vision and natural language processing. Recently, there has been a substantial improvement in the performance of image captioning with the introduction of multi-feature extraction methods. However, existing single-feature and multi-feature methods still face challenges such as a low refinement degree, weak feature complementarity, and lack of an end-to-end model. To tackle these issues, we propose an end-to-end image captioning model called GRPIC (Grid-Region-Pixel Image Captioning), which integrates three types of image features: region features, grid features, and pixel features. Our model utilizes the Swin Transformer for extracting grid features, DETR for extracting region features, and Deeplab for extracting pixel features. We merge pixel-level features with region and grid features to extract more refined contextual and detailed information. Additionally, we incorporate absolute position information and pairwise align the three features to fully leverage their complementarity. Qualitative and quantitative experiments conducted on the MSCOCO dataset demonstrate that our model achieved a 2.3% improvement in CIDEr, reaching 136.1 CIDEr compared to traditional dual-feature methods on the Karpathy test split. Furthermore, observation of the actual generated descriptions shows that the model also produced more refined captions.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":"24 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GRPIC: an end-to-end image captioning model using three visual features\",\"authors\":\"Shixin Peng, Can Xiong, Leyuan Liu, Laurence T. Yang, Jingying Chen\",\"doi\":\"10.1007/s13042-024-02352-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>lmage captioning is a multimodal task involving both computer vision and natural language processing. Recently, there has been a substantial improvement in the performance of image captioning with the introduction of multi-feature extraction methods. However, existing single-feature and multi-feature methods still face challenges such as a low refinement degree, weak feature complementarity, and lack of an end-to-end model. To tackle these issues, we propose an end-to-end image captioning model called GRPIC (Grid-Region-Pixel Image Captioning), which integrates three types of image features: region features, grid features, and pixel features. Our model utilizes the Swin Transformer for extracting grid features, DETR for extracting region features, and Deeplab for extracting pixel features. We merge pixel-level features with region and grid features to extract more refined contextual and detailed information. Additionally, we incorporate absolute position information and pairwise align the three features to fully leverage their complementarity. Qualitative and quantitative experiments conducted on the MSCOCO dataset demonstrate that our model achieved a 2.3% improvement in CIDEr, reaching 136.1 CIDEr compared to traditional dual-feature methods on the Karpathy test split. 
Furthermore, observation of the actual generated descriptions shows that the model also produced more refined captions.</p>\",\"PeriodicalId\":51327,\"journal\":{\"name\":\"International Journal of Machine Learning and Cybernetics\",\"volume\":\"24 1\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Machine Learning and Cybernetics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s13042-024-02352-8\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Machine Learning and Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s13042-024-02352-8","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
GRPIC: an end-to-end image captioning model using three visual features
Image captioning is a multimodal task involving both computer vision and natural language processing. Recently, the performance of image captioning has improved substantially with the introduction of multi-feature extraction methods. However, existing single-feature and multi-feature methods still face challenges such as a low degree of feature refinement, weak feature complementarity, and the lack of an end-to-end model. To tackle these issues, we propose an end-to-end image captioning model called GRPIC (Grid-Region-Pixel Image Captioning), which integrates three types of image features: region features, grid features, and pixel features. Our model uses the Swin Transformer to extract grid features, DETR to extract region features, and DeepLab to extract pixel features. We merge pixel-level features with region and grid features to extract more refined contextual and detailed information. Additionally, we incorporate absolute position information and pairwise align the three features to fully leverage their complementarity. Qualitative and quantitative experiments on the MSCOCO dataset show that, on the Karpathy test split, our model improves CIDEr by 2.3% over traditional dual-feature methods, reaching 136.1 CIDEr. Furthermore, inspection of the generated descriptions shows that the model also produces more refined captions.
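To make the three-feature idea concrete, the following is a minimal sketch (not the authors' code) of how grid, region, and pixel features could be projected to a shared dimension and combined before a caption decoder. The dimensions, module names, and the simple concatenation-based fusion are assumptions for illustration; the paper additionally uses absolute position information and pairwise alignment between the feature types.

```python
# Hypothetical sketch of GRPIC-style three-branch feature fusion.
# Assumed inputs: grid tokens from a Swin Transformer, object queries from DETR,
# and flattened pixel embeddings from a DeepLab-style segmentation backbone.
import torch
import torch.nn as nn


class ThreeFeatureFusion(nn.Module):
    """Project grid, region, and pixel features to one dimension and concatenate
    them along the token axis, producing a single sequence for a caption decoder."""

    def __init__(self, grid_dim=1024, region_dim=256, pixel_dim=2048, d_model=512):
        super().__init__()
        self.grid_proj = nn.Linear(grid_dim, d_model)      # Swin grid tokens
        self.region_proj = nn.Linear(region_dim, d_model)  # DETR object queries
        self.pixel_proj = nn.Linear(pixel_dim, d_model)    # DeepLab pixel embeddings

    def forward(self, grid_feats, region_feats, pixel_feats):
        # grid_feats:   (B, N_grid, grid_dim),   e.g. 7x7 = 49 grid tokens
        # region_feats: (B, N_obj,  region_dim), e.g. 100 DETR queries
        # pixel_feats:  (B, N_pix,  pixel_dim),  flattened segmentation feature map
        g = self.grid_proj(grid_feats)
        r = self.region_proj(region_feats)
        p = self.pixel_proj(pixel_feats)
        # Simplest possible integration: concatenate the three token sequences.
        # GRPIC itself also adds absolute positions and aligns the features pairwise.
        return torch.cat([g, r, p], dim=1)


if __name__ == "__main__":
    fusion = ThreeFeatureFusion()
    fused = fusion(torch.randn(2, 49, 1024),   # grid
                   torch.randn(2, 100, 256),   # region
                   torch.randn(2, 196, 2048))  # pixel
    print(fused.shape)  # torch.Size([2, 345, 512])
```

A caption decoder (e.g. a Transformer decoder) would then attend over this fused sequence when generating each word; the sketch stops at feature fusion because the abstract does not specify the decoder details.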
Journal introduction:
Cybernetics is concerned with describing complex interactions and interrelationships between systems which are omnipresent in our daily life. Machine Learning discovers fundamental functional relationships between variables and ensembles of variables in systems. The merging of the disciplines of Machine Learning and Cybernetics is aimed at the discovery of various forms of interaction between systems through diverse mechanisms of learning from data.
The International Journal of Machine Learning and Cybernetics (IJMLC) focuses on the key research problems emerging at the junction of machine learning and cybernetics and serves as a broad forum for rapid dissemination of the latest advancements in the area. The emphasis of IJMLC is on the hybrid development of machine learning and cybernetics schemes inspired by different contributing disciplines such as engineering, mathematics, cognitive sciences, and applications. New ideas, design alternatives, implementations and case studies pertaining to all the aspects of machine learning and cybernetics fall within the scope of the IJMLC.
Key research areas to be covered by the journal include:
Machine Learning for modeling interactions between systems
Pattern Recognition technology to support discovery of system-environment interaction
Control of system-environment interactions
Biochemical interaction in biological and biologically-inspired systems
Learning for improvement of communication schemes between systems