{"title":"图像说明使用GRIT,便携式ResNet和BART上下文调整进行增强","authors":"Wuyang Zhang, Jianming Ma","doi":"10.1109/UV56588.2022.10185494","DOIUrl":null,"url":null,"abstract":"This paper aims to create an image captioning novel architecture that infuses Grid and Region-based image caption transformer, ResNet, and BART language model to offer a more detail-oriented image captioning model. Conventional state-of-the-art image captioning models mainly focuses on region-based features. They rely on decent object detector architectures like Faster R-CNN to extract object-level information to describe the image’s content. Nevertheless, they cannot remove contextual information, high computational costs, and the ability to introduce in-depth external details of objects presented in the images—the replacement of conventional CNN-based detectors results in faster computation. The experiment can generate image captions comparatively fast with higher accuracy and details with contextual information.","PeriodicalId":211011,"journal":{"name":"2022 6th International Conference on Universal Village (UV)","volume":"187 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Image Caption Enhancement with GRIT, Portable ResNet and BART Context-Tuning\",\"authors\":\"Wuyang Zhang, Jianming Ma\",\"doi\":\"10.1109/UV56588.2022.10185494\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper aims to create an image captioning novel architecture that infuses Grid and Region-based image caption transformer, ResNet, and BART language model to offer a more detail-oriented image captioning model. Conventional state-of-the-art image captioning models mainly focuses on region-based features. They rely on decent object detector architectures like Faster R-CNN to extract object-level information to describe the image’s content. Nevertheless, they cannot remove contextual information, high computational costs, and the ability to introduce in-depth external details of objects presented in the images—the replacement of conventional CNN-based detectors results in faster computation. The experiment can generate image captions comparatively fast with higher accuracy and details with contextual information.\",\"PeriodicalId\":211011,\"journal\":{\"name\":\"2022 6th International Conference on Universal Village (UV)\",\"volume\":\"187 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 6th International Conference on Universal Village (UV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/UV56588.2022.10185494\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th International Conference on Universal Village (UV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UV56588.2022.10185494","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Image Caption Enhancement with GRIT, Portable ResNet and BART Context-Tuning
This paper presents a novel image captioning architecture that fuses the Grid- and Region-based Image captioning Transformer (GRIT), a ResNet backbone, and the BART language model to produce more detail-oriented captions. Conventional state-of-the-art image captioning models focus mainly on region-based features: they rely on strong object detector architectures such as Faster R-CNN to extract object-level information describing the image's content. However, these models lose contextual information, incur high computational costs, and cannot introduce in-depth external details about the objects presented in the images; replacing the conventional CNN-based detectors results in faster computation. Experiments show that the proposed model generates image captions comparatively quickly, with higher accuracy and richer contextual detail.
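To make the three-stage design concrete, here is a minimal sketch of how such a pipeline could be wired together: a ResNet-50 backbone supplies a grid of visual features, a small stand-in transformer decoder plays the role of the GRIT-style captioner, and a fine-tuned BART model would rewrite the draft caption with added context. The `DraftCaptioner` class, all dimensions, and the wiring are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of the ResNet -> GRIT-style captioner -> BART pipeline
# described in the abstract. Shapes and module names are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class DraftCaptioner(nn.Module):
    """Stand-in for the GRIT-style grid/region caption transformer."""

    def __init__(self, feat_dim=2048, d_model=512, vocab_size=10000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)       # map grid features to model width
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, grid_feats, tokens):
        memory = self.proj(grid_feats)                 # (B, HW, d_model)
        tgt = self.embed(tokens)                       # (B, T, d_model)
        out = self.decoder(tgt, memory)                # cross-attend to grid features
        return self.head(out)                          # (B, T, vocab_size)


# 1) Portable ResNet backbone: keep the convolutional stages, drop the
#    classifier head, so the final 7x7x2048 map serves as 49 visual tokens.
backbone = nn.Sequential(*list(resnet50(weights=None).children())[:-2])

image = torch.randn(1, 3, 224, 224)                    # dummy input image
fmap = backbone(image)                                 # (1, 2048, 7, 7)
grid = fmap.flatten(2).transpose(1, 2)                 # (1, 49, 2048) grid features

# 2) GRIT-style transformer decodes a draft caption (one greedy step shown).
captioner = DraftCaptioner()
draft_tokens = torch.zeros(1, 1, dtype=torch.long)     # <bos> placeholder
logits = captioner(grid, draft_tokens)
next_token = logits[:, -1].argmax(-1)                  # greedy pick of next token

# 3) BART context-tuning: a fine-tuned BART checkpoint would take the decoded
#    draft caption text and rewrite it with richer contextual detail, e.g.
#    (hypothetical call, assuming a Hugging Face tokenizer/model pair):
#    refined = bart.generate(tokenizer(draft_text, return_tensors="pt").input_ids)
```

The design choice the sketch reflects is separation of concerns: the visual stages only need to produce a plausible draft, while the language model, trained on far more text than any captioning corpus, supplies the contextual detail the abstract emphasizes.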