vieCap4H Challenge 2021: A transformer-based method for Healthcare Image Captioning in Vietnamese

Doanh Bui Cao, Truc Thi Thanh Trinh, Trong-Thuan Nguyen, V. Nguyen, Nguyen D. Vo
{"title":"vieCap4H Challenge 2021: A transformer-based method for Healthcare Image Captioning in Vietnamese","authors":"Doanh Bui Cao, Truc Thi Thanh Trinh, Trong-Thuan Nguyen, V. Nguyen, Nguyen D. Vo","doi":"10.25073/2588-1086/vnucsce.371","DOIUrl":null,"url":null,"abstract":"The automatic image caption generation is attractive to both Computer Vision and Natural Language Processing research community because it lies in the gap between these two fields. Within the vieCap4H contest organized by VLSP 2021, we participate and present a Transformer-based solution for image captioning in the healthcare domain. In detail, we use grid features as visual presentation and pre-training a BERT-based language model from PhoBERT-base pre-trained model to obtain language presentation used in the Adaptive Decoder module in the RSTNet model. Besides, we indicate a suitable schedule with the self-critical training sequence (SCST) technique to achieve the best results. Through experiments, we achieve an average of 30.3% BLEU score on the public-test round and 28.9% on the private-test round, which ranks 3rd and 4th, respectively. Source code is available at https://github.com/caodoanh2001/uit-vlsp-viecap4h-solution. \n ","PeriodicalId":416488,"journal":{"name":"VNU Journal of Science: Computer Science and Communication Engineering","volume":"78 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"VNU Journal of Science: Computer Science and Communication Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25073/2588-1086/vnucsce.371","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Automatic image caption generation is attractive to both the Computer Vision and Natural Language Processing research communities because it bridges the gap between these two fields. In the vieCap4H contest organized by VLSP 2021, we participated and present a Transformer-based solution for image captioning in the healthcare domain. In detail, we use grid features as the visual representation and pre-train a BERT-based language model from the PhoBERT-base pre-trained model to obtain the language representation used by the Adaptive Decoder module of the RSTNet model. In addition, we identify a suitable training schedule with the self-critical sequence training (SCST) technique to achieve the best results. Through experiments, we achieve an average BLEU score of 30.3% on the public-test round and 28.9% on the private-test round, ranking 3rd and 4th, respectively. Source code is available at https://github.com/caodoanh2001/uit-vlsp-viecap4h-solution.
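Since the approach relies on PhoBERT-base for the language signal consumed by RSTNet's Adaptive Decoder, the sketch below shows how token-level PhoBERT representations can be extracted with the HuggingFace Transformers library. This is a minimal illustration assuming the public vinai/phobert-base checkpoint; it is not the authors' released code, and the example caption is hypothetical.

```python
# Minimal sketch (not the authors' code): obtain token-level language
# representations from PhoBERT-base, the kind of language signal a
# decoder module such as RSTNet's Adaptive Decoder can attend over.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
language_model = AutoModel.from_pretrained("vinai/phobert-base")
language_model.eval()

# Hypothetical Vietnamese caption; in practice PhoBERT expects
# word-segmented input (e.g. produced with VnCoreNLP's RDRSegmenter).
caption = "bác sĩ đang khám bệnh cho bệnh nhân"
inputs = tokenizer(caption, return_tensors="pt")

with torch.no_grad():
    outputs = language_model(**inputs)

# Shape (batch, sequence_length, 768): one 768-d vector per sub-word token.
language_features = outputs.last_hidden_state
print(language_features.shape)
```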
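The abstract also mentions scheduling self-critical sequence training (SCST) after the usual cross-entropy phase. The following is a minimal sketch of the standard SCST objective in PyTorch; the function name, tensor shapes, and reward choice are illustrative assumptions, not details taken from the authors' implementation.

```python
# Minimal sketch of the standard SCST loss: sampled captions are rewarded
# relative to a greedy-decoding baseline from the same model.
import torch

def scst_loss(sample_log_probs: torch.Tensor,
              sample_rewards: torch.Tensor,
              greedy_rewards: torch.Tensor) -> torch.Tensor:
    """sample_log_probs: (batch,) summed log-probs of sampled captions.
    sample_rewards / greedy_rewards: (batch,) caption-quality scores
    (e.g. CIDEr or BLEU) of the sampled and greedy-decoded captions."""
    # Only samples that beat the model's own greedy output get a
    # positive learning signal; rewards are treated as constants.
    advantage = (sample_rewards - greedy_rewards).detach()
    return -(advantage * sample_log_probs).mean()
```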