Multimodal Machine Translation Enhancement by Fusing Multimodal-attention and Fine-grained Image Features

Lin Li, Turghun Tayir
{"title":"Multimodal Machine Translation Enhancement by Fusing Multimodal-attention and Fine-grained Image Features","authors":"Lin Li, Turghun Tayir","doi":"10.1109/MIPR51284.2021.00050","DOIUrl":null,"url":null,"abstract":"With recent development of the multimodal machine translation (MMT) network architectures, recurrent models have effectively been replaced by attention mechanism and the translation results have been enhanced with the assistance of fine-grained image information. Although attention is a powerful and ubiquitous mechanism, different number of attention heads and granularity image features aligned by attention have an impact on the quality of multimodal machine translation. In order to address above problems, this paper proposes a multimodal machine translation enhancement by fusing multimodal-attention and fine-grained image features method which builds some submodels by introducing different granularity of image features to the multimodal-attention mechanism with different number of heads. Moreover, these sub-models are randomly fused and fusion models are obtained. The experimental results on the Multi30k dataset that the pruned attention heads lead to the improvement of translation results. Finally, our fusion model obtained the best results according to the automatic evaluation metrics BLEU compared with sub-models and some baselines.","PeriodicalId":139543,"journal":{"name":"2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MIPR51284.2021.00050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

With the recent development of multimodal machine translation (MMT) network architectures, recurrent models have effectively been replaced by the attention mechanism, and translation results have been enhanced with the assistance of fine-grained image information. Although attention is a powerful and ubiquitous mechanism, both the number of attention heads and the granularity of the image features aligned by attention affect the quality of multimodal machine translation. To address these problems, this paper proposes a multimodal machine translation enhancement method that fuses multimodal attention and fine-grained image features: it builds several sub-models by introducing image features of different granularities into a multimodal-attention mechanism with different numbers of heads. These sub-models are then randomly fused to obtain fusion models. Experimental results on the Multi30k dataset show that pruning attention heads improves translation results. Finally, according to the automatic evaluation metric BLEU, our fusion model obtained the best results compared with the sub-models and several baselines.
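
The abstract gives no implementation details, so the following is only a minimal sketch of a multimodal-attention layer in the spirit described above: text states attend over a joint sequence of text states and projected image features, with the number of heads and the image granularity (one global vector versus a grid of regional features) as the hyperparameters the sub-models vary. All class names, dimensions, and shapes here are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn as nn

    class MultimodalAttention(nn.Module):
        """Hypothetical multimodal-attention layer: text queries attend over
        the concatenation of text states and projected image features."""
        def __init__(self, d_model=512, num_heads=8, d_image=2048):
            super().__init__()
            # Project CNN image features into the text embedding space.
            self.img_proj = nn.Linear(d_image, d_model)
            # num_heads is the hyperparameter varied (and pruned) across sub-models.
            self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

        def forward(self, text_states, image_feats):
            # text_states: (batch, src_len, d_model)
            # image_feats: (batch, n_regions, d_image); n_regions = 1 for a coarse
            # global feature, or e.g. 49 for fine-grained 7x7 regional features.
            img = self.img_proj(image_feats)
            memory = torch.cat([text_states, img], dim=1)  # joint key/value sequence
            out, _ = self.attn(text_states, memory, memory)
            return out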
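The abstract also leaves the fusion rule unspecified. One common way to realize "randomly fused" sub-models is to sample a subset and average their output distributions at inference time; the sketch below assumes each sub-model maps (src_tokens, image_feats) to next-token logits, and is an illustrative ensembling scheme, not necessarily the paper's exact procedure.

    import random
    import torch

    def fuse_submodels(submodels, src_tokens, image_feats, k=3):
        # Randomly sample k sub-models (each differing in head count and
        # image-feature granularity) and average their softmaxed outputs.
        chosen = random.sample(submodels, k)
        probs = [m(src_tokens, image_feats).softmax(dim=-1) for m in chosen]
        return torch.stack(probs).mean(dim=0)  # fused probability distribution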