An Empirical Study for Vietnamese Constituency Parsing with Pre-training

Tuan-Vi Tran, Xuan-Thien Pham, Duc-Vu Nguyen, Kiet Van Nguyen, N. Nguyen
{"title":"基于预训练的越南语选区分析实证研究","authors":"Tuan-Vi Tran, Xuan-Thien Pham, Duc-Vu Nguyen, Kiet Van Nguyen, N. Nguyen","doi":"10.1109/RIVF51545.2021.9642143","DOIUrl":null,"url":null,"abstract":"Constituency parsing is an important task that gets more attention in natural language processing. In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows the self-attention encoder architecture and a chart decoder using a CKY-style inference algorithm. We present analyses of the experiment results of the comparison of our empirical method using pre-training models XLM-R and PhoBERT on both Vietnamese datasets VietTreebank and NIIVTB1. The results show that our model with XLM-R archived the significantly F1-score better than other pre-training models, VietTreebank at 81.19% and NIIVTB1 at 85.70%.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"52 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"An Empirical Study for Vietnamese Constituency Parsing with Pre-training\",\"authors\":\"Tuan-Vi Tran, Xuan-Thien Pham, Duc-Vu Nguyen, Kiet Van Nguyen, N. Nguyen\",\"doi\":\"10.1109/RIVF51545.2021.9642143\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Constituency parsing is an important task that gets more attention in natural language processing. In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows the self-attention encoder architecture and a chart decoder using a CKY-style inference algorithm. We present analyses of the experiment results of the comparison of our empirical method using pre-training models XLM-R and PhoBERT on both Vietnamese datasets VietTreebank and NIIVTB1. The results show that our model with XLM-R archived the significantly F1-score better than other pre-training models, VietTreebank at 81.19% and NIIVTB1 at 85.70%.\",\"PeriodicalId\":6860,\"journal\":{\"name\":\"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)\",\"volume\":\"52 1\",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RIVF51545.2021.9642143\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RIVF51545.2021.9642143","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Constituency parsing is an important task that has received increasing attention in natural language processing. In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows a self-attention encoder architecture with a chart decoder using a CKY-style inference algorithm. We present analyses of experimental results comparing our empirical method using the pre-trained models XLM-R and PhoBERT on two Vietnamese datasets, VietTreebank and NIIVTB1. The results show that our model with XLM-R achieved significantly better F1-scores than the other pre-trained models: 81.19% on VietTreebank and 85.70% on NIIVTB1.
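To make the chart-decoding step concrete, below is a minimal sketch of CKY-style inference for unlabeled bracketing. It assumes a precomputed `span_scores` table (a stand-in for the scores a self-attention encoder would assign to each span) and omits the per-span label scoring that a full span-based parser adds; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def cky_decode(span_scores, n):
    """Find the highest-scoring binary bracketing of a sentence of
    length n, given a score for every span (i, j).  best[(i, j)] holds
    the best total score of a subtree covering words i..j-1."""
    best = {}
    split = {}
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            score = span_scores[(i, j)]
            if length == 1:
                best[(i, j)] = score
            else:
                # choose the split point that maximizes the two children
                k, s = max(
                    ((k, best[(i, k)] + best[(k, j)]) for k in range(i + 1, j)),
                    key=lambda t: t[1],
                )
                best[(i, j)] = score + s
                split[(i, j)] = k
    return best[(0, n)], split

# Toy usage: random span scores for a 4-word sentence.
rng = np.random.default_rng(0)
n = 4
scores = {(i, j): float(rng.normal()) for i in range(n) for j in range(i + 1, n + 1)}
total, splits = cky_decode(scores, n)
print(total, splits)  # splits can be followed top-down to rebuild the tree
```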
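For the pre-trained encoders, the comparison described in the abstract can be set up at the input level with Hugging Face `transformers`. The paper does not state which checkpoints were used; `xlm-roberta-base` and `vinai/phobert-base` below are assumptions. One practical difference when swapping the two encoders into the same parser: PhoBERT expects word-segmented Vietnamese (syllables of a multi-syllable word joined by underscores), whereas XLM-R consumes raw text.

```python
from transformers import AutoModel, AutoTokenizer

# XLM-R: multilingual encoder, works on raw (unsegmented) Vietnamese text.
# The checkpoint choice is an assumption; the paper does not name one.
xlmr_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
xlmr = AutoModel.from_pretrained("xlm-roberta-base")

# PhoBERT: Vietnamese-specific encoder; expects pre-segmented input with
# the syllables of a multi-syllable word joined by underscores.
pho_tok = AutoTokenizer.from_pretrained("vinai/phobert-base")
phobert = AutoModel.from_pretrained("vinai/phobert-base")

sent = "Tôi là sinh_viên"  # "I am a student", word-segmented for PhoBERT
inputs = pho_tok(sent, return_tensors="pt")
hidden = phobert(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
print(hidden.shape)
```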