{"title":"基于训练后BERT的文本相似度","authors":"Yongping Xing, Chaoyi Bian","doi":"10.1109/ICAA53760.2021.00058","DOIUrl":null,"url":null,"abstract":"Text similarity is an important taskin natural language processing. The pre-training BERT model which is get from through large-scale corpus traininghas achieved good results in various natural language processing tasks. However, domain knowledge is not introduced to the model. After the post-training through domain data, the bias of the model for the domain knowledge will be reduced, which improves performance in reading comprehension and emotional aspect extraction. In this paper, the domain knowledge is introduced through the post-training andthen text similarity is discussed.","PeriodicalId":121879,"journal":{"name":"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)","volume":"241 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Text Similarity Based on Post-training BERT\",\"authors\":\"Yongping Xing, Chaoyi Bian\",\"doi\":\"10.1109/ICAA53760.2021.00058\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Text similarity is an important taskin natural language processing. The pre-training BERT model which is get from through large-scale corpus traininghas achieved good results in various natural language processing tasks. However, domain knowledge is not introduced to the model. After the post-training through domain data, the bias of the model for the domain knowledge will be reduced, which improves performance in reading comprehension and emotional aspect extraction. In this paper, the domain knowledge is introduced through the post-training andthen text similarity is discussed.\",\"PeriodicalId\":121879,\"journal\":{\"name\":\"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)\",\"volume\":\"241 \",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAA53760.2021.00058\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAA53760.2021.00058","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Text similarity is an important task in natural language processing. The pre-trained BERT model, obtained through training on a large-scale corpus, has achieved good results on a variety of natural language processing tasks. However, domain knowledge is not introduced into the model. After post-training on domain data, the model's bias with respect to domain knowledge is reduced, which improves performance in reading comprehension and aspect extraction for sentiment analysis. In this paper, domain knowledge is introduced through post-training, and text similarity is then discussed.
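The abstract does not specify the authors' exact post-training setup. The sketch below illustrates the general post-training recipe it refers to, namely continued masked-language-model training of BERT on a domain corpus, using the Hugging Face Transformers library; the corpus file name, base checkpoint, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Sketch: domain post-training of BERT via continued masked-language modeling.
# Assumptions: a plain-text domain corpus "domain_corpus.txt" (one sentence per
# line), the "bert-base-uncased" checkpoint, and generic hyperparameters.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

class DomainCorpus(Dataset):
    """Tokenizes a plain-text file, one sentence per line."""
    def __init__(self, path, tokenizer, max_len=128):
        with open(path, encoding="utf-8") as f:
            lines = [line.strip() for line in f if line.strip()]
        self.enc = tokenizer(lines, truncation=True, max_length=max_len)

    def __len__(self):
        return len(self.enc["input_ids"])

    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in self.enc.items()}

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

train_set = DomainCorpus("domain_corpus.txt", tokenizer)  # hypothetical file
# Random 15% token masking, the standard MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="bert-post-trained",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

Trainer(model=model, args=args,
        train_dataset=train_set,
        data_collator=collator).train()

# The post-trained encoder can then be fine-tuned on a sentence-pair
# text-similarity task (e.g. with BertForSequenceClassification).
model.save_pretrained("bert-post-trained")
tokenizer.save_pretrained("bert-post-trained")
```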