Peichao Lai, Feiyang Ye, Yanggeng Fu, Zhiwei Chen, Yingjie Wu, Yilei Wang
{"title":"M-Sim:低资源情境下中文简答评分的多层次语义推理模型","authors":"Peichao Lai, Feiyang Ye, Yanggeng Fu, Zhiwei Chen, Yingjie Wu, Yilei Wang","doi":"10.1016/j.csl.2023.101575","DOIUrl":null,"url":null,"abstract":"<div><p><span>Short answer scoring is a significant task in natural language processing<span>. On datasets comprising numerous explicit or implicit symbols and quantization entities, the existing approaches continue to perform poorly. Additionally, the majority of relevant datasets contain few-shot samples, reducing model efficacy in low-resource scenarios. To solve the above issues, we propose a Multi-level Semantic Inference Model (M-Sim), which obtains features at multiple scales to fully consider the explicit or implicit entity information contained in the data. We then design a prompt-based data augmentation to construct the simulated datasets, which effectively enhance model performance in low-resource scenarios. Our M-Sim outperforms the best competitor models by an average of 1.48 percent in the F1 score. The data augmentation significantly increases all approaches’ performance by an average of 0.036 in </span></span>correlation coefficient scores.</p></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"M-Sim: Multi-level Semantic Inference Model for Chinese short answer scoring in low-resource scenarios\",\"authors\":\"Peichao Lai, Feiyang Ye, Yanggeng Fu, Zhiwei Chen, Yingjie Wu, Yilei Wang\",\"doi\":\"10.1016/j.csl.2023.101575\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span>Short answer scoring is a significant task in natural language processing<span>. On datasets comprising numerous explicit or implicit symbols and quantization entities, the existing approaches continue to perform poorly. 
Additionally, the majority of relevant datasets contain few-shot samples, reducing model efficacy in low-resource scenarios. To solve the above issues, we propose a Multi-level Semantic Inference Model (M-Sim), which obtains features at multiple scales to fully consider the explicit or implicit entity information contained in the data. We then design a prompt-based data augmentation to construct the simulated datasets, which effectively enhance model performance in low-resource scenarios. Our M-Sim outperforms the best competitor models by an average of 1.48 percent in the F1 score. The data augmentation significantly increases all approaches’ performance by an average of 0.036 in </span></span>correlation coefficient scores.</p></div>\",\"PeriodicalId\":50638,\"journal\":{\"name\":\"Computer Speech and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2023-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Speech and Language\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0885230823000943\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230823000943","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
M-Sim: Multi-level Semantic Inference Model for Chinese short answer scoring in low-resource scenarios
Short answer scoring is a significant task in natural language processing. On datasets comprising numerous explicit or implicit symbols and quantization entities, existing approaches continue to perform poorly. Additionally, the majority of relevant datasets contain only a small number of samples, which reduces model efficacy in low-resource scenarios. To solve the above issues, we propose a Multi-level Semantic Inference Model (M-Sim), which obtains features at multiple scales to fully consider the explicit or implicit entity information contained in the data. We then design a prompt-based data augmentation method to construct simulated datasets, which effectively enhances model performance in low-resource scenarios. Our M-Sim outperforms the best competitor models by an average of 1.48 percent in F1 score. The data augmentation significantly increases all approaches' performance by an average of 0.036 in correlation coefficient scores.
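The abstract mentions prompt-based data augmentation for constructing simulated training data but does not describe the templates or pipeline. As a purely illustrative sketch of the general technique (not the authors' actual method), template-based augmentation renders each labeled sample through several prompt variants; every template and field name below is an assumption.

```python
# Hypothetical sketch of prompt-based data augmentation for short answer
# scoring. The templates and field names are illustrative assumptions,
# not taken from the paper.

PROMPT_TEMPLATES = [
    "Question: {question} Reference answer: {reference} Student answer: {answer}",
    "Given the reference answer '{reference}', score this response: {answer}",
]


def augment(sample: dict) -> list:
    """Render one labeled sample into several prompt variants,
    multiplying the effective number of training examples."""
    return [template.format(**sample) for template in PROMPT_TEMPLATES]


sample = {
    "question": "What is 2 + 2?",
    "reference": "4",
    "answer": "four",
}
augmented = augment(sample)
```

Each original example yields one augmented instance per template, so a few-shot dataset grows linearly with the number of templates while keeping the original label.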
Journal introduction:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.