{"title":"解释基于变压器的自动简答评分模型","authors":"Andrew Poulton, Sebas Eliens","doi":"10.1145/3488466.3488479","DOIUrl":null,"url":null,"abstract":"Over recent years, advances in natural language processing have brought ever more advanced and expressive language models to the world. With open-source implementations and model registries, these state-of-the-art models are freely available to anyone, and the successful application of transfer learning has meant benchmarks on previously difficult tasks can be beaten with relative ease. In this regard, Automatic Short Answer Grading (ASAG) is no different. Unfortunately, an infallible ASAG system is beyond the reach of current models, and so there is an onus on any ASAG implementation to keep a human in the loop to ensure answers are being accurately graded. To assist the humans in the loop, one may apply various explainability methods to a model prediction to give clues as to why the model came to its conclusion. However, amongst the many available models and explainability techniques, which ones provide the best accuracy and most intuitive explanations? This work proposes a framework by which this decision can be made, and assesses several popular transformer-based models with various explainability methods on the widely used benchmark dataset from Semeval-2013.","PeriodicalId":196340,"journal":{"name":"Proceedings of the 5th International Conference on Digital Technology in Education","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Explaining transformer-based models for automatic short answer grading\",\"authors\":\"Andrew Poulton, Sebas Eliens\",\"doi\":\"10.1145/3488466.3488479\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over recent years, advances in natural language processing have brought ever more advanced and expressive language models to the world. With open-source implementations and model registries, these state-of-the-art models are freely available to anyone, and the successful application of transfer learning has meant benchmarks on previously difficult tasks can be beaten with relative ease. In this regard, Automatic Short Answer Grading (ASAG) is no different. Unfortunately, an infallible ASAG system is beyond the reach of current models, and so there is an onus on any ASAG implementation to keep a human in the loop to ensure answers are being accurately graded. To assist the humans in the loop, one may apply various explainability methods to a model prediction to give clues as to why the model came to its conclusion. However, amongst the many available models and explainability techniques, which ones provide the best accuracy and most intuitive explanations? 
This work proposes a framework by which this decision can be made, and assesses several popular transformer-based models with various explainability methods on the widely used benchmark dataset from Semeval-2013.\",\"PeriodicalId\":196340,\"journal\":{\"name\":\"Proceedings of the 5th International Conference on Digital Technology in Education\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th International Conference on Digital Technology in Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3488466.3488479\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Digital Technology in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3488466.3488479","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Explaining transformer-based models for automatic short answer grading
Over recent years, advances in natural language processing have brought ever more capable and expressive language models to the world. With open-source implementations and model registries, these state-of-the-art models are freely available to anyone, and the successful application of transfer learning has meant that benchmarks on previously difficult tasks can be beaten with relative ease. In this regard, Automatic Short Answer Grading (ASAG) is no different. Unfortunately, an infallible ASAG system is beyond the reach of current models, and so the onus is on any ASAG implementation to keep a human in the loop to ensure answers are being accurately graded. To assist the humans in the loop, one may apply various explainability methods to a model prediction to give clues as to why the model came to its conclusion. However, among the many available models and explainability techniques, which ones provide the best accuracy and the most intuitive explanations? This work proposes a framework by which this decision can be made, and assesses several popular transformer-based models with various explainability methods on the widely used benchmark dataset from SemEval-2013.
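To make the setup concrete, the sketch below shows the general shape of the pipeline the abstract describes: a transformer sequence classifier that grades a student answer against a reference answer, with a token-attribution method used to explain the prediction. This is not the authors' implementation; the model name, the two-label scheme, the example answers, and the choice of Integrated Gradients (via the Captum library) are all illustrative assumptions, and in practice the classifier would first be fine-tuned on the SemEval-2013 grading data.

```python
# Hedged sketch: grading a (reference, student) answer pair with a transformer
# and attributing the prediction to input tokens via Integrated Gradients.
# Assumes a BERT backbone; the paper compares several transformer models.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from captum.attr import LayerIntegratedGradients

MODEL = "bert-base-uncased"  # illustrative choice, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# Two labels (correct / incorrect) is an assumption; the head shown here is
# untrained and would be fine-tuned on graded answers before real use.
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

reference = "Photosynthesis converts light energy into chemical energy."
student = "Plants turn sunlight into stored chemical energy."
enc = tokenizer(reference, student, return_tensors="pt")

def predict(input_ids):
    # Probability of the "correct" class (index 1, by assumption). The mask
    # and segment ids are closed over and broadcast across IG's batch steps.
    logits = model(
        input_ids=input_ids,
        attention_mask=enc["attention_mask"],
        token_type_ids=enc["token_type_ids"],
    ).logits
    return logits.softmax(dim=-1)[:, 1]

# Attribute the predicted grade back to individual tokens by integrating
# gradients over the embedding layer, from an all-[PAD] baseline.
lig = LayerIntegratedGradients(predict, model.bert.embeddings)
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
attributions = lig.attribute(enc["input_ids"], baselines=baseline)
token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per token

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>15s}  {score:+.3f}")
```

The per-token scores are what a human grader in the loop would see: positive scores mark tokens that pushed the model toward "correct", negative scores the reverse. Swapping in a different explainability method (e.g. SHAP or attention-based attribution) or a different transformer changes only the attribution call and the model name, which is what makes a side-by-side comparison of the kind the paper proposes practical.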