{"title":"基于双层主题模型的多任务学习构建反应多维自动评分和解释","authors":"Jiawei Xiong, Feiming Li","doi":"10.1111/emip.12550","DOIUrl":null,"url":null,"abstract":"<p>Multidimensional scoring evaluates each constructed-response answer from more than one rating dimension and/or trait such as lexicon, organization, and supporting ideas instead of only one holistic score, to help students distinguish between various dimensions of writing quality. In this work, we present a bilevel learning model for combining two objectives, the multidimensional automated scoring, and the students’ writing structure analysis and interpretation. The dual objectives are enabled by a supervised model, called Latent Dirichlet Allocation Multitask Learning (LDAMTL), integrating a topic model and a multitask learning model with an attention mechanism. Two empirical data sets were employed to indicate LDAMTL model performance. On one hand, results suggested that LDAMTL owns better scoring and QW-<i>κ</i> values than two other competitor models, the supervised latent Dirichlet allocation, and Bidirectional Encoder Representations from Transformers at the 5% significance level. On the other hand, extracted topic structures revealed that students with a higher language score tended to employ more compelling words to support the argument in their answers. 
This study suggested that LDAMTL not only demonstrates the model performance by conjugating the underlying shared representation of each topic and learned representation from the neural networks but also helps understand students’ writing.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Bilevel Topic Model-Based Multitask Learning for Constructed-Responses Multidimensional Automated Scoring and Interpretation\",\"authors\":\"Jiawei Xiong, Feiming Li\",\"doi\":\"10.1111/emip.12550\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Multidimensional scoring evaluates each constructed-response answer from more than one rating dimension and/or trait such as lexicon, organization, and supporting ideas instead of only one holistic score, to help students distinguish between various dimensions of writing quality. In this work, we present a bilevel learning model for combining two objectives, the multidimensional automated scoring, and the students’ writing structure analysis and interpretation. The dual objectives are enabled by a supervised model, called Latent Dirichlet Allocation Multitask Learning (LDAMTL), integrating a topic model and a multitask learning model with an attention mechanism. Two empirical data sets were employed to indicate LDAMTL model performance. On one hand, results suggested that LDAMTL owns better scoring and QW-<i>κ</i> values than two other competitor models, the supervised latent Dirichlet allocation, and Bidirectional Encoder Representations from Transformers at the 5% significance level. On the other hand, extracted topic structures revealed that students with a higher language score tended to employ more compelling words to support the argument in their answers. 
This study suggested that LDAMTL not only demonstrates the model performance by conjugating the underlying shared representation of each topic and learned representation from the neural networks but also helps understand students’ writing.</p>\",\"PeriodicalId\":47345,\"journal\":{\"name\":\"Educational Measurement-Issues and Practice\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2023-03-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Educational Measurement-Issues and Practice\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/emip.12550\",\"RegionNum\":4,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational Measurement-Issues and Practice","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/emip.12550","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Bilevel Topic Model-Based Multitask Learning for Constructed-Responses Multidimensional Automated Scoring and Interpretation
Multidimensional scoring evaluates each constructed-response answer on more than one rating dimension or trait, such as lexicon, organization, and supporting ideas, rather than assigning a single holistic score, helping students distinguish among the various dimensions of writing quality. In this work, we present a bilevel learning model that combines two objectives: multidimensional automated scoring and the analysis and interpretation of students' writing structure. Both objectives are served by a supervised model, called Latent Dirichlet Allocation Multitask Learning (LDAMTL), which integrates a topic model with an attention-based multitask learning model. Two empirical data sets were used to evaluate LDAMTL's performance. On one hand, results suggested that LDAMTL achieves better scoring accuracy and quadratic-weighted kappa (QW-κ) values than two competitor models, supervised latent Dirichlet allocation and Bidirectional Encoder Representations from Transformers (BERT), at the 5% significance level. On the other hand, the extracted topic structures revealed that students with higher language scores tended to use more compelling words to support the arguments in their answers. This study suggests that LDAMTL not only achieves strong model performance by combining each topic's underlying shared representation with the representations learned by the neural networks, but also helps in understanding students' writing.
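The bilevel idea in the abstract can be illustrated with a minimal, hypothetical sketch: a topic model supplies a shared document-topic representation, and one task-specific predictor per rating dimension scores responses on top of it, with each dimension evaluated by quadratic-weighted kappa (QW-κ). The toy corpus, toy scores, and the use of `LatentDirichletAllocation` plus `Ridge` heads are illustrative assumptions, not the authors' LDAMTL implementation (which uses a supervised topic model, neural networks, and an attention mechanism).

```python
# Hypothetical sketch of the bilevel setup: shared LDA topic features,
# one scoring head per rating dimension, QW-kappa evaluation.
# All data and model choices below are illustrative, not LDAMTL itself.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score

# Toy constructed responses (hypothetical data).
docs = [
    "the argument is supported by strong evidence and clear examples",
    "weak ideas with little organization or support",
    "compelling evidence organized into a persuasive argument",
    "short answer with no supporting ideas",
] * 5
# Toy scores (0-3) on two rating dimensions per response.
targets = {
    "organization": np.array([3, 1, 3, 0] * 5),
    "support": np.array([3, 0, 3, 1] * 5),
}

# Level 1: topic model produces a shared document-topic representation.
X_counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X_counts)  # each row is a topic distribution summing to 1

# Level 2: one task-specific head per scoring dimension on the shared features.
heads = {name: Ridge().fit(theta, y) for name, y in targets.items()}

# Evaluate each dimension with quadratic-weighted kappa, the metric
# (QW-kappa) reported in the abstract.
qwks = {}
for name, y in targets.items():
    pred = np.clip(np.rint(heads[name].predict(theta)), 0, 3).astype(int)
    qwks[name] = cohen_kappa_score(y, pred, weights="quadratic")
    print(name, round(qwks[name], 2))
```

In LDAMTL the two levels are trained jointly and supervised, so the topic representation is shaped by the scoring tasks; this sketch fits them sequentially only to keep the structure visible.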