{"title":"向OpenAnswer的同行评估贝叶斯模型添加属性依赖关系","authors":"M. De Marsico, A. Sterbini, M. Temperini","doi":"10.1109/ITHET.2014.7155692","DOIUrl":null,"url":null,"abstract":"Peer-assessment can be used to evaluate the knowledge level achieved by learners, while exposing them to a significant leaning activity at the same time. Here we see an approach to semi-automatic grading of school works. It is based on peer-assessment of answers to open ended questions (“open answers”), supported by the teacher grading activity performed on a portion of the answers. The methodology we present is based on a representation of student model and answers through Bayesian networks. It supports grading in a six-values scale (the widely used “A” to “F” scale). The experiments we present test the possibility to model the fact that knowledge required to perform some work is preparatory to that required for a subsequent one. The experiments have been conducted using the web-based system OpenAnswer and were meant to collect datasets, to be exploited in order to evaluate the various algorithms, settings and termination conditions better to be applied in order to have a reliable set of grading out of the learners' peer-assessment process and the teacher's grading work (with the latter limited to a significantly limited percentage of the answers to be graded).","PeriodicalId":432693,"journal":{"name":"2014 Information Technology Based Higher Education and Training (ITHET)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Adding propedeuticy dependencies to the OpenAnswer Bayesian model of peer-assessment\",\"authors\":\"M. De Marsico, A. Sterbini, M. 
Temperini\",\"doi\":\"10.1109/ITHET.2014.7155692\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Peer-assessment can be used to evaluate the knowledge level achieved by learners, while exposing them to a significant leaning activity at the same time. Here we see an approach to semi-automatic grading of school works. It is based on peer-assessment of answers to open ended questions (“open answers”), supported by the teacher grading activity performed on a portion of the answers. The methodology we present is based on a representation of student model and answers through Bayesian networks. It supports grading in a six-values scale (the widely used “A” to “F” scale). The experiments we present test the possibility to model the fact that knowledge required to perform some work is preparatory to that required for a subsequent one. The experiments have been conducted using the web-based system OpenAnswer and were meant to collect datasets, to be exploited in order to evaluate the various algorithms, settings and termination conditions better to be applied in order to have a reliable set of grading out of the learners' peer-assessment process and the teacher's grading work (with the latter limited to a significantly limited percentage of the answers to be graded).\",\"PeriodicalId\":432693,\"journal\":{\"name\":\"2014 Information Technology Based Higher Education and Training (ITHET)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 Information Technology Based Higher Education and Training 
(ITHET)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITHET.2014.7155692\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 Information Technology Based Higher Education and Training (ITHET)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITHET.2014.7155692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adding propedeuticy dependencies to the OpenAnswer Bayesian model of peer-assessment
Peer-assessment can be used to evaluate the knowledge level achieved by learners while, at the same time, engaging them in a significant learning activity. Here we present an approach to the semi-automatic grading of school work. It is based on peer-assessment of answers to open-ended questions ("open answers"), supported by teacher grading performed on a portion of those answers. The methodology we present represents the student model and the answers through Bayesian networks, and supports grading on a six-value scale (the widely used "A" to "F" scale). The experiments we present test the possibility of modeling the fact that the knowledge required to perform some work is preparatory (propaedeutic) to the knowledge required for a subsequent one. The experiments were conducted with the web-based system OpenAnswer and were meant to collect datasets with which to evaluate which algorithms, settings, and termination conditions are best applied in order to obtain a reliable set of grades from the learners' peer-assessment process combined with the teacher's grading work (with the latter limited to a small percentage of the answers to be graded).
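To make the idea of a propaedeutic dependency concrete, the following is a minimal, hypothetical sketch (not the authors' OpenAnswer implementation): two knowledge nodes on the six-value "A"–"F" scale are linked by a conditional probability table P(K2 | K1) under which mastery of the later topic K2 is unlikely to exceed mastery of its prerequisite K1. All function names and the `hard`/`soft` penalty parameters are illustrative assumptions.

```python
GRADES = ["A", "B", "C", "D", "E", "F"]  # index 0 = best grade

def propaedeutic_cpt(hard=0.05, soft=0.5):
    """Build P(K2 | K1). Rows: prerequisite grade K1; columns: dependent
    grade K2. A K2 grade better than K1 gets weight hard**gap (strong
    penalty, encoding the prerequisite relation); grades equal to or
    worse than K1 decay gently with soft**gap. Each row is normalized."""
    cpt = []
    for i in range(len(GRADES)):
        weights = [hard ** (i - j) if j < i else soft ** (j - i)
                   for j in range(len(GRADES))]
        total = sum(weights)
        cpt.append([w / total for w in weights])
    return cpt

def marginal_k2(prior_k1, cpt):
    """P(K2) = sum over g1 of P(K1 = g1) * P(K2 | K1 = g1)."""
    return [sum(prior_k1[i] * cpt[i][j] for i in range(len(GRADES)))
            for j in range(len(GRADES))]

if __name__ == "__main__":
    cpt = propaedeutic_cpt()
    # A student believed to be weak on the prerequisite (mass on "E"/"F")...
    prior = [0.0, 0.0, 0.0, 0.0, 0.5, 0.5]
    post = marginal_k2(prior, cpt)
    # ...is predicted to be weak on the dependent topic as well.
    print({g: round(p, 3) for g, p in zip(GRADES, post)})
```

In a full network of the kind the paper describes, such tables would sit alongside the peer-assessment evidence, so that a teacher's grade on a prerequisite answer propagates to beliefs about dependent answers.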