{"title":"Student Performance Evaluation of Multimodal Learning via a Vector Space Model","authors":"Subhasree Basu, Yi Yu, Roger Zimmermann","doi":"10.1145/2661714.2661723","DOIUrl":null,"url":null,"abstract":"Multimodal learning, as an effective method to helping students understand complex concepts, has attracted much research interest recently. Our motivation of this work is very intuitive: we want to evaluate student performance of multimodal learning over the Internet. We are developing a system for student performance evaluation which can automatically collect student-generated multimedia data during online multimodal learning and analyze student performance. As our initial step, we propose to make use of a vector space model to process student-generated multimodal data, aiming at evaluating student performance by exploring all annotation information. In particular, the area of a study material is represented as a 2-dimensional grid and predefined attributes form an attribute space. Then, annotations generated by students are mapped to a 3-dimensional indicator matrix, 2-dimensions corresponding to object positions in the grid of the study material and a third dimension recording attributes of objects. Then, recall, precision and Jaccard index are used as metrics to evaluate student performance, given the teacher's analysis as the ground truth. We applied our scheme to real datasets generated by students and teachers in two schools. The results are encouraging and confirm the effectiveness of the proposed approach to student performance evaluation in multimodal learning.","PeriodicalId":365687,"journal":{"name":"WISMM '14","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"WISMM '14","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2661714.2661723","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Multimodal learning, an effective method for helping students understand complex concepts, has attracted considerable research interest recently. Our motivation for this work is intuitive: we want to evaluate student performance in multimodal learning over the Internet. We are developing a system for student performance evaluation that can automatically collect student-generated multimedia data during online multimodal learning and analyze student performance. As an initial step, we propose using a vector space model to process student-generated multimodal data, with the aim of evaluating student performance by exploiting all annotation information. In particular, the area of a study material is represented as a 2-dimensional grid, and predefined attributes form an attribute space. Annotations generated by students are then mapped to a 3-dimensional indicator matrix, with two dimensions corresponding to object positions in the grid of the study material and the third dimension recording the attributes of the objects. Recall, precision, and the Jaccard index are then used as metrics to evaluate student performance, with the teacher's analysis serving as the ground truth. We applied our scheme to real datasets generated by students and teachers at two schools. The results are encouraging and confirm the effectiveness of the proposed approach to student performance evaluation in multimodal learning.
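The evaluation step described in the abstract can be illustrated with a minimal sketch (not the authors' code): student and teacher annotations are mapped to 3-dimensional binary indicator matrices of shape (rows, cols, attributes), and recall, precision, and the Jaccard index are computed with the teacher's matrix as ground truth. The grid resolution, attribute count, and annotation tuples below are illustrative assumptions.

```python
import numpy as np

# Assumed grid resolution and number of predefined attributes (hypothetical values).
ROWS, COLS, N_ATTRS = 10, 10, 5

def indicator_matrix(annotations):
    """Build a binary indicator matrix from (row, col, attribute_index) tuples."""
    m = np.zeros((ROWS, COLS, N_ATTRS), dtype=bool)
    for r, c, a in annotations:
        m[r, c, a] = True
    return m

def evaluate(student, teacher):
    """Compute recall, precision, and Jaccard index of a student's matrix
    against the teacher's matrix (ground truth)."""
    tp = np.logical_and(student, teacher).sum()   # annotations marked by both
    fp = np.logical_and(student, ~teacher).sum()  # student-only annotations
    fn = np.logical_and(~student, teacher).sum()  # teacher-only annotations
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return recall, precision, jaccard

# Hypothetical example: each annotation is a (row, col, attribute_index) tuple.
teacher = indicator_matrix([(1, 2, 0), (3, 4, 1), (5, 6, 2)])
student = indicator_matrix([(1, 2, 0), (3, 4, 3), (5, 6, 2)])
print(evaluate(student, teacher))  # -> (0.666..., 0.666..., 0.5)
```

In this sketch the three metrics are computed over the flattened indicator matrices, which matches the idea of comparing a student's annotation set against the teacher's annotation set element-wise.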