Title: Student Engagement Recognition Using Multimodal Fusion Analytical Technology
Authors: Lijuan Yan, Jia-Hau Xiao, Xiaotao Wu, Xiaoyi Li
DOI: 10.1109/CSTE55932.2022.00049
Published in: 2022 4th International Conference on Computer Science and Technologies in Education (CSTE), May 2022
Citations: 0
Abstract
Evaluating student engagement is essential in educational applications: it is a prerequisite for ensuring teaching quality and for implementing teaching interventions. With the development of the Internet of Things and storage technology, acquiring multimodal data has become increasingly convenient, and considerable research has been devoted to using such data to better understand student engagement. However, a core research issue has not yet been adequately addressed: once a set of modalities has been identified, how do we fuse them in an optimal way for student engagement analysis? In this paper, we propose a feature fusion framework based on the learning process. The framework has two key steps: the extraction of unequal-interval features, and synchronous and asynchronous temporal fusion. In addition, we carried out experimental research in real educational settings, using image data, log data, and text data from online learning to detect different student engagement patterns. The experimental results show the effectiveness and applicability of the framework.
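The abstract does not detail how unequal-interval features are aligned or fused, so the following is only an illustrative sketch, not the authors' method. It assumes three hypothetical modalities (image, log, text features) sampled at irregular timestamps, aligns them to a shared time grid by carrying the most recent observation forward (one common way to handle unequal intervals before synchronous fusion), and also computes a simple per-modality session summary as a stand-in for asynchronous fusion. All array names, dimensions, and the zero-order-hold choice are assumptions for illustration.

```python
import numpy as np

# Hypothetical session data: three modalities sampled at unequal intervals.
# Timestamps are in seconds; feature dimensions are arbitrary.
rng = np.random.default_rng(0)
image_t = np.array([0.0, 5.0, 10.0, 15.0])        # frame features, ~5 s apart
image_f = rng.normal(size=(4, 8))                 # 8-dim image features
log_t = np.array([1.0, 2.5, 9.0, 14.0, 18.0])     # click events, irregular
log_f = rng.normal(size=(5, 4))                   # 4-dim log features
text_t = np.array([3.0, 12.0])                    # chat messages, sparse
text_f = rng.normal(size=(2, 6))                  # 6-dim text features

def resample_to_grid(t, f, grid):
    """Align unequal-interval features to a common time grid by taking
    the most recent observation at or before each grid point
    (zero-order hold); the first observation is used before it occurs."""
    idx = np.searchsorted(t, grid, side="right") - 1
    idx = np.clip(idx, 0, len(t) - 1)
    return f[idx]

# "Synchronous" fusion sketch: time-align all modalities, then
# concatenate feature vectors at each shared time step.
grid = np.arange(0.0, 20.0, 5.0)                  # shared 5 s grid
sync = np.concatenate([resample_to_grid(image_t, image_f, grid),
                       resample_to_grid(log_t, log_f, grid),
                       resample_to_grid(text_t, text_f, grid)], axis=1)
print(sync.shape)                                 # (4, 18): grid steps x fused dims

# "Asynchronous" fusion sketch: summarize each modality over the whole
# session independently, then concatenate the summaries.
asynch = np.concatenate([image_f.mean(axis=0),
                         log_f.mean(axis=0),
                         text_f.mean(axis=0)])
print(asynch.shape)                               # (18,)
```

In practice the fused vectors (`sync` rows or `asynch`) would feed a downstream classifier of engagement patterns; the paper itself does not specify the alignment rule or classifier, so those choices here are placeholders.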