{"title":"用于学生参与度自动评估的视觉转换器","authors":"Sandeep Mandia, Kuldeep Singh, R. Mitharwal","doi":"10.1109/IPAS55744.2022.10052945","DOIUrl":null,"url":null,"abstract":"Availability of the internet and quality of content attracted more learners to online platforms that are stimulated by COVID-19. Students of different cognitive capabilities join the learning process. However, it is challenging for the instructor to identify the level of comprehension of the individual learner, specifically when they waver in responding to feedback. The learner's facial expressions relate to content comprehension and engagement. This paper presents use of the vision transformer (ViT) to model automatic estimation of student engagement by learning the end-to-end features from facial images. The ViT architecture is used to enlarge the receptive field of the architecture by exploiting the multi-head attention operations. The model is trained using various loss functions to handle class imbalance. The ViT is evaluated on Dataset for Affective States in E-Environments (DAiSEE); it outperformed frame level baseline result by approximately 8% and the other two video level benchmarks by 8.78% and 2.78% achieving an overall accuracy of 55.18%. In addition, ViT with focal loss was also able to produce well distribution among classes except for one minority class.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vision Transformer for Automatic Student Engagement Estimation\",\"authors\":\"Sandeep Mandia, Kuldeep Singh, R. Mitharwal\",\"doi\":\"10.1109/IPAS55744.2022.10052945\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Availability of the internet and quality of content attracted more learners to online platforms that are stimulated by COVID-19. Students of different cognitive capabilities join the learning process. However, it is challenging for the instructor to identify the level of comprehension of the individual learner, specifically when they waver in responding to feedback. The learner's facial expressions relate to content comprehension and engagement. This paper presents use of the vision transformer (ViT) to model automatic estimation of student engagement by learning the end-to-end features from facial images. The ViT architecture is used to enlarge the receptive field of the architecture by exploiting the multi-head attention operations. The model is trained using various loss functions to handle class imbalance. The ViT is evaluated on Dataset for Affective States in E-Environments (DAiSEE); it outperformed frame level baseline result by approximately 8% and the other two video level benchmarks by 8.78% and 2.78% achieving an overall accuracy of 55.18%. 
In addition, ViT with focal loss was also able to produce well distribution among classes except for one minority class.\",\"PeriodicalId\":322228,\"journal\":{\"name\":\"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)\",\"volume\":\"138 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IPAS55744.2022.10052945\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPAS55744.2022.10052945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Vision Transformer for Automatic Student Engagement Estimation
The availability of the internet and the quality of online content have attracted more learners to online platforms, a trend further accelerated by COVID-19. Students with different cognitive capabilities join the learning process. However, it is challenging for the instructor to gauge each individual learner's level of comprehension, particularly when learners are hesitant in responding to feedback. A learner's facial expressions relate to content comprehension and engagement. This paper presents the use of a vision transformer (ViT) for automatic estimation of student engagement by learning end-to-end features from facial images. The ViT architecture enlarges the effective receptive field by exploiting multi-head attention operations. The model is trained with various loss functions to handle class imbalance. The ViT is evaluated on the Dataset for Affective States in E-Environments (DAiSEE); it outperforms the frame-level baseline by approximately 8% and two video-level benchmarks by 8.78% and 2.78%, achieving an overall accuracy of 55.18%. In addition, the ViT trained with focal loss produced a well-balanced distribution of predictions across classes, except for one minority class.
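The sketch below illustrates the kind of pipeline the abstract describes: a pre-trained ViT fine-tuned as a classifier over engagement levels from facial images, trained with a focal loss to counter class imbalance. This is not the authors' released code; the backbone (timm's vit_base_patch16_224), the four engagement classes (matching DAiSEE's very low/low/high/very high annotation), and all hyperparameters are assumptions for illustration.

```python
# Minimal sketch (assumed setup, not the paper's implementation): ViT backbone
# fine-tuned for 4-class engagement estimation with a focal loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
import timm  # assumed dependency providing pre-trained ViT backbones


class FocalLoss(nn.Module):
    """Multi-class focal loss: down-weights easy examples so minority
    engagement classes contribute more to the gradient."""

    def __init__(self, gamma=2.0, alpha=None):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha  # optional per-class weight tensor

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, weight=self.alpha, reduction="none")
        pt = torch.exp(-ce)  # probability assigned to the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()


NUM_CLASSES = 4  # DAiSEE engagement levels: very low, low, high, very high
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES)
criterion = FocalLoss(gamma=2.0)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a batch of 224x224 face crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
logits = model(images)          # shape: [8, NUM_CLASSES]
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In this sketch the multi-head self-attention inside the ViT is what gives every patch embedding a global receptive field over the face image, while the focal-loss term (1 - pt)^gamma suppresses the contribution of already well-classified majority-class frames.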