O. Ogunseiju, Nihar J. Gonsalves, A. Akanmu, Yewande Abraham, C. Nnaji
{"title":"在混合现实学习环境中从眼动追踪数据中自动检测学习阶段和互动难度","authors":"O. Ogunseiju, Nihar J. Gonsalves, A. Akanmu, Yewande Abraham, C. Nnaji","doi":"10.1108/sasbe-07-2022-0129","DOIUrl":null,"url":null,"abstract":"PurposeConstruction companies are increasingly adopting sensing technologies like laser scanners, making it necessary to upskill the future workforce in this area. However, limited jobsite access hinders experiential learning of laser scanning, necessitating the need for an alternative learning environment. Previously, the authors explored mixed reality (MR) as an alternative learning environment for laser scanning, but to promote seamless learning, such learning environments must be proactive and intelligent. Toward this, the potentials of classification models for detecting user difficulties and learning stages in the MR environment were investigated in this study.Design/methodology/approachThe study adopted machine learning classifiers on eye-tracking data and think-aloud data for detecting learning stages and interaction difficulties during the usability study of laser scanning in the MR environment.FindingsThe classification models demonstrated high performance, with neural network classifier showing superior performance (accuracy of 99.9%) during the detection of learning stages and an ensemble showing the highest accuracy of 84.6% for detecting interaction difficulty during laser scanning.Research limitations/implicationsThe findings of this study revealed that eye movement data possess significant information about learning stages and interaction difficulties and provide evidence of the potentials of smart MR environments for improved learning experiences in construction education. The research implication further lies in the potential of an intelligent learning environment for providing personalized learning experiences that often culminate in improved learning outcomes. 
This study further highlights the potential of such an intelligent learning environment in promoting inclusive learning, whereby students with different cognitive capabilities can experience learning tailored to their specific needs irrespective of their individual differences.Originality/valueThe classification models will help detect learners requiring additional support to acquire the necessary technical skills for deploying laser scanners in the construction industry and inform the specific training needs of users to enhance seamless interaction with the learning environment.","PeriodicalId":45779,"journal":{"name":"Smart and Sustainable Built Environment","volume":" ","pages":""},"PeriodicalIF":3.5000,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated detection of learning stages and interaction difficulty from eye-tracking data within a mixed reality learning environmen\",\"authors\":\"O. Ogunseiju, Nihar J. Gonsalves, A. Akanmu, Yewande Abraham, C. Nnaji\",\"doi\":\"10.1108/sasbe-07-2022-0129\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"PurposeConstruction companies are increasingly adopting sensing technologies like laser scanners, making it necessary to upskill the future workforce in this area. However, limited jobsite access hinders experiential learning of laser scanning, necessitating the need for an alternative learning environment. Previously, the authors explored mixed reality (MR) as an alternative learning environment for laser scanning, but to promote seamless learning, such learning environments must be proactive and intelligent. 
Toward this, the potentials of classification models for detecting user difficulties and learning stages in the MR environment were investigated in this study.Design/methodology/approachThe study adopted machine learning classifiers on eye-tracking data and think-aloud data for detecting learning stages and interaction difficulties during the usability study of laser scanning in the MR environment.FindingsThe classification models demonstrated high performance, with neural network classifier showing superior performance (accuracy of 99.9%) during the detection of learning stages and an ensemble showing the highest accuracy of 84.6% for detecting interaction difficulty during laser scanning.Research limitations/implicationsThe findings of this study revealed that eye movement data possess significant information about learning stages and interaction difficulties and provide evidence of the potentials of smart MR environments for improved learning experiences in construction education. The research implication further lies in the potential of an intelligent learning environment for providing personalized learning experiences that often culminate in improved learning outcomes. 
This study further highlights the potential of such an intelligent learning environment in promoting inclusive learning, whereby students with different cognitive capabilities can experience learning tailored to their specific needs irrespective of their individual differences.Originality/valueThe classification models will help detect learners requiring additional support to acquire the necessary technical skills for deploying laser scanners in the construction industry and inform the specific training needs of users to enhance seamless interaction with the learning environment.\",\"PeriodicalId\":45779,\"journal\":{\"name\":\"Smart and Sustainable Built Environment\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2023-01-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart and Sustainable Built Environment\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/sasbe-07-2022-0129\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"GREEN & SUSTAINABLE SCIENCE & TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart and Sustainable Built Environment","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/sasbe-07-2022-0129","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"GREEN & SUSTAINABLE SCIENCE & TECHNOLOGY","Score":null,"Total":0}
Automated detection of learning stages and interaction difficulty from eye-tracking data within a mixed reality learning environment
Purpose — Construction companies are increasingly adopting sensing technologies such as laser scanners, making it necessary to upskill the future workforce in this area. However, limited jobsite access hinders experiential learning of laser scanning, necessitating an alternative learning environment. The authors previously explored mixed reality (MR) as an alternative learning environment for laser scanning, but to promote seamless learning, such environments must be proactive and intelligent. Toward this, the study investigated the potential of classification models for detecting user difficulties and learning stages in the MR environment.

Design/methodology/approach — The study applied machine learning classifiers to eye-tracking data and think-aloud data to detect learning stages and interaction difficulties during a usability study of laser scanning in the MR environment.

Findings — The classification models demonstrated high performance: a neural network classifier performed best in detecting learning stages (accuracy of 99.9%), and an ensemble classifier achieved the highest accuracy (84.6%) in detecting interaction difficulty during laser scanning.

Research limitations/implications — The findings revealed that eye movement data carry significant information about learning stages and interaction difficulties, and they provide evidence of the potential of smart MR environments to improve learning experiences in construction education. The research implication further lies in the potential of an intelligent learning environment to provide personalized learning experiences, which often culminate in improved learning outcomes. The study also highlights the potential of such an environment to promote inclusive learning, whereby students with different cognitive capabilities can experience learning tailored to their specific needs irrespective of their individual differences.

Originality/value — The classification models will help detect learners requiring additional support to acquire the technical skills needed to deploy laser scanners in the construction industry, and they inform the specific training needs of users to enhance seamless interaction with the learning environment.
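The classification setup described in the abstract — a neural network for learning-stage detection and an ensemble model for interaction-difficulty detection, both trained on eye-tracking features — can be sketched as follows. This is a minimal illustrative sketch only: the feature names (fixation duration, saccade amplitude, pupil diameter), the synthetic labels, and the specific model configurations are assumptions for demonstration, not the paper's actual pipeline or data.

```python
# Hypothetical sketch of the study's two classification tasks:
# (1) a neural network detecting learning stage,
# (2) an ensemble detecting interaction difficulty,
# both from eye-tracking features. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Illustrative eye-tracking features: fixation duration,
# saccade amplitude, pupil diameter (synthetic values).
X = rng.normal(size=(n, 3))
stage = (X[:, 0] > 0).astype(int)                  # synthetic learning-stage label
difficulty = (X[:, 1] + X[:, 2] > 0).astype(int)   # synthetic difficulty label

# Task 1: neural network classifier for learning stage.
X_tr, X_te, y_tr, y_te = train_test_split(X, stage, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                   random_state=0).fit(X_tr, y_tr)
print("learning-stage accuracy:", accuracy_score(y_te, nn.predict(X_te)))

# Task 2: ensemble classifier for interaction difficulty.
X_tr, X_te, y_tr, y_te = train_test_split(X, difficulty, random_state=0)
ens = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("difficulty accuracy:", accuracy_score(y_te, ens.predict(X_te)))
```

In practice, the features would be aggregated eye-movement metrics per interaction segment, and labels would come from the think-aloud annotations; the sketch only shows the train/evaluate structure common to both tasks.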