Linxuan Zhao, Lixiang Yan, D. Gašević, S. Dix, Hollie Jaggard, Rosie Wotherspoon, Riordan Alfredo, Xinyu Li, Roberto Martínez Maldonado
LAK22: 12th International Learning Analytics and Knowledge Conference · 21 March 2022 · DOI: 10.1145/3506860.3506935
Modelling Co-located Team Communication from Voice Detection and Positioning Data in Healthcare Simulation
In co-located situations, team members use a combination of verbal and visual signals to communicate effectively, among which positional forms play a key role. The spatial patterns adopted by team members, in terms of where in the physical space they stand and whom their bodies are oriented towards, can be key to analysing and improving the quality of interaction in such face-to-face situations. In this paper, we model students' communication based on spatial (positioning) and audio (voice detection) data captured from 92 students working in teams of four in the context of healthcare simulation. We extract non-verbal events (i.e., total speaking time, overlapped speech, and speech responses to team members and teachers) and investigate to what extent they can serve as meaningful indicators of students' performance according to teachers' learning intentions. The contributions of this paper to multimodal learning analytics include: i) a generic method to semi-automatically model communication in a setting where students can move freely in the learning space; and ii) results from a mixed-methods analysis of non-verbal indicators of team communication with respect to teachers' learning design.
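To illustrate the kind of non-verbal events the abstract describes, two of them (total speaking time and overlapped speech) can be computed from per-speaker voice-activity intervals. The sketch below is a minimal illustration, not the authors' implementation: the `(start, end)` interval representation, function names, and sample data are all assumptions.

```python
def total_speaking_time(intervals):
    """Sum the durations of one speaker's (start, end) voice-activity
    intervals, in seconds."""
    return sum(end - start for start, end in intervals)

def overlapped_speech(intervals_a, intervals_b):
    """Total time during which two speakers' voice-activity intervals
    overlap (both detected as speaking simultaneously)."""
    total = 0.0
    for a_start, a_end in intervals_a:
        for b_start, b_end in intervals_b:
            # Overlap of two intervals: positive part of the intersection.
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

# Hypothetical voice-activity intervals (seconds) for two team members.
alice = [(0.0, 5.0), (10.0, 12.0)]
bob = [(4.0, 8.0), (11.0, 15.0)]

print(total_speaking_time(alice))     # 7.0
print(overlapped_speech(alice, bob))  # 2.0
```

In practice such intervals would come from a voice-activity-detection step over each participant's audio channel; the paper's semi-automatic pipeline additionally links these events to positioning data, which this fragment does not attempt.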