Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education: Latest Publications

Developing a pedagogical framework for designing a multisensory serious gaming environment
S. Price, S. Duffy, M. Gori
DOI: 10.1145/3139513.3139517 | Published: 2017-11-13
Abstract: The importance of multisensory interaction for learning has increased with improved understanding of children's sensory development, and a flourishing interest in embodied cognition. The potential to foster new forms of multisensory interaction through various sensor, mobile and haptic technologies is promising in providing new ways for young children to engage with key mathematical concepts. However, designing effective learning environments for real world classrooms is challenging, and requires a pedagogically, rather than technologically, driven approach to design. This paper describes initial work underpinning the development of a pedagogical framework, intended to inform the design of a multisensory serious gaming environment. It identifies the theoretical basis of the framework, illustrates how this informs teaching strategies, and outlines key technology research driven perspectives and considerations important for informing design. An initial table mapping mathematical concepts to design, a framework of considerations for design, and a process model of how the framework will continue to be developed across the design process are provided.
Citations: 14
Automatic generation of actionable feedback towards improving social competency in job interviews
S. Nambiar, Rahul Das, Sowmya Rasipuram, D. Jayagopi
DOI: 10.1145/3139513.3139515 | Published: 2017-11-13
Abstract: Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of the candidate's compatibility with the work environment, their negotiation skills, client interaction prowess and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation. In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback that leaves the candidate with a tangible takeaway at the end of the interview. We approached placement trainers, compiled a list of the most common feedback given during training, and attempt to predict that feedback directly. To this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues, so as to reduce the task of addressing the challenges of spoken language understanding and task modelling. We extracted audio and lexical features, and our findings indicate a stronger correlation with audio and prosodic features in candidate assessment. Our best results gave an accuracy of 95% when the baseline accuracy was 77%.
Citations: 2
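As a rough illustration of the kind of pipeline the abstract describes (prosodic and audio features feeding a supervised classifier that outputs a feedback label), the sketch below uses scikit-learn. The CSV layout, feature columns and label set are assumptions for demonstration, not the authors' setup.

```python
# Minimal sketch: predict an actionable-feedback label from prosodic features.
# File name, column names and labels are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per interview response, prosodic features plus
# a manually annotated feedback label (e.g. "speak_louder", "reduce_pauses").
df = pd.read_csv("interview_features.csv")
X = df[["pitch_mean", "pitch_std", "energy_mean",
        "speaking_rate", "pause_ratio"]]          # assumed feature columns
y = df["feedback_label"]                          # assumed annotation column

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)         # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```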
Evaluation of audio-based feedback technologies for bow learning technique in violin beginners
A. Blanco, R. Ramírez
DOI: 10.1145/3139513.3139520 | Published: 2017-11-13
Abstract: We present a study of the effects of feedback technologies on the learning process of novice violin students. Twenty-one subjects participated in our experiment, divided into two groups: beginners (participants with no prior violin playing experience, N=14) and experts (participants with more than 6 years of violin playing experience, N=7). The beginners group was further divided into two: a group of beginners learning with YouTube videos (N=7), and a group of beginners receiving additional feedback related to the quality of their performance (N=7). Participants were asked to perform a violin exercise over 21 trials while their audio was recorded and analyzed. Three audio descriptors were extracted from each recording in order to evaluate the quality of the performance: dynamic stability, pitch stability and aperiodicity. Beginners showed a significant improvement in the quality of the recorded sound during the session (i.e. comparing the beginning and the end of the session), while experts maintained their results. However, only the beginner group with feedback showed significant improvement between the middle and late parts of the session, while the group without feedback remained stable.
Citations: 2
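The three descriptors named in the abstract could be approximated along the lines of the following librosa sketch. The abstract does not give the authors' exact formulas, so these definitions are illustrative assumptions.

```python
# Rough sketch: approximate dynamic stability, pitch stability and
# aperiodicity for a violin recording. Definitions are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("violin_take.wav", sr=None)   # assumed recording

# Dynamic stability: inverse of the relative variability of frame-wise RMS energy.
rms = librosa.feature.rms(y=y)[0]
dynamic_stability = 1.0 / (np.std(rms) / (np.mean(rms) + 1e-9) + 1e-9)

# Pitch stability and aperiodicity from probabilistic YIN pitch tracking.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("G3"), fmax=librosa.note_to_hz("E7"), sr=sr)
pitch_stability = 1.0 / (np.nanstd(f0) + 1e-9)      # less f0 spread = more stable
aperiodicity = 1.0 - np.nanmean(voiced_prob)        # low voicing confidence = noisier tone

print(dynamic_stability, pitch_stability, aperiodicity)
```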
Angle discrimination by walking in children
L. Cuturi, G. Cappagli, M. Gori
DOI: 10.1145/3139513.3139516 | Published: 2017-11-13
Abstract: In primary school, children tend to have difficulties discriminating angles of different degrees and categorizing them as either acute or obtuse, especially at the first stages of development (6-7 years old). In the context of a novel approach that intends to use sensory modalities other than vision to teach geometrical concepts, we ran a psychophysical study investigating angle perception through spatial navigation by walking. Our results show that the youngest group of children tends to be more imprecise when asked to discriminate a walked angle of 90°, which is pivotal for learning how to differentiate between acute and obtuse angles. These results are then discussed in terms of the development of novel technological solutions aimed at integrating locomotion into the teaching of geometrical concepts.
Citations: 1
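A common way to quantify discrimination precision in psychophysical tasks of this kind is to fit a cumulative-Gaussian psychometric function to the proportion of "larger than 90°" responses. The sketch below illustrates this with made-up response data; it is not the authors' analysis.

```python
# Illustrative sketch: estimate angle-discrimination precision around 90 deg
# by fitting a cumulative Gaussian. All data values are fabricated examples.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

angles = np.array([70, 80, 85, 90, 95, 100, 110])                 # walked angles (deg)
p_larger = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.85, 0.95])   # proportion "larger" responses

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, angles, p_larger, p0=[90, 10])
print(f"point of subjective equality: {mu:.1f} deg, discrimination spread: {sigma:.1f} deg")
```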
Differences of online learning behaviors and eye-movement between students having different personality traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, Yongkang Xiao
DOI: 10.1145/3139513.3139527 | Published: 2017-11-13
Abstract: Information technologies are now integrated into education, so that mass data reflecting each action of students in online environments is available, and numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students of each personality trait on a learning analytics dashboard (LAD), and we present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and use it to analyze the significant differences among students having different personality traits. In the experiments, fifteen behavioral indicators are tested. The results show significant differences in some behavioral indicators among personality traits. Second, some of these behavioral indicators are presented on the LAD and distributed across areas of interest (AOIs), so that students can visualize the behavioral data they care about in AOIs at any time during the learning process. Through the analysis of eye movement, including fixation duration, fixation count, heat maps and track maps, we found significant differences in some visual indicators in AOIs. This is partly consistent with the results for the behavioral indicators.
Citations: 6
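For a single behavioral indicator, the significance testing the abstract refers to could look like the one-way ANOVA sketch below; the data file and column names are assumptions for illustration.

```python
# Minimal sketch: test whether a behavioral indicator differs across
# personality-trait groups. File and column names are hypothetical.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("learning_behaviors.csv")              # assumed log export
groups = [g["video_watch_time"].values                  # assumed indicator column
          for _, g in df.groupby("personality_trait")]  # assumed trait column
stat, p = f_oneway(*groups)
print(f"F = {stat:.2f}, p = {p:.4f}")                   # p < .05 -> significant difference
```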
Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education
G. Volpe, M. Gori, N. Bianchi-Berthouze, G. Baud-Bovy, Paolo Alborno, Erica Volta
DOI: 10.1145/3139513 | Published: 2017-11-13
Citations: 1
A multimodal LEGO®-based learning activity mixing musical notation and computer programming
L. A. Ludovico, D. Malchiodi, L. Zecca
DOI: 10.1145/3139513.3139519 | Published: 2017-11-13
Abstract: This paper discusses a multimodal learning activity based on LEGO® bricks in which elements from the domains of music and informatics are mixed. The experience addresses preschool-age children and primary school students, in order to convey some basic aspects of computational thinking. The learning methodology is organized in two phases, in which construction blocks are employed as a physical tool and as a metaphor for music notation, respectively. The goal is to foster in young students abilities such as analysis and re-synthesis, problem solving, abstraction and adaptive reasoning. A web application to support this approach and to provide prompt feedback on user actions is under development; its design principles and key characteristics are presented.
Citations: 5
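To make the brick-as-notation metaphor concrete, a toy mapping from brick length (in studs) to note duration might look like the sketch below. The specific mapping is an assumption for illustration, not the paper's web application.

```python
# Toy sketch: interpret a row of LEGO brick lengths as a rhythm.
# The studs-to-duration mapping is a hypothetical example.
STUD_TO_DURATION = {1: "eighth", 2: "quarter", 4: "half", 8: "whole"}

def bricks_to_rhythm(brick_lengths):
    """Translate a row of brick lengths (in studs) into note duration names."""
    return [STUD_TO_DURATION.get(n, f"unsupported ({n} studs)") for n in brick_lengths]

print(bricks_to_rhythm([2, 2, 4, 1, 1, 2]))
# ['quarter', 'quarter', 'half', 'eighth', 'eighth', 'quarter']
```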
What cognitive and affective states should technology monitor to support learning?
Temitayo A. Olugbade, L. Cuturi, G. Cappagli, Erica Volta, Paolo Alborno, Joseph W. Newbold, N. Bianchi-Berthouze, G. Baud-Bovy, G. Volpe, M. Gori
DOI: 10.1145/3139513.3139522 | Published: 2017-11-13
Abstract: This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project, aiming at an embodied approach to the design of technology to support exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
Citations: 6
Predicting student engagement in classrooms using facial behavioral cues
Chinchu Thomas, D. Jayagopi
DOI: 10.1145/3139513.3139514 | Published: 2017-11-13
Abstract: Student engagement is key to successful classroom learning, and measuring or analyzing the engagement of students is very important for improving learning as well as teaching. In this work, we analyze the engagement or attention level of students from their facial expressions, head pose and eye gaze using computer vision techniques, and a decision is made using machine learning algorithms. Since human observers are able to distinguish attention levels well from a student's facial expressions, head pose and eye gaze, we assume that a machine will also be able to learn this behavior automatically. The engagement level is analyzed on 10-second video clips. The performance of the algorithm is better than the baseline results; our best accuracy results are 10% better than the baseline. The paper also gives a detailed review of work related to the analysis of student engagement in a classroom using vision-based techniques.
Citations: 74
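A minimal sketch of this kind of pipeline, assuming frame-level head-pose and gaze features from an external tracker and per-clip engagement labels, could look as follows. The file layout, column names and classifier choice are illustrative assumptions, not the authors' exact method.

```python
# Sketch: aggregate frame-level facial cues over each 10-second clip and
# train a classifier on clip-level engagement labels. Columns are hypothetical.
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

frames = pd.read_csv("frame_features.csv")        # assumed per-frame tracker output
labels = pd.read_csv("clip_labels.csv")           # assumed per-clip engagement labels

# Summarize each clip with mean/std statistics of the frame-level cues.
agg = frames.groupby("clip_id")[["head_pitch", "head_yaw", "gaze_x", "gaze_y"]] \
            .agg(["mean", "std"])
agg.columns = ["_".join(c) for c in agg.columns]
data = agg.join(labels.set_index("clip_id"))

X, y = data.drop(columns=["engagement"]), data["engagement"]
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```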
Bowing modeling for violin students assistance
F. Ortega, Sergio I. Giraldo, R. Ramírez
DOI: 10.1145/3139513.3139525 | Published: 2017-11-13
Abstract: Though musicians tend to agree on the importance of practicing expressivity in performance, not many tools and techniques are available for the task. A machine learning model is proposed for predicting bowing velocity during performances of violin pieces. Our aim is to provide feedback to violin students in a technology-enhanced learning setting. Predictions are generated for musical phrases in a score by matching them to melodically and rhythmically similar phrases in performances by experts and adapting the bow velocity curve measured in the experts' performances. Results show that the mean error in velocity predictions and the bowing direction classification accuracy outperform our baseline when reference phrases similar to the predicted ones are available.
Citations: 1
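The matching-and-adaptation idea could be sketched as follows: pick the most similar reference phrase and resample its measured bow-velocity curve to the target phrase's length. The phrase representation and distance below are simplified assumptions, not the authors' model.

```python
# Sketch: borrow a bow-velocity curve from the most similar expert phrase.
# Phrases are represented as MIDI pitch sequences for simplicity (assumption).
import numpy as np

def phrase_distance(a, b):
    """Naive melodic distance: mean absolute pitch difference over the shorter phrase."""
    n = min(len(a), len(b))
    return float(np.mean(np.abs(np.asarray(a[:n]) - np.asarray(b[:n]))))

def predict_bow_velocity(target_phrase, reference_phrases, reference_velocities, target_len):
    """Return a velocity curve for the target phrase by resampling the curve
    of the closest reference phrase to target_len samples."""
    dists = [phrase_distance(target_phrase, ref) for ref in reference_phrases]
    best = int(np.argmin(dists))
    v = np.asarray(reference_velocities[best], dtype=float)
    x_old = np.linspace(0, 1, len(v))
    x_new = np.linspace(0, 1, target_len)
    return np.interp(x_new, x_old, v)
```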