Proceedings of the 2014 ACM Workshop on Multimodal Learning Analytics Workshop and Grand Challenge: Latest Publications

Multimodal Learning Analytics as a Tool for Bridging Learning Theory and Complex Learning Behaviors
M. Worsley
DOI: 10.1145/2666633.2666634 | Published: 2014-11-12
Abstract: The recent emergence of several low-cost, high-resolution, multimodal sensors has made it far easier for researchers to capture rich data across a variety of contexts. Over the past few years, this multimodal technology has begun to receive greater attention within the learning community. Specifically, the Multimodal Learning Analytics community has been capitalizing on new sensor technology, as well as the expansion of tools for supporting computational analysis, in order to better understand and improve student learning in complex learning environments. However, even as data collection and analysis tools have greatly eased the process, a number of considerations and challenges remain in framing research so that it contributes to the development of learning theory. Moreover, there is a multitude of approaches for integrating multimodal data, and each approach carries different assumptions and implications. In this paper, I describe three different types of multimodal analyses and discuss how decisions about data integration and fusion have a significant impact on how the research relates to learning theories.
Citations: 34
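The paper's three analysis types are conceptual, but the fusion decision it highlights can be made concrete in code. The following is a minimal sketch, assuming synthetic two-modality data, contrasting feature-level (early) fusion, which concatenates modality features before classification, with decision-level (late) fusion, which trains one classifier per modality and averages their predictions; none of the names or shapes come from the paper.

# Sketch of two common multimodal integration strategies (illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(100, 12))   # placeholder audio features
X_video = rng.normal(size=(100, 20))   # placeholder video features
y = rng.integers(0, 2, size=100)       # placeholder learning-outcome labels

# Early fusion: concatenate modalities into a single feature vector.
X_early = np.hstack([X_audio, X_video])
early_acc = cross_val_score(LogisticRegression(max_iter=1000), X_early, y, cv=5).mean()

# Late fusion: one model per modality, then average predicted probabilities.
p_audio = cross_val_predict(LogisticRegression(max_iter=1000), X_audio, y,
                            cv=5, method="predict_proba")[:, 1]
p_video = cross_val_predict(LogisticRegression(max_iter=1000), X_video, y,
                            cv=5, method="predict_proba")[:, 1]
late_acc = (((p_audio + p_video) / 2 > 0.5).astype(int) == y).mean()

print(f"early fusion: {early_acc:.2f}, late fusion: {late_acc:.2f}")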
Presentation Skills Estimation Based on Video and Kinect Data Analysis
Vanessa Echeverría, Allan Avendaño, K. Chiluiza, Aníbal Vásquez, X. Ochoa
DOI: 10.1145/2666633.2666641 | Published: 2014-11-12
Abstract: This paper identifies, by means of video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact, and posture and body language. Machine-learning evaluations produced models that predicted the performance level (good or poor) of the presenters with 68% and 63% correctly classified instances for the eye contact and the posture and body language criteria, respectively. Furthermore, the results suggest that certain features, such as arm movement and smoothness, are highly significant for predicting the level of development of presentation skills. The paper closes with conclusions and ideas for future work.
Citations: 46
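As one illustration of the kind of feature the abstract credits (arm movement and smoothness), the sketch below derives a simple smoothness measure, mean jerk magnitude, from a stream of skeleton joint positions such as a Kinect provides. This is a hypothetical operationalization, not the paper's actual feature definition.

# Hypothetical smoothness feature from skeleton data:
# lower mean jerk (third derivative of position) = smoother arm movement.
import numpy as np

def mean_jerk(positions, fps=30.0):
    """positions: (n_frames, 3) array of one joint's x, y, z coordinates."""
    dt = 1.0 / fps
    velocity = np.diff(positions, axis=0) / dt
    accel = np.diff(velocity, axis=0) / dt
    jerk = np.diff(accel, axis=0) / dt
    return np.linalg.norm(jerk, axis=1).mean()

# Toy trajectory: a smooth sweep vs. the same sweep with tremor added.
t = np.linspace(0, 2 * np.pi, 120)
smooth = np.column_stack([np.sin(t), np.cos(t), t])
jittery = smooth + np.random.default_rng(1).normal(scale=0.01, size=smooth.shape)
print(mean_jerk(smooth), mean_jerk(jittery))  # the jittery sweep scores higher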
Combining empirical and machine learning techniques to predict math expertise using pen signal features
Jianlong Zhou, Kevin Hang, S. Oviatt, Kun Yu, Fang Chen
DOI: 10.1145/2666633.2666638 | Published: 2014-11-12
Abstract: Multimodal learning analytics aims to automatically analyze students' natural communication patterns based on speech, writing, and other modalities during learning activities. This research used the Math Data Corpus, which contains time-synchronized multimodal data from collaborating students as they jointly solved problems varying in difficulty. The aim was to investigate how reliably pen signal features, extracted as students wrote with digital pens and paper, could identify which student in a group was the dominant domain expert. An additional aim was to improve prediction of expertise by jointly bootstrapping empirical science and machine learning techniques. To accomplish this, empirical analyses first identified which data partitioning and pen signal features were most reliably associated with expertise. Alternative machine learning techniques then compared classification accuracies based on all pen features versus empirically selected ones. The best unguided classification accuracy was 70.8%, which improved to 83.3% with empirical guidance. These results demonstrate that handwriting signal features can predict domain expertise in math with high reliability. Hybrid methods also can outperform black-box machine learning in both accuracy and transparency.
Citations: 19
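The comparison the abstract reports, unguided classification over all pen features versus classification over an empirically selected subset, can be sketched as follows. The feature matrix, labels, and selected indices are placeholders, not the Math Data Corpus features.

# Sketch: accuracy using all pen features vs. an empirically chosen subset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 90
# Hypothetical per-student pen features; only the first three carry signal here.
X = rng.normal(size=(n, 10))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.8, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_all = cross_val_score(clf, X, y, cv=5).mean()

# "Empirical guidance": keep only features prior analysis flagged as reliable.
selected = [0, 1, 2]   # placeholder indices, e.g. stroke speed, pause count, ...
acc_selected = cross_val_score(clf, X[:, selected], y, cv=5).mean()
print(f"all features: {acc_all:.2f}, selected: {acc_selected:.2f}")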
Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge
X. Ochoa, M. Worsley, K. Chiluiza, S. Luz
DOI: 10.1145/2666633 | Published: 2014-11-12
Abstract: Learning Analytics is the "middle space" where Educational Sciences, Computer Science, Learning Technologies, and Data Science converge. The main goal of this new field is to contribute new empirical findings, theories, methods, and metrics for understanding how students learn, and to use that knowledge to improve those students' learning. Multimodal Learning Analytics, which emphasizes the analysis of natural, rich modalities of communication during situated learning activities, is one of the most challenging but, at times, most promising areas of Learning Analytics. The Third International Workshop on Multimodal Learning Analytics brings together researchers in multimodal interaction and systems, cognitive and learning sciences, educational technologies, and related areas to discuss recent developments and future opportunities in this sub-field.

Following the First International Workshop on Multimodal Learning Analytics in Santa Monica in 2012 and the ICMI Grand Challenge on Multimodal Learning Analytics in Sydney in 2013, this third workshop comprises a mixture of a workshop session and two data-driven grand challenges. The program committee reviewed and accepted the following articles.

The workshop session focuses on the presentation of multimodal signal analysis techniques that could be applied in Multimodal Learning Analytics, with presenters concentrating on the benefits and shortcomings of different research and technical methods used for multimodal analysis of learning signals. This session includes four articles on diverse topics: theoretical and conceptual considerations for different forms of multimodal data fusion; voice analysis to determine the level of rapport in learning exercises; video analysis of live classrooms; and the role of multimodal analysis in the service of studying complex learning environments.

Following the successful experience of the Multimodal Learning Analytics Grand Challenge at ICMI 2013, this year's event provides two data sets with a wealth of research questions to be tackled by interested participants: the Math Data Challenge and the Presentation Quality Challenge. For the Math Data Challenge, one article provides a detailed exploration of how to use digital pen information to predict expertise in the group; this work reaches a high level of accuracy (83%) when identifying the expert student among the participants. For the Presentation Quality Challenge, three articles are presented. The first explores slide presentation files and audio features to predict the grade obtained by each student. The second makes use of all the provided modalities (audio, video, Kinect data, and slide files) and suggests that multimodal cues can predict human scores on presentation tasks. The final article uses the video and Kinect information to predict human grading. The third Multimodal Learning Analytics Workshop and …
Citations: 1
Deciphering the Practices and Affordances of Different Reasoning Strategies through Multimodal Learning Analytics
M. Worsley, Paulo Blikstein
DOI: 10.1145/2666633.2666637 | Published: 2014-11-12
Abstract: Multimodal analysis has demonstrated effectiveness in studying and modeling several human-human and human-computer interactions. In this paper, we explore the role of multimodal analysis in the service of studying complex learning environments. We use a semi-automated multimodal method to examine how students learn in a hands-on, engineering design context. Specifically, we combine audio, gesture, and electro-dermal activation data from a study (N=20) in which students were divided into two experimental conditions. The two conditions, example-based reasoning and principle-based reasoning, have previously been shown to be associated with different learning gains and different levels of design quality. In this paper we study how the two experimental conditions differed in terms of their practices and processes. The practices comprise four common multimodal behaviors, which we have labeled ACTION, TALK, STRESS, and FLOW. Furthermore, we show that individuals from the two experimental conditions differed in their usage of the four common behaviors, both in aggregate and when we model their sequences of actions. Details concerning the data, analytic technique, interpretation, and implications of this research are discussed.
Citations: 15
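One way to make the aggregate-versus-sequence distinction in the abstract concrete: given a per-segment stream of behavior codes (the four labels come from the paper, the data and counting scheme below are invented), compute both overall label frequencies and first-order transition counts.

# Sketch: aggregate frequencies vs. first-order transitions over a coded stream.
from collections import Counter

stream = ["TALK", "TALK", "ACTION", "FLOW", "ACTION", "STRESS", "TALK", "FLOW"]

# Aggregate view: how often each behavior occurs overall.
freq = Counter(stream)

# Sequence view: how often each behavior follows another (bigram counts).
transitions = Counter(zip(stream, stream[1:]))

print(freq)
print(transitions)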
Estimation of Presentations Skills Based on Slides and Audio Features
Gonzalo Luzardo, B. Guamán, K. Chiluiza, Jaime Castells, X. Ochoa
DOI: 10.1145/2666633.2666639 | Published: 2014-11-12
Abstract: This paper proposes a simple estimation of the quality of student oral presentations, based on the study and analysis of features extracted from the audio and digital slides of 448 presentations. The main goal of this work is to automatically predict the values assigned by professors to different criteria in a presentation evaluation rubric. Machine learning methods were used to create several models that classify students into two clusters: high and low performers. The models created from slide features were accurate up to 65%. The most relevant features for the slide-based models were the number of words, images, and tables, and the maximum font size. The audio-based models reached up to 69% accuracy, with pitch- and filled-pause-related features being the most significant. The relatively high accuracy obtained with these very simple features encourages the development of automatic estimation tools for improving presentation skills.
Citations: 30
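A minimal sketch of the slide-based half of this pipeline, assuming the four features named as most relevant (word count, image count, table count, maximum font size) have already been extracted from each deck; the feature values, labels, and classifier choice are illustrative, not the paper's.

# Sketch: classify high vs. low performers from four slide-deck features.
# Extraction from the actual slide files is assumed to have happened upstream.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 120
# Columns: n_words, n_images, n_tables, max_font_size (toy values).
X = np.column_stack([
    rng.integers(50, 600, n),    # total words across the deck
    rng.integers(0, 20, n),      # images
    rng.integers(0, 5, n),       # tables
    rng.integers(18, 54, n),     # max font size in points
]).astype(float)
y = rng.integers(0, 2, n)        # placeholder high/low performer labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())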
Session details: Math Data Corpus Challenge
K. Chiluiza
DOI: 10.1145/3255190 | Published: 2014-11-12
Citations: 0
Session details: Oral Presentation Quality Challenge
M. Worsley
DOI: 10.1145/3255191 | Published: 2014-11-12
Citations: 0
Acoustic-Prosodic Entrainment and Rapport in Collaborative Learning Dialogues
Nichola Lubold, Heather Pon-Barry
DOI: 10.1145/2666633.2666635 | Published: 2014-11-12
Abstract: In spoken dialogue analysis, the speech signal is a rich source of information. We explore in this paper how low-level features of the speech signal, such as pitch, loudness, and speaking rate, can inform a model of student interaction in collaborative learning dialogues. For instance, can we observe the way two people's manners of speaking change over time to model something like rapport? By detecting interaction qualities such as rapport, we can better support collaborative interactions, which have been shown to be highly conducive to learning. We focus on one particular phenomenon of spoken conversation, known as acoustic-prosodic entrainment, in which dialogue partners become more similar to each other in pitch, loudness, or speaking rate over the course of a conversation. We examine whether acoustic-prosodic entrainment is present in a novel corpus of collaborative learning dialogues, how people appear to entrain, and to what degree, and report on the acoustic-prosodic features on which people entrain the most. We then investigate whether entrainment can facilitate detection of rapport, a social quality of the interaction. We find that entrainment does correlate with rapport; speakers appear to entrain primarily by matching their prosody on a turn-by-turn basis, and pitch is the most significant acoustic-prosodic feature people entrain on when rapport is present.
Citations: 81
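The turn-by-turn matching the authors describe can be operationalized in several ways. One common proxy, an assumption here rather than the paper's exact measure, is the absolute difference in mean pitch between the two partners' adjacent turns, with entrainment showing up as that difference shrinking over the dialogue.

# Sketch: turn-level pitch convergence as a crude entrainment proxy.
# pitch_a[i] / pitch_b[i]: mean f0 (Hz) of speaker A's and B's i-th turns.
import numpy as np

def turn_level_convergence(pitch_a, pitch_b):
    """Slope of |pitch_A - pitch_B| over turns; negative suggests convergence."""
    diffs = np.abs(np.asarray(pitch_a, float) - np.asarray(pitch_b, float))
    turns = np.arange(len(diffs))
    return np.polyfit(turns, diffs, 1)[0]

# Toy dialogue in which the two speakers' pitch drifts together.
pitch_a = [220, 215, 210, 206, 203, 201]
pitch_b = [180, 186, 192, 196, 199, 200]
print(turn_level_convergence(pitch_a, pitch_b))  # negative = entrainment-like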
Using Multimodal Cues to Analyze MLA'14 Oral Presentation Quality Corpus: Presentation Delivery and Slides Quality
L. Chen, C. W. Leong, G. Feng, Chong Min Lee
DOI: 10.1145/2666633.2666640 | Published: 2014-11-12
Abstract: The ability to make presentation slides and deliver them effectively to convey information to an audience is of increasing importance, particularly in the pursuit of both academic and professional career success. We envision that multimodal sensing and machine learning techniques can be employed to evaluate, and potentially help improve, the quality of the content and delivery of public presentations. To this end, we report a study using the Oral Presentation Quality Corpus provided by the 2014 Multimodal Learning Analytics (MLA) Grand Challenge. A set of multimodal features was extracted from slides, speech, posture, hand gestures, and head poses. We also examined the dimensionality of the human scores, which could be concisely represented by two Principal Component (PC) scores: comp1 for delivery skills and comp2 for slides quality. Several machine learning experiments were performed to predict the two PC scores from multimodal features. Our experiments suggest that multimodal cues can predict human scores on presentation tasks, and that a scoring model comprising both verbal and visual features can outperform one using a single modality.
Citations: 24
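The two-stage design described in the abstract, compressing multi-criterion human rubric scores with PCA and then predicting the component scores from multimodal features, can be sketched as follows; all data shapes and values are synthetic placeholders, and only the pipeline structure mirrors the paper.

# Sketch: PCA over human rubric scores, then regress PC1 ("delivery")
# on multimodal features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_presenters, n_criteria, n_features = 80, 9, 25
scores = rng.normal(size=(n_presenters, n_criteria))   # human rubric scores
X = rng.normal(size=(n_presenters, n_features))        # multimodal features

pca = PCA(n_components=2)
pc = pca.fit_transform(scores)   # pc[:, 0] ~ "delivery", pc[:, 1] ~ "slides"
print(pca.explained_variance_ratio_)

r2 = cross_val_score(Ridge(alpha=1.0), X, pc[:, 0], cv=5, scoring="r2").mean()
print(f"cross-validated R^2 for the delivery component: {r2:.2f}")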