Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data: Latest Publications

Predicting group satisfaction in meeting discussions
Catherine Lai, Gabriel Murray
DOI: 10.1145/3279810.3279840 | Published: 2018-10-16
Abstract: We address the task of automatically predicting group satisfaction in meetings using acoustic, lexical, and turn-taking features. Participant satisfaction is measured using post-meeting ratings from the AMI corpus. We focus on predicting three aspects of satisfaction: overall satisfaction, participant attention satisfaction, and information overload. All predictions are made at the aggregated group level. In general, we find that combining features across modalities improves prediction performance, and that feature ablation yields further significant gains. Our experiments also show how data-driven methods can be used to explore how different facets of group satisfaction are expressed through different modalities. For example, including prosodic features improves prediction of attention satisfaction but hinders prediction of overall satisfaction, while the opposite holds for lexical features. Moreover, feelings of sufficient attention were better reflected by acoustic features than by speaking time, while information overload was better reflected by specific lexical cues and turn-taking patterns. Overall, this study indicates that group affect can be revealed as much by how participants speak as by what they say.
Citations: 16

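The following is a minimal sketch of the kind of early-fusion setup the abstract describes: per-meeting acoustic, lexical, and turn-taking feature blocks are concatenated and fed to a regressor, followed by a simple leave-one-modality-out ablation. All data, feature dimensions, and the choice of RandomForestRegressor are illustrative assumptions, not the authors' actual pipeline or the AMI feature set.

```python
# Sketch: fuse per-meeting acoustic, lexical, and turn-taking features and
# predict an aggregated group-satisfaction rating. All data here is synthetic;
# feature names and dimensions are illustrative, not those used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_meetings = 120

# Hypothetical pre-extracted feature blocks (one row per meeting).
acoustic = rng.normal(size=(n_meetings, 12))     # e.g. pitch/energy statistics
lexical = rng.normal(size=(n_meetings, 20))      # e.g. word-category counts
turn_taking = rng.normal(size=(n_meetings, 6))   # e.g. overlap rate, turn length

# Group-level target: mean post-meeting satisfaction rating (synthetic).
satisfaction = rng.uniform(1, 5, size=n_meetings)

# Early fusion: concatenate modalities into one feature matrix.
X = np.hstack([acoustic, lexical, turn_taking])
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, satisfaction, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"Fused features: MAE {-scores.mean():.3f}")

# Simple ablation: drop one modality at a time and compare.
blocks = {"acoustic": acoustic, "lexical": lexical, "turn_taking": turn_taking}
for name in blocks:
    X_abl = np.hstack([v for k, v in blocks.items() if k != name])
    abl = cross_val_score(model, X_abl, satisfaction, cv=5,
                          scoring="neg_mean_absolute_error")
    print(f"Without {name}: MAE {-abl.mean():.3f}")
```
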
Integrating non-invasive neuroimaging and computer log data to improve understanding of cognitive processes
Leah Friedman, Ruixue Liu, Erin Walker, E. Solovey
DOI: 10.1145/3279810.3279854 | Published: 2018-10-16
Abstract: As non-invasive neuroimaging techniques become less expensive and more portable, we have the capability to monitor brain activity during various computer activities. This provides an opportunity to integrate brain data with computer log data to develop models of cognitive processes. These models can be used to continually assess an individual's changing cognitive state and to develop adaptive human-computer interfaces. As a step in this direction, we have conducted a study using functional near-infrared spectroscopy (fNIRS) during the Sustained Attention to Response Task (SART) paradigm, which has been used in prior work to elicit mind wandering and to explore response inhibition. The goal is to determine whether fNIRS data can be used as a predictor of errors on the task. This would have implications for detecting similar cognitive processes in more realistic tasks, such as using a personal learning environment. Additionally, this study tests individual differences by correlating objective behavioral data and subjective self-reports with activity in the medial prefrontal cortex (mPFC), which is associated with the brain's default mode network (DMN). We observed significant differences in the mPFC between periods prior to task errors and periods prior to correct responses. These differences were particularly apparent among individuals who performed poorly on the SART task and those who reported drowsiness. In line with previous work, these findings indicate an opportunity to detect and correct attentional shifts in the individuals who need it most.
Citations: 1

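Below is a minimal sketch of the pre-event comparison the abstract reports: mean mPFC fNIRS signal in windows preceding task errors versus windows preceding correct responses, compared with a t-test. The signal, sampling rate, window length, and event times are synthetic placeholders, not the study's actual data or analysis pipeline.

```python
# Sketch: compare mPFC fNIRS activity in windows preceding task errors vs.
# correct responses. Signal, sampling rate, window length, and event times
# are synthetic assumptions for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
fs = 10.0                       # Hz, assumed fNIRS sampling rate
duration_s = 600
hbo = rng.normal(size=int(duration_s * fs))   # synthetic oxygenated-Hb trace

error_times = rng.uniform(30, duration_s, size=15)    # seconds (synthetic)
correct_times = rng.uniform(30, duration_s, size=60)

def pre_event_mean(signal, event_s, window_s=10.0, fs=fs):
    """Mean signal value in the window_s seconds before each event."""
    out = []
    for t in event_s:
        stop = int(t * fs)
        start = max(0, stop - int(window_s * fs))
        out.append(signal[start:stop].mean())
    return np.array(out)

pre_error = pre_event_mean(hbo, error_times)
pre_correct = pre_event_mean(hbo, correct_times)

t, p = ttest_ind(pre_error, pre_correct, equal_var=False)
print(f"pre-error mean={pre_error.mean():.3f}, "
      f"pre-correct mean={pre_correct.mean():.3f}, t={t:.2f}, p={p:.3f}")
```
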
Histogram of oriented velocities for eye movement detection
Wolfgang Fuhl, Nora Castner, Enkelejda Kasneci
DOI: 10.1145/3279810.3279843 | Published: 2018-10-16
Abstract: Research in various fields, including psychology, cognition, and medical science, deals with eye tracking data to extract information about the intention and cognitive state of a subject. For extracting this information, detecting eye movement types is an important task. Modern eye tracking data is noisy, and most state-of-the-art algorithms are not developed for all types of eye movements, since some types are still under research. We propose a novel feature for eye movement detection, called the histogram of oriented velocities. The construction of the feature is similar to the well-known histogram of oriented gradients from computer vision. Since the detector is trained using machine learning, it can always be extended to new eye movement types. We evaluate our feature against the state of the art on publicly available data, using different machine learning approaches such as support vector machines, regression trees, and k-nearest neighbors, each over different parameter sets. We provide a MATLAB script for the computation and evaluation, as well as an integration in EyeTrace, which can be downloaded at http://www.ti.uni-tuebingen.de/Eyetrace.1751.0.html.
Citations: 20

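A minimal sketch of a histogram-of-oriented-velocities descriptor, constructed by analogy with HOG as the abstract describes: velocity orientations over a window of gaze samples are binned and weighted by velocity magnitude. The bin count, window size, and normalization are assumptions rather than the paper's exact parameters; the resulting feature vectors could then be fed to an SVM, regression tree, or k-NN classifier as in the evaluation.

```python
# Sketch of a histogram-of-oriented-velocities feature for a window of gaze
# samples. Bin count, window size, and L2 normalization are assumptions, not
# the paper's exact parameters.
import numpy as np

def hov_feature(x, y, n_bins=8):
    """Histogram of velocity orientations, weighted by velocity magnitude."""
    dx, dy = np.diff(np.asarray(x, float)), np.diff(np.asarray(y, float))
    angles = np.arctan2(dy, dx)        # orientation of each velocity vector
    magnitudes = np.hypot(dx, dy)      # speed between consecutive samples
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi),
                           weights=magnitudes)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Example: descriptors for sliding windows of a synthetic gaze trace, ready
# to be passed to any classifier (SVM, regression tree, k-NN, ...).
rng = np.random.default_rng(2)
gaze_x = np.cumsum(rng.normal(size=500))
gaze_y = np.cumsum(rng.normal(size=500))
window = 20
features = np.array([hov_feature(gaze_x[i:i + window], gaze_y[i:i + window])
                     for i in range(0, len(gaze_x) - window, window)])
print(features.shape)   # (n_windows, n_bins)
```
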
Symptoms of cognitive load in interactions with a dialogue system
José Lopes, K. Lohan, H. Hastie
DOI: 10.1145/3279810.3279851 | Published: 2018-10-16
Abstract: Humans adapt their behaviour to the perceived cognitive load of their dialogue partner, for example, delaying non-essential information. We propose that spoken dialogue systems should do the same, particularly in high-stakes scenarios, such as emergency response. In this paper, we provide a summary of the prosodic, turn-taking and other linguistic symptoms of cognitive load analysed in the literature. We then apply these features to a single corpus in the restaurant-finding domain and propose new symptoms that are evidenced through interaction with the dialogue system, including utterance entropy, speech recognition confidence, as well as others based on dialogue acts.
Citations: 12

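As an illustration, below is one plausible reading of the "utterance entropy" feature: the Shannon entropy of the word distribution within a user utterance. The abstract does not specify whether entropy is computed over words, ASR hypotheses, or a language model, so this interpretation and the function name are assumptions.

```python
# Sketch: one plausible reading of "utterance entropy" as the Shannon entropy
# (in bits) of the word distribution within a single user utterance.
import math
from collections import Counter

def utterance_entropy(utterance: str) -> float:
    """Shannon entropy of the word distribution in an utterance."""
    words = utterance.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Repetitive, disfluent utterances tend to score lower than varied ones.
print(utterance_entropy("I want a cheap restaurant a cheap cheap restaurant"))
print(utterance_entropy("find me an inexpensive italian place downtown"))
```
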
Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data
DOI: 10.1145/3279810 | Published: 2018-10-16
Citations: 1