GazeIn '14 Latest Publications

Fusing Multimodal Human Expert Data to Uncover Hidden Semantics
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666649
Xuan Guo, Qi Yu, Rui Li, Cecilia Ovesdotter Alm, Anne R. Haake
{"title":"Fusing Multimodal Human Expert Data to Uncover Hidden Semantics","authors":"Xuan Guo, Qi Yu, Rui Li, Cecilia Ovesdotter Alm, Anne R. Haake","doi":"10.1145/2666642.2666649","DOIUrl":"https://doi.org/10.1145/2666642.2666649","url":null,"abstract":"Problem solving in complex visual domains involves multiple levels of cognitive processing. Analyzing and representing these cognitive processes requires the elicitation and study of multimodal human data. We have developed methods for extracting experts' visual behaviors and verbal descriptions during medical image inspection. Now we address fusion of these data towards building a novel framework for organizing elements of expertise as a foundation for knowledge-dependent computational systems. In this paper, a multimodal graph-regularized non-negative matrix factorization approach is developed and used to fuse multimodal data collected during medical image inspection. Our experimental results on new data representation demonstrate the effectiveness of the proposed data fusion approach.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134538946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
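Graph-regularized NMF, named above as the fusion technique, factorizes a non-negative data matrix while a graph Laplacian penalty keeps the low-dimensional sample representation consistent with an affinity graph. The paper's multimodal formulation is not reproduced here; the following is a minimal single-graph sketch using the standard multiplicative updates, with illustrative parameter values:

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=200, eps=1e-9):
    """Minimal graph-regularized NMF sketch: X (features x samples) ~ U @ V.T,
    with a Tr(V.T @ L @ V) penalty tying the sample factors V to the
    affinity graph W (L = D - W). Standard multiplicative updates."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))  # degree matrix of the sample graph
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```

A multimodal variant along the paper's lines would factorize one matrix per modality (e.g., fixation features and verbal-description features) while sharing or coupling the sample factors V across modalities; that coupling is the part specific to the paper and is not shown here.
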
Analysis of Timing Structure of Eye Contact in Turn-changing
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666648
Ryo Ishii, K. Otsuka, Shiro Kumano, Junji Yamato
{"title":"Analysis of Timing Structure of Eye Contact in Turn-changing","authors":"Ryo Ishii, K. Otsuka, Shiro Kumano, Junji Yamato","doi":"10.1145/2666642.2666648","DOIUrl":"https://doi.org/10.1145/2666642.2666648","url":null,"abstract":"With the aim of constructing a model for predicting the next speaker and the start of the next utterance in multi-party meetings, we focus on the timing structure of the eye contact between the speaker, the listener, and the next speaker: who looks at whom first, who looks away first, and when the eye contact happens. We analyze the differences in the timing structure for the listener and next speaker in turn-changing and turn-keeping. The results of analysis show that the listeners in turn-keeping tend to look at the speaker more often first before the speaker looks at the listeners than the next speaker in turn-changing looks at the speaker first before the speaker looks at the next speaker when the eye contact with the speaker happens. The listeners in turn-keeping tend to look away from the speaker more often later after the speaker looks away from the listener than listeners and the next speaker in turn-changing looks away from the speaker later when the eye contact with the speaker happens. In addition, the interval between the end of eye contact, the end of the speaker's utterance, and the start of next speaker's utterance is different between the listener in turn-keeping, the listener in turn-changing, and the next speaker in turn-changing.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132225046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
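The timing quantities this analysis compares (who looks first, who looks away first, and the offsets between the end of eye contact and the utterance boundaries) can be made concrete. The sketch below assumes a hypothetical interval representation of gaze annotations; all names are illustrative, not the authors' tooling:

```python
from typing import NamedTuple, Optional

class Interval(NamedTuple):
    start: float
    end: float

def eye_contact(a_gaze: Interval, b_gaze: Interval) -> Optional[Interval]:
    """Mutual-gaze window where A looks at B while B looks at A; None if none."""
    start = max(a_gaze.start, b_gaze.start)
    end = min(a_gaze.end, b_gaze.end)
    return Interval(start, end) if start < end else None

def timing_features(a_gaze, b_gaze, utt_end, next_utt_start):
    """Who looked first / looked away first, and the offsets of the contact
    end relative to the current utterance end and the next utterance start."""
    ec = eye_contact(a_gaze, b_gaze)
    if ec is None:
        return None
    return {
        "a_looked_first": a_gaze.start < b_gaze.start,
        "a_looked_away_first": a_gaze.end < b_gaze.end,
        "contact_end_to_utt_end": utt_end - ec.end,
        "contact_end_to_next_utt": next_utt_start - ec.end,
    }
```
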
Spatio-Temporal Event Selection in Basic Surveillance Tasks using Eye Tracking and EEG
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666645
Jutta Hild, F. Putze, David Kaufman, Christian Kühnle, Tanja Schultz, J. Beyerer
{"title":"Spatio-Temporal Event Selection in Basic Surveillance Tasks using Eye Tracking and EEG","authors":"Jutta Hild, F. Putze, David Kaufman, Christian Kühnle, Tanja Schultz, J. Beyerer","doi":"10.1145/2666642.2666645","DOIUrl":"https://doi.org/10.1145/2666642.2666645","url":null,"abstract":"In safety- and security-critical applications like video surveillance it is crucial that human operators detect task-relevant events in the continuous video streams and select them for report or dissemination to other authorities. Usually, the selection operation is performed using a manual input device like a mouse or a joystick. Due to the visually rich and dynamic input, the required high attention, the long working time, and the challenging manual selection of moving objects, it occurs that relevant events are missed. To alleviate this problem we propose adding another event selection process, using eye-brain input. Our approach is based on eye tracking and EEG, providing spatio-temporal event selection without any manual intervention. We report ongoing research, building on prior work where we showed the general feasibility of the approach. In this contribution, we extend our work testing the feasibility of the approach using more advanced and less artificial experimental paradigms simulating frequently occurring, basic types of real surveillance tasks. The paradigms are much closer to a real surveillance task in terms of the used visual stimuli, the more subtle cues for event indication, and the required viewing behavior. As a methodology we perform an experiment (N=10) with non-experts. The results confirm the feasibility of the approach for event selection in the advanced tasks. We achieve spatio-temporal event selection accuracy scores of up to 77% and 60% for different stages of event indication.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126097113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
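The paper reports feasibility results rather than implementation details, but the core idea, spatial selection from gaze combined with temporal selection from EEG, can be sketched. Below, an assumed upstream EEG classifier supplies event timestamps and the gaze stream supplies the location; the 0.3 s window and the median are arbitrary choices for illustration:

```python
import numpy as np

def select_events(gaze_t, gaze_xy, eeg_event_times, window=0.3):
    """For each EEG-detected event time, take the median gaze position within
    a short window around it as the spatio-temporal selection (t, x, y).

    gaze_t: (n,) sample timestamps; gaze_xy: (n, 2) gaze coordinates;
    eeg_event_times: timestamps emitted by an assumed EEG event detector."""
    selections = []
    for t in eeg_event_times:
        mask = np.abs(gaze_t - t) <= window
        if mask.any():
            x, y = np.median(gaze_xy[mask], axis=0)
            selections.append((t, x, y))
    return selections
```
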
Evaluating the Impact of Embodied Conversational Agents (ECAs) Attentional Behaviors on User Retention of Cultural Content in a Simulated Mobile Environment
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666650
Ioannis Doumanis, Serengul Smith
{"title":"Evaluating the Impact of Embodied Conversational Agents (ECAs) Attentional Behaviors on User Retention of Cultural Content in a Simulated Mobile Environment","authors":"Ioannis Doumanis, Serengul Smith","doi":"10.1145/2666642.2666650","DOIUrl":"https://doi.org/10.1145/2666642.2666650","url":null,"abstract":"The paper presents an evaluation study of the impact of an ECA's attentional behaviors using a custom research method that combines facial expression analysis, eye-tracking and a retention test. The method provides additional channels to EEG-based methods (e.g., [8]) for the study of user attention and emotions. In order to validate the proposed approach, two tour guide applications were created with an embodied conversational agent (ECA) that presents cultural content about a real-tourist attraction. The agent simulates two attention-grabbing mechanisms - humorous and serious to attract the users' attention. A formal study was conducted to compare two tour guide applications in the lab. The data collected from the facial expression analysis and eye-tracking helped to explain particularly good and bad performances in retention tests. In terms of the study results, strong quantitative and qualitative evidence was found that an ECA should not attract more attention to itself than necessary, to avoid becoming a distraction from the flow of the content. It was also found that the ECA had an inverse effect on the retention performance of participants with different gender and their use on computer interfaces is not a good idea for elderly users.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131654480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Study on Participant-controlled Eye Tracker Calibration Procedure
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666646
P. Kasprowski, Katarzyna Harężlak
{"title":"Study on Participant-controlled Eye Tracker Calibration Procedure","authors":"P. Kasprowski, Katarzyna Harężlak","doi":"10.1145/2666642.2666646","DOIUrl":"https://doi.org/10.1145/2666642.2666646","url":null,"abstract":"The analysis of an eye movement signal, which can reveal o lot of information about the way human brain works, has recently attracted the attention of many researchers. The basis for such studies is data returned by specialized devices called eye-trackers. The first step of their usage is a calibration process, allowing to reflect an eye position to a point of regard. The main research problem analyzed in this paper is to check whether and how the chosen calibration scenario influences the calibration result (calibration errors). Based on this analysis of possible scenarios, a new user-controlled calibration procedure was developed. It was checked and compared with a classic approach during pilot studies using the Eye Tribe system as an eye-tracker device. The results obtained for both methods were examined in terms of provided accuracy.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123512559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
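The calibration step the paper studies maps raw eye positions to points of regard; commercial trackers such as the Eye Tribe do this internally, so the sketch below only illustrates the generic idea with an assumed second-order polynomial regression, plus the kind of accuracy measure such a study compares:

```python
import numpy as np

def _design(eye_xy):
    """Second-order polynomial design matrix over raw eye features (x, y)."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(eye_xy, screen_xy):
    """Least-squares fit from eye features to screen coordinates, collected
    while the participant fixates known calibration targets."""
    coef, *_ = np.linalg.lstsq(_design(eye_xy), screen_xy, rcond=None)
    return coef

def apply_calibration(coef, eye_xy):
    return _design(eye_xy) @ coef

def calibration_error(coef, eye_xy, screen_xy):
    """Mean Euclidean distance between predicted and true target points."""
    pred = apply_calibration(coef, eye_xy)
    return float(np.mean(np.linalg.norm(pred - screen_xy, axis=1)))
```

Comparing a participant-controlled scenario against a classic one then amounts to collecting the (eye, target) pairs under each procedure and comparing the resulting error scores on held-out validation points.
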
Analyzing Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666651
Sakiko Nihonyanagi, Yuki Hayashi, Y. Nakano
{"title":"Analyzing Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning","authors":"Sakiko Nihonyanagi, Yuki Hayashi, Y. Nakano","doi":"10.1145/2666642.2666651","DOIUrl":"https://doi.org/10.1145/2666642.2666651","url":null,"abstract":"In collaborative learning, participants work on the learning task together. In this environment, linguistic information via speech as well as non-verbal information such as gaze and writing actions are important elements. It is expected that integrating the information from these behaviors will contribute to assessing the learning activity and characteristics of each participant in a more objective manner. With the objective of characterizing participants in the collaborative learning activity, this study analyzed the verbal and nonverbal behaviors and found that the gaze behaviors of individual participants and those between the participants provides useful information in distinguishing a leader of the group, one who follows the leader, or one who attends to other participants who do not appear to understand.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129648802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
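Co-occurrence analysis of this kind can be made concrete as a tally of which annotated behaviors overlap in time. The sketch below uses a hypothetical (participant, behavior, start, end) event format and a fixed time bin; both are assumptions for illustration, not the authors' coding scheme:

```python
from collections import Counter

def cooccurrence_counts(events, bin_size=0.5):
    """Count how often two (participant, behavior) pairs are active in the
    same time bin. `events` holds (participant, behavior, start, end) tuples,
    e.g. ("A", "gaze_at_B", 1.2, 3.4) or ("B", "writing", 2.0, 5.0)."""
    bins = {}
    for who, what, start, end in events:
        b = int(start // bin_size)
        while b * bin_size < end:
            bins.setdefault(b, set()).add((who, what))
            b += 1
    counts = Counter()
    for active in bins.values():
        for a in active:
            for c in active:
                if a < c:  # count each unordered pair once per bin
                    counts[(a, c)] += 1
    return counts
```

Frequent pairs such as one participant's speech co-occurring with the others' gaze toward that participant are the kind of pattern that would mark a group leader.
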
Gaze-Based Virtual Task Predictor
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666647
Çagla Çig, T. M. Sezgin
{"title":"Gaze-Based Virtual Task Predictor","authors":"Çagla Çig, T. M. Sezgin","doi":"10.1145/2666642.2666647","DOIUrl":"https://doi.org/10.1145/2666642.2666647","url":null,"abstract":"Pen-based systems promise an intuitive and natural interaction paradigm for tablet PCs and stylus-enabled phones. However, typical pen-based interfaces require users to switch modes frequently in order to complete ordinary tasks. Mode switching is usually achieved through hard or soft modifier keys, buttons, and soft-menus. Frequent invocation of these auxiliary mode switching elements goes against the goal of intuitive, fluid, and natural interaction. In this paper, we present a gaze-based virtual task prediction system that has the potential to alleviate dependence on explicit mode switching in pen-based systems. In particular, we show that a range of virtual manipulation commands, that would otherwise require auxiliary mode switching elements, can be issued with an 80% success rate with the aid of users' natural eye gaze behavior during pen-only interaction.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121025922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
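The predictor is, at heart, a classifier from gaze features recorded during pen interaction to the intended virtual command. The paper's feature set and model are not specified in the abstract, so the sketch below wires up an assumed pipeline on synthetic stand-in data (with random labels it only reaches chance accuracy; the reported 80% comes from real gaze behavior):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 pen interactions x 3 assumed gaze features
# (mean gaze-to-pen distance, fixation count, mean fixation duration).
X = rng.random((200, 3))
y = rng.choice(["copy", "paste", "move", "resize"], size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

At inference time, such a model would run on the gaze features of the current pen stroke and issue the predicted command without any explicit mode switch.
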
Attention and Gaze in Situated Language Interaction
GazeIn '14 Pub Date: 2014-11-16 DOI: 10.1145/2666642.2666643
D. Bohus
{"title":"Attention and Gaze in Situated Language Interaction","authors":"D. Bohus","doi":"10.1145/2666642.2666643","DOIUrl":"https://doi.org/10.1145/2666642.2666643","url":null,"abstract":"The ability to engage in natural language interaction in physically situated settings hinges on a set of competencies such as managing conversational engagement, turn taking, understanding, language and behavior generation, and interaction planning. In human-human interaction these are mixed-initiative, collaborative processes, that often involve a wide array of finely coordinated verbal and non-verbal actions. Eye gaze, and more generally attention, among many other channels, play a fundamental role. In this talk, I will discuss samples of research work we have conducted over the last few years on developing models for supporting physically situated dialog in relatively unconstrained environments. Throughout, I will highlight the role that gaze and attention play in these models. I will discuss and showcase several prototype systems that we have developed, and describe opportunities for reasoning about, interpreting and producing gaze signals in support of fluid, seamless spoken language interaction.","PeriodicalId":230150,"journal":{"name":"GazeIn '14","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124145281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0