Proceedings of the 25th International Conference on Intelligent User Interfaces Companion: Latest Articles

Detecting Learner Drowsiness Based on Facial Expressions and Head Movements in Online Courses
Shogo Terai, Shizuka Shirai, Mehrasa Alizadeh, Ryosuke Kawamura, Noriko Takemura, Yuuki Uranishi, H. Takemura, H. Nagahara
DOI: 10.1145/3379336.3381500
Abstract: Drowsiness is a major factor that hinders learning. To improve learning efficiency, it is important to understand students' physical status, such as wakefulness, during online coursework. In this study, we propose a drowsiness estimation method based on learners' head and facial movements while viewing video lectures. To examine the effectiveness of head and facial movements in drowsiness estimation, we collected learner video data recorded during e-learning and applied a deep learning approach under three conditions: (a) using only facial movement data, (b) using only head movement data, and (c) using both facial and head movement data. We achieved an average F1-macro score of 0.74 with personalized models for detecting learner drowsiness using both facial and head movement data.
Citations: 7
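The F1-macro score reported above is the unweighted mean of per-class F1 scores, so minority classes (e.g. drowsy frames) count as much as the majority class. A minimal sketch of how it is computed (the label encoding is an assumption, not the paper's):

```python
def f1_macro(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores.

    Each class contributes equally regardless of how many samples it has,
    which is why it suits imbalanced labels such as drowsy vs. awake.
    """
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Equivalent to `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`, kept dependency-free here.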
A Multimodal Meeting Browser that Implements an Important Utterance Detection Model based on Multimodal Information
Fumio Nihei, Y. Nakano
DOI: 10.1145/3379336.3381491
Abstract: This paper proposes a multimodal meeting browser with a CNN model that estimates important utterances based on the co-occurrence of verbal and nonverbal behaviors in multi-party conversations. The proposed browser was designed to visualize important utterances and to make it easier to observe the nonverbal behaviors of the conversation participants. A user study was conducted to examine whether the proposed browser supports the user in correctly understanding the content of the discussion. Compared with a text-based browser and a simple video player, the proposed browser was more efficient than the video player and allowed the user to obtain a more accurate understanding of the discussion than the text-based browser.
Citations: 0
Visualizing Semantic Analysis of Text Data with Time and Importance through an Interactive Exploratory System
Jaejong Ho, Hyeonsik Gong, Kyungwon Lee
DOI: 10.1145/3379336.3381485
Abstract: The main purpose of this work is to develop a visualization system that helps users navigate their text data and easily identify its main topics. In addition, the system allows users to set options, such as text data processing pipelines, and provides appropriate interactions, giving them a more flexible and diverse experience of viewing data. Users can identify topic keywords and the distribution of clustered text data in a hexagonal view, and zoom the view for a detailed look at the data distribution. After dragging areas or selecting individual hexagons, they can not only grasp how the topics of the selected data change over time, but also understand the relationships between topic keywords. Furthermore, they can compare the main keywords of each cluster of selected data.
Citations: 0
Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-Mediated Communications
J. I. Koh
DOI: 10.1145/3379336.3381507
Abstract: Recent trends in computer-mediated communications (CMC) have seen messaging with richer media, not only images and videos but also visual communication markers (VCMs) such as emoticons, emojis, and stickers. VCMs can prevent the loss of the subtle emotional layer of conversation in CMC that is normally delivered by nonverbal cues conveying affective and emotional information. However, as the number of VCMs in the selection set grows, the problem of VCM entry needs to be addressed. Furthermore, conventional means of accessing VCMs continue to rely on input methods that are not directly and intimately tied to expressive nonverbal cues. In this work, we aim to address this issue by facilitating an alternative form of VCM entry: hand gestures. To that end, we propose a user-defined hand gesture set that is highly representative of a number of VCMs, and a two-stage hand gesture recognition system (trajectory-based, then shape-based) that can identify these user-defined hand gestures with an accuracy of 82%. By developing such a system, we aim to let people using low-bandwidth forms of CMC keep their convenient and discreet properties while experiencing more of the intimacy and expressiveness of higher-bandwidth online communication.
Citations: 1
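The two-stage design described above can be pictured as a cascade: a trajectory-based classifier first proposes candidate gesture classes, and a shape-based classifier then decides among those candidates only. A minimal sketch under assumed classifier interfaces (the function names and data layout are hypothetical, not the paper's implementation):

```python
def recognize(gesture, trajectory_clf, shape_clf):
    """Two-stage gesture recognition sketch (hypothetical interfaces).

    Stage 1: trajectory_clf narrows the label space to a candidate list.
    Stage 2: shape_clf scores all labels; the best-scoring *candidate* wins,
    so shape evidence cannot resurrect a class the trajectory stage ruled out.
    """
    candidates = trajectory_clf(gesture["trajectory"])
    scores = shape_clf(gesture["shape"])
    return max(candidates, key=lambda c: scores.get(c, 0.0))
```

The cascade keeps the expensive shape comparison confined to the few classes whose motion path already matches.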
Explaining Black Box Models Through Twin Systems
Federico Maria Cau
DOI: 10.1145/3379336.3381511
Abstract: This paper presents the early stages of my PhD research, which aims to advance the field of eXplainable AI (XAI) by investigating twin-systems, where an uninterpretable black-box model is twinned with a white-box one, usually less accurate but more inspectable, to provide explanations for the classification results. We focus in particular on the twinning of an Artificial Neural Network (ANN) with a Case-Based Reasoning (CBR) system, so-called ANN-CBR twins, to explain predictions in a post-hoc manner, taking into account (i) a feature-weighting method for mirroring the ANN results in the CBR system, (ii) a set of evaluation metrics that correlate the ANN with other white/grey models supporting explanations for users, and (iii) a taxonomy of methods for generating explanations of the neural network's predictions from the twinning.
Citations: 1
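The feature-weighting idea in (i) can be illustrated with a toy retrieval step: the CBR twin explains a prediction by returning the stored cases nearest to the query under a distance whose per-feature weights mirror how much the black-box ANN relies on each feature. A minimal sketch; the weight source and case layout are assumptions, not the paper's method:

```python
def weighted_nearest_cases(query, case_base, weights, k=3):
    """Retrieve the k cases closest to `query` under a feature-weighted
    Euclidean distance.

    `weights` would come from a feature-weighting method applied to the
    black-box ANN (hypothetical here); heavily weighted features dominate
    the match, so retrieved cases resemble the query where the ANN 'looks'.
    """
    def dist(case):
        return sum(
            w * (q - c) ** 2
            for w, q, c in zip(weights, query, case["features"])
        ) ** 0.5
    return sorted(case_base, key=dist)[:k]
```

The retrieved cases then serve as example-based explanations ("the model predicted X because these similar past cases were X").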
DLS Magician: Promoting Early-Stage Collaboration by Automating UI Design Process in an E&P Environment
Jiajing Guo, Zhen Li, Stanislaus Ju, Monisha Manoharan, Adelle Knight
DOI: 10.1145/3379336.3381462
Abstract: In this work, we present a prototype of an intelligent system that automates the UI design process by converting text descriptions into interactive design prototypes. We conducted user research in an international oilfield services company and found that product owners prefer to validate their hypotheses with visual mockups rather than text descriptions; however, many of them need assistance from designers to produce those mockups. Based on this finding, and after exploring multiple possibilities using design thinking, we chose a solution that uses natural language processing (NLP) to automate the visual design process. To validate this solution, we conducted user tests and iterated on it. In the future, we expect the work to be fully deployed in a working environment to help product owners initiate their projects faster.
Citations: 3
Deep Unsupervised Activity Visualization using Head and Eye Movements
J. Yamashita, Yoshiaki Takimoto, Hidetaka Koya, Haruo Oishi, T. Kumada
DOI: 10.1145/3379336.3381503
Abstract: We propose a method of visualizing user activities based on the user's head and eye movements. Since we use an unobtrusive eyewear sensor, the measurement scene is unconstrained. In addition, thanks to an unsupervised end-to-end deep algorithm, users can discover unanticipated activities through exploratory analysis of a low-dimensional representation of the sensor data. We also introduce a novel regularization that makes the representation person-invariant.
Citations: 0
Concentration Estimation in E-Learning Based on Learner's Facial Reaction to Teacher's Action
Ryosuke Kawamura, Kentaro Murase
DOI: 10.1145/3379336.3381487
Abstract: In video-based learning, estimating the level of concentration is important for increasing learning efficiency. Facial expressions captured with a web camera during learning are often used to estimate concentration because cameras are easy to install. In this work, we focus on how learners react to video content and propose a new method based on the Jaccard coefficient computed from the learner's facial reactions to the teacher's actions. We conducted experiments and collected data in a Japanese cram school. Analysis of the collected data shows a weighted-F1 score of 0.57 for four-level concentration classification, which is higher than the accuracy obtained with methods based on the learner's facial expression alone. The results indicate that our method can be effective for concentration estimation in an actual learning environment.
Citations: 3
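The Jaccard coefficient named above measures the overlap between two sets as |A ∩ B| / |A ∪ B|, here capturing how often the learner's reactions coincide with the teacher's actions. A minimal sketch, assuming events are represented as sets of frame indices (the exact feature representation is an assumption, not the paper's):

```python
def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two event sets.

    For concentration estimation, `a` could be frames where the teacher
    acts and `b` frames where the learner visibly reacts; a high value
    means the learner reacts in step with the teacher.
    """
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```

A learner who reacts during most teacher actions yields a coefficient near 1; an unresponsive learner yields one near 0, making the value a natural per-segment concentration feature.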
An Adaptive and Personalized In-Vehicle Human-Machine-Interface for an Improved User Experience
Guillermo Reyes
DOI: 10.1145/3379336.3381882
Abstract: Human-Machine Interfaces (HMIs) enable communication between humans and machines. In the automotive domain, in-vehicle systems used to be independent; today they are increasingly interconnected and interdependent. However, they still do not act in unison to help drivers achieve their individual goals. More specifically, even though some current HMIs provide a certain degree of personalization, they do not adapt dynamically to the situation and do not learn driver-specific nuances that would improve the driver's user experience.
Citations: 5
Interactive Generation and Customization of Travel Packages for Individuals and Groups
S. Amer-Yahia, R. M. Borromeo, Shady Elbassuoni, Behrooz Omidvar-Tehrani, Sruthi Viswanathan
DOI: 10.1145/3379336.3381456
Abstract: We demonstrate SIMURGH, an interactive framework for generating customized travel packages (TPs) for individuals or groups of travelers. This is beneficial in various use cases such as tourism planning and advertisement. SIMURGH relies on gathering travelers' preferences and solving an optimization problem to generate personalized travel packages. SIMURGH goes beyond personalization by allowing travelers to customize travel packages via simple-yet-powerful interaction operators.
Citations: 5