26th International Conference on Intelligent User Interfaces - Companion: Latest Publications

Stress Detection by Machine Learning and Wearable Sensors
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450732
Prerna Garg, Jayasankar Santhosh, A. Dengel, Shoya Ishimaru
{"title":"Stress Detection by Machine Learning and Wearable Sensors","authors":"Prerna Garg, Jayasankar Santhosh, A. Dengel, Shoya Ishimaru","doi":"10.1145/3397482.3450732","DOIUrl":"https://doi.org/10.1145/3397482.3450732","url":null,"abstract":"Mental states like stress, depression, and anxiety have become a huge problem in our modern society. The main objective of this work is to detect stress among people, using Machine Learning approaches with the final aim of improving their quality of life. We propose various Machine Learning models for the detection of stress on individuals using a publicly available multimodal dataset, WESAD. Sensor data including electrocardiogram (ECG), body temperature (TEMP), respiration (RESP), electromyogram (EMG), and electrodermal activity (EDA) are taken for three physiological conditions - neutral (baseline), stress and amusement. The F1-score and accuracy for three-class (amusement vs. baseline vs. stress) and binary (stress vs. non-stress) classifications were computed and compared using machine learning techniques like k-NN, Linear Discriminant Analysis, Random Forest, AdaBoost, and Support Vector Machine. For both binary classification and three-class classification, the Random Forest model outperformed other models with F1-scores of 83.34 and 65.73 respectively.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130323623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
ARCoD: An Augmented Reality Serious Game to Identify Cognitive Distortion
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450723
Rifat Ara Tasnim, Farjana Z. Eishita
{"title":"ARCoD: An Augmented Reality Serious Game to Identify Cognitive Distortion","authors":"Rifat Ara Tasnim, Farjana Z. Eishita","doi":"10.1145/3397482.3450723","DOIUrl":"https://doi.org/10.1145/3397482.3450723","url":null,"abstract":"The widespread presence of mental disorders is increasing at an alarming rate around the globe. According to World Health Organization (WHO), mental health circumstances have worsened all over the world due to the COVID-19 pandemic. In spite of the existence of effective psychotherapy strategies, a significant percentage of individuals do not get access to mental healthcare facilities. Under these circumstances, technologies such as Augmented Reality (AR) and its availability in handheld devices can unveil an expansive opportunity to utilize these features in fields of mental health treatment via digital gaming. In this paper, we have proposed a serious game embedding smart Augmented Reality (AR) technology to identify the Cognitive Distortions of the individual playing the game. Later, a comprehensive analysis of clinical impact of the AR gaming on mental health treatment will be conducted followed by evaluation of Player Experience (PX).","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"281 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131426192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
User-Controlled Content Translation in Social Media
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450714
A. Gupta
{"title":"User-Controlled Content Translation in Social Media","authors":"A. Gupta","doi":"10.1145/3397482.3450714","DOIUrl":"https://doi.org/10.1145/3397482.3450714","url":null,"abstract":"As it has become increasingly common for social network users to write and view post in languages other than English, most social networks now provide machine translations to allow posts to be read by an audience beyond native speakers. However, authors typically cannot view the translations of their posts and have little control over these translations. To address this issue, I am developing a prototype that will provide authors with transparency of and more personalized control over the translation of their posts.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115531730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
TExSS: Transparency and Explanations in Smart Systems
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450705
Alison Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y. Lim, T. Kuflik, Advait Sarkar, Avital Shulner-Tal, S. Stumpf
{"title":"TExSS: Transparency and Explanations in Smart Systems","authors":"Alison Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y. Lim, T. Kuflik, Advait Sarkar, Avital Shulner-Tal, S. Stumpf","doi":"10.1145/3397482.3450705","DOIUrl":"https://doi.org/10.1145/3397482.3450705","url":null,"abstract":"Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system’s inter-workings, such as awareness, data provenance, and validation.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131966629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Over-sketching Operation to Realize Geometrical and Topological Editing across Multiple Objects in Sketch-based CAD Interface
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450735
Tomohiko Ito, Teruyoshi Kaneko, Yoshiki Tanaka, S. Saga
{"title":"Over-sketching Operation to Realize Geometrical and Topological Editing across Multiple Objects in Sketch-based CAD Interface","authors":"Tomohiko Ito, Teruyoshi Kaneko, Yoshiki Tanaka, S. Saga","doi":"10.1145/3397482.3450735","DOIUrl":"https://doi.org/10.1145/3397482.3450735","url":null,"abstract":"We developed a new general-purpose sketch-based interface for use in two-dimensional computer-aided design (CAD) systems. In this interface, a sketch-based editing operation is used to modify the geometry and topology of multiple geometric objects via over-sketching. The interface was developed by inheriting a fuzzy logic-based strategy of the existing sketch-based interface SKIT (SKetch Input Tracer). Using this interface, a user can make drawings in a creative manner; e.g., they can start with a rough sketch and progressively achieve a detailed design while repeating the over-sketches.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"71 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114034608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
KUMITRON: Artificial Intelligence System to Monitor Karate Fights that Synchronize Aerial Images with Physiological and Inertial Signals
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450730
J. Echeverria, O. Santos
{"title":"KUMITRON: Artificial Intelligence System to Monitor Karate Fights that Synchronize Aerial Images with Physiological and Inertial Signals","authors":"J. Echeverria, O. Santos","doi":"10.1145/3397482.3450730","DOIUrl":"https://doi.org/10.1145/3397482.3450730","url":null,"abstract":"New technologies make it possible to develop tools that allow more efficient and personalized interaction in unsuspected areas such as martial arts. From the point of view of the modelling of human movement in relation to the learning of complex motor skills, martial arts are of interest because they are articulated around a system of movements that are predefined -or at least, bounded- and governed by the Laws of Physics. Their execution must be learned after continuous practice over time. Artificial Intelligence algorithms can be used to obtain motion patterns that can be used to compare a learners’ practice against the execution of an expert, as well as to analyse its temporal evolution during learning. In this paper we introduce KUMITRON, which collects motion data from wearable sensors and integrates computer vision and machine learning algorithms to help karate practitioners improve their skills in combat. The current version focuses on using the computer vision algorithms to identify the anticipation of the opponent's movements. This information is computed in real time and can be communicated to the learner together with a recommendation of the type of strategy to use in the combat.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"433 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116997457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
XNLP: A Living Survey for XAI Research in Natural Language Processing
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450728
Kun Qian, Marina Danilevsky, Yannis Katsis, B. Kawas, Erick Oduor, Lucian Popa, Yunyao Li
{"title":"XNLP: A Living Survey for XAI Research in Natural Language Processing","authors":"Kun Qian, Marina Danilevsky, Yannis Katsis, B. Kawas, Erick Oduor, Lucian Popa, Yunyao Li","doi":"10.1145/3397482.3450728","DOIUrl":"https://doi.org/10.1145/3397482.3450728","url":null,"abstract":"We present XNLP: an interactive browser-based system embodying a living survey of recent state-of-the-art research in the field of Explainable AI (XAI) within the domain of Natural Language Processing (NLP). The system visually organizes and illustrates XAI-NLP publications and distills their content to allow users to gain insights, generate ideas, and explore the field. We hope that XNLP can become a leading demonstrative example of a living survey, balancing the depth and quality of a traditional well-constructed survey paper with the collaborative dynamism of a widely available interactive tool. XNLP can be accessed at: https://xainlp2020.github.io/xainlp.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"6 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123455824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3453994
B. Shneiderman
{"title":"Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy","authors":"B. Shneiderman","doi":"10.1145/3397482.3453994","DOIUrl":"https://doi.org/10.1145/3397482.3453994","url":null,"abstract":"This 3-hour tutorial proposes a new synthesis, in which Artificial Intelligence (AI) algorithms are combined with human-centered thinking to make Human-Centered AI (HCAI). This approach combines research on AI algorithms with user experience design methods to shape technologies that amplify, augment, empower, and enhance human performance. Researchers and developers for HCAI systems value meaningful human control, putting people first by serving human needs, values, and goals.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133490226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
TIEVis: a Visual Analytics Dashboard for Temporal Information Extracted from Clinical Reports
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450731
Robin De Croon, A. Leeuwenberg, J. Aerts, Marie-Francine Moens, Vero Vanden Abeele, K. Verbert
{"title":"TIEVis: a Visual Analytics Dashboard for Temporal Information Extracted from Clinical Reports","authors":"Robin De Croon, A. Leeuwenberg, J. Aerts, Marie-Francine Moens, Vero Vanden Abeele, K. Verbert","doi":"10.1145/3397482.3450731","DOIUrl":"https://doi.org/10.1145/3397482.3450731","url":null,"abstract":"Clinical reports, as unstructured texts, contain important temporal information. However, it remains a challenge for natural language processing (NLP) models to accurately combine temporal cues into a single coherent temporal ordering of described events. In this paper, we present TIEVis, a visual analytics dashboard that visualizes event-timelines extracted from clinical reports. We present the findings of a pilot study in which healthcare professionals explored and used the dashboard to complete a set of tasks. Results highlight the importance of seeing events in their context, and the ability to manually verify and update critical events in a patient history, as a basis to increase user trust.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114338278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
OYaYa: A Desktop Robot Enabling Multimodal Interaction with Emotions
26th International Conference on Intelligent User Interfaces - Companion | Pub Date: 2021-04-14 | DOI: 10.1145/3397482.3450729
Yucheng Jin, Yu Deng, Jiangtao Gong, Xi Wan, Ge Gao, Qianying Wang
{"title":"OYaYa: A Desktop Robot Enabling Multimodal Interaction with Emotions","authors":"Yucheng Jin, Yu Deng, Jiangtao Gong, Xi Wan, Ge Gao, Qianying Wang","doi":"10.1145/3397482.3450729","DOIUrl":"https://doi.org/10.1145/3397482.3450729","url":null,"abstract":"We demonstrate a desktop robot OYaYa that imitates users’ emotional facial expressions and helps users manage emotions. Multiple equipped sensors in OYaYa enable multimodal interaction; for example, it recognizes users’ emotions from facial expressions and speeches. Besides, a dashboard illustrates how users interact with OYaYa and how their emotions change. We expect that OYaYa allows users to manage their emotions in a fun way.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134313536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1