Companion Publication of the 2020 International Conference on Multimodal Interaction: Latest Publications

WiFiTuned: Monitoring Engagement in Online Participation by Harmonizing WiFi and Audio
Vijay Kumar Singh, Pragma Kar, Ayush Madhan Sohini, Madhav Rangaiah, Sandip Chakraborty, Mukulika Maity
{"title":"WiFiTuned: Monitoring Engagement in Online Participation by Harmonizing WiFi and Audio","authors":"Vijay Kumar Singh, Pragma Kar, Ayush Madhan Sohini, Madhav Rangaiah, Sandip Chakraborty, Mukulika Maity","doi":"10.1145/3577190.3614108","DOIUrl":"https://doi.org/10.1145/3577190.3614108","url":null,"abstract":"This paper proposes a multi-modal, non-intrusive and privacy preserving system WiFiTuned for monitoring engagement in online participation i.e., meeting/classes/seminars. It uses two sensing modalities i.e., WiFi CSI and audio for the same. WiFiTuned detects the head movements of participants during online participation through WiFi CSI and detects the speaker’s intent through audio. Then it correlates the two to detect engagement. We evaluate WiFiTuned with 22 participants and observe that it can detect the engagement level with an average accuracy of more than .","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detecting When the Mind Wanders Off Task in Real-time: An Overview and Systematic Review
Vishal Kuvar, Julia W. Y. Kam, Stephen Hutt, Caitlin Mills
{"title":"Detecting When the Mind Wanders Off Task in Real-time: An Overview and Systematic Review","authors":"Vishal Kuvar, Julia W. Y. Kam, Stephen Hutt, Caitlin Mills","doi":"10.1145/3577190.3614126","DOIUrl":"https://doi.org/10.1145/3577190.3614126","url":null,"abstract":"Research on the ubiquity and consequences of task-unrelated thought (TUT; often used to operationalize mind wandering) in several domains recently sparked a surge in efforts to create “stealth measurements” of TUT using machine learning. Although these attempts have been successful, they have used widely varied algorithms, modalities, and performance metrics — making them difficult to compare and inform future work on best practices. We aim to synthesize these findings through a systematic review of 42 studies identified following PRISMA guidelines to answer two research questions: 1) are there any modalities that are better indicators of TUT than the rest; and 2) do multimodal models provide better results than unimodal models? We found that models built on gaze typically outperform other modalities and that multimodal models do not present a clear edge over their unimodal counterparts. Our review highlights the typical steps involved in model creation and the choices available in each step to guide future research, while also discussing the limitations of the current “state of the art” — namely the barriers to generalizability.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal Fusion Interactions: A Study of Human and Automatic Quantification
Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency
{"title":"Multimodal Fusion Interactions: A Study of Human and Automatic Quantification","authors":"Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency","doi":"10.1145/3577190.3614151","DOIUrl":"https://doi.org/10.1145/3577190.3614151","url":null,"abstract":"In order to perform multimodal fusion of heterogeneous signals, we need to understand their interactions: how each modality individually provides information useful for a task and how this information changes in the presence of other modalities. In this paper, we perform a comparative study of how humans annotate two categorizations of multimodal interactions: (1) partial labels, where different annotators annotate the label given the first, second, and both modalities, and (2) counterfactual labels, where the same annotator annotates the label given the first modality before asking them to explicitly reason about how their answer changes when given the second. We further propose an alternative taxonomy based on (3) information decomposition, where annotators annotate the degrees of redundancy: the extent to which modalities individually and together give the same predictions, uniqueness: the extent to which one modality enables a prediction that the other does not, and synergy: the extent to which both modalities enable one to make a prediction that one would not otherwise make using individual modalities. Through experiments and annotations, we highlight several opportunities and limitations of each approach and propose a method to automatically convert annotations of partial and counterfactual labels to information decomposition, yielding an accurate and efficient method for quantifying multimodal interactions.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ASMRcade: Interactive Audio Triggers for an Autonomous Sensory Meridian Response
Silvan Mertes, Marcel Strobl, Ruben Schlagowski, Elisabeth André
{"title":"ASMRcade: Interactive Audio Triggers for an Autonomous Sensory Meridian Response","authors":"Silvan Mertes, Marcel Strobl, Ruben Schlagowski, Elisabeth André","doi":"10.1145/3577190.3614155","DOIUrl":"https://doi.org/10.1145/3577190.3614155","url":null,"abstract":"Autonomous Sensory Meridian Response (ASMR) is a sensory phenomenon involving pleasurable tingling sensations in response to stimuli such as whispering, tapping, and hair brushing. It is increasingly used to promote health and well-being, help with sleep, and reduce stress and anxiety. ASMR triggers are both highly individual and of great variety. Consequently, finding or identifying suitable ASMR content, e.g., by searching online platforms, can take time and effort. This work addresses this challenge by introducing a novel interactive approach for users to generate personalized ASMR sounds. The presented system utilizes a generative adversarial network (GAN) for sound generation and a graphical user interface (GUI) for user control. Our system allows users to create and manipulate audio samples by interacting with a visual representation of the GAN’s latent input vector. Further, we present the results of a first user study which indicates that our approach is suitable for triggering ASMR experiences.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems
Amr Gomaa, Michael Feld
{"title":"Towards Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems","authors":"Amr Gomaa, Michael Feld","doi":"10.1145/3577190.3616121","DOIUrl":"https://doi.org/10.1145/3577190.3616121","url":null,"abstract":"Recent advances in deep learning and data-driven approaches have facilitated the perception of objects and their environments in a perceptual subsymbolic manner. Thus, these autonomous systems can now perform object detection, sensor data fusion, and language understanding tasks. However, there is an increasing demand to further enhance these systems to attain a more conceptual and symbolic understanding of objects to acquire the underlying reasoning behind the learned tasks. Achieving this level of powerful artificial intelligence necessitates considering both explicit teachings provided by humans (e.g., explaining how to act) and implicit teaching obtained through observing human behavior (e.g., through system sensors). Hence, it is imperative to incorporate symbolic and subsymbolic learning approaches to support implicit and explicit interaction models. This integration enables the system to achieve multimodal input and output capabilities. In this Blue Sky paper, we argue for considering these input types, along with human-in-the-loop and incremental learning techniques, to advance the field of artificial intelligence and enable autonomous systems to emulate human learning. We propose several hypotheses and design guidelines aimed at achieving this objective.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal Analysis and Assessment of Therapist Empathy in Motivational Interviews
Trang Tran, Yufeng Yin, Leili Tavabi, Joannalyn Delacruz, Brian Borsari, Joshua D Woolley, Stefan Scherer, Mohammad Soleymani
{"title":"Multimodal Analysis and Assessment of Therapist Empathy in Motivational Interviews","authors":"Trang Tran, Yufeng Yin, Leili Tavabi, Joannalyn Delacruz, Brian Borsari, Joshua D Woolley, Stefan Scherer, Mohammad Soleymani","doi":"10.1145/3577190.3614105","DOIUrl":"https://doi.org/10.1145/3577190.3614105","url":null,"abstract":"The quality and effectiveness of psychotherapy sessions are highly influenced by the therapists’ ability to meaningfully connect with clients. Automated assessment of therapist empathy provides cost-effective and systematic means of assessing the quality of therapy sessions. In this work, we propose to assess therapist empathy using multimodal behavioral data, i.e. spoken language (text) and audio in real-world motivational interviewing (MI) sessions for alcohol abuse intervention. We first study each modality (text vs. audio) individually and then evaluate a multimodal approach using different fusion strategies for automated recognition of empathy levels (high vs. low). Leveraging recent pre-trained models both for text (DistilRoBERTa) and speech (HuBERT) as strong unimodal baselines, we obtain consistent 2-3 point improvements in F1 scores with early and late fusion, and the highest absolute improvement of 6–12 points over unimodal baselines. Our models obtain F1 scores of 68% when only looking at an early segment of the sessions and up to 72% in a therapist-dependent setting. In addition, our results show that a relatively small portion of sessions, specifically the second quartile, is most important in empathy prediction, outperforming predictions on later segments and on the full sessions. Our analyses in late fusion results show that fusion models rely more on the audio modality in limited-data settings, such as in individual quartiles and when using only therapist turns. Further, we observe the highest misclassification rates for parts of the sessions with MI inconsistent utterances (20% misclassified by all models), likely due to the complex nature of these types of intents in relation to perceived empathy.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Smart Garments for Immersive Home Rehabilitation Using VR
Luz Alejandra Magre, Shirley Coyle
{"title":"Smart Garments for Immersive Home Rehabilitation Using VR","authors":"Luz Alejandra Magre, Shirley Coyle","doi":"10.1145/3577190.3614229","DOIUrl":"https://doi.org/10.1145/3577190.3614229","url":null,"abstract":"Adherence to a rehabilitation programme is vital to recover from injury, failing to do so can keep a promising athlete off the field permanently. Although the importance to follow their home exercise programme (HEP) is broadly explained to patients by their physicians, few of them actually complete it correctly. In my PhD research, I focus on factors that could help increase engagement in home exercise programmes for patients recovering from knee injuries using VR and wearable sensors. This will be done through the gamification of the rehabilitation process, designing the system with a user-centered design approach to test different interactions that could affect the engagement of the users.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Influence of hand representation on a grasping task in augmented reality
Louis Lafuma, Guillaume Bouyer, Olivier Goguel, Jean-Yves Pascal Didier
{"title":"Influence of hand representation on a grasping task in augmented reality","authors":"Louis Lafuma, Guillaume Bouyer, Olivier Goguel, Jean-Yves Pascal Didier","doi":"10.1145/3577190.3614128","DOIUrl":"https://doi.org/10.1145/3577190.3614128","url":null,"abstract":"Research has shown that modifying the aspect of the virtual hand in immersive virtual reality can convey objects properties to users. Whether we can achieve the same results in augmented reality is still to be determined since the user’s real hand is visible through the headset. Although displaying a virtual hand in augmented reality is usually not recommended, it could positively impact the user effectiveness or appreciation of the application.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Annotations from speech and heart rate: impact on multimodal emotion recognition
Kaushal Sharma, Guillaume Chanel
{"title":"Annotations from speech and heart rate: impact on multimodal emotion recognition","authors":"Kaushal Sharma, Guillaume Chanel","doi":"10.1145/3577190.3614165","DOIUrl":"https://doi.org/10.1145/3577190.3614165","url":null,"abstract":"The focus of multimodal emotion recognition has often been on the analysis of several fusion strategies. However, little attention has been paid to the effect of emotional cues, such as physiological and audio cues, on external annotations used to generate the Ground Truths (GTs). In our study, we analyze this effect by collecting six continuous arousal annotations for three groups of emotional cues: speech only, heartbeat sound only and their combination. Our results indicate significant differences between the three groups of annotations, thus giving three distinct cue-specific GTs. The relevance of these GTs is estimated by training multimodal machine learning models to regress speech, heart rate and their multimodal fusion on arousal. Our analysis shows that a cue(s)-specific GT is better predicted by the corresponding modality(s). In addition, the fusion of several emotional cues for the definition of GTs allows to reach a similar performance for both unimodal models and multimodal fusion. In conclusion, our results indicates that heart rate is an efficient cue for the generation of a physiological GT; and that combining several emotional cues for GTs generation is as important as performing input multimodal fusion for emotion prediction.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AIUnet: Asymptotic inference with U2-Net for referring image segmentation
Jiangquan Li, Shimin Shan, Yu Liu, Kaiping Xu, Xiwen Hu, Mingcheng Xue
{"title":"AIUnet: Asymptotic inference with U2-Net for referring image segmentation","authors":"Jiangquan Li, Shimin Shan, Yu Liu, Kaiping Xu, Xiwen Hu, Mingcheng Xue","doi":"10.1145/3577190.3614176","DOIUrl":"https://doi.org/10.1145/3577190.3614176","url":null,"abstract":"Referring image segmentation aims to segment a target object from an image by providing a natural language expression. While recent methods have made remarkable advancements, few have designed effective deep fusion processes for cross-model features or focused on the fine details of vision. In this paper, we propose AIUnet, an asymptotic inference method that uses U2-Net. The core of AIUnet is a Cross-model U2-Net (CMU) module, which integrates a Text guide vision (TGV) module into U2-Net, achieving efficient interaction of cross-model information at different scales. CMU focuses more on location information in high-level features and learns finer detail information in low-level features. Additionally, we propose a Features Enhance Decoder (FED) module to improve the recognition of fine details and decode cross-model features to binary masks. The FED module leverages a simple CNN-based approach to enhance multi-modal features. Our experiments show that AIUnet achieved competitive results on three standard datasets.Code is available at https://github.com/LJQbiu/AIUnet.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0