M3Sense: Affect-Agnostic Multitask Representation Learning Using Multimodal Wearable Sensors

Sirat Samyoun, Md. Mofijul Islam, Tariq Iqbal, J. Stankovic
{"title":"M3Sense: Affect-Agnostic Multitask Representation Learning Using Multimodal Wearable Sensors","authors":"Sirat Samyoun, Md. Mofijul Islam, Tariq Iqbal, J. Stankovic","doi":"10.1145/3534600","DOIUrl":null,"url":null,"abstract":"Modern smartwatches or wrist wearables having multiple physiological sensing modalities have emerged as a subtle way to detect different mental health conditions, such as anxiety, emotions, and stress. However, affect detection models depending on wrist sensors data often provide poor performance due to inconsistent or inaccurate signals and scarcity of labeled data representing a condition. Although learning representations based on the physiological similarities of the affective tasks offer a possibility to solve this problem, existing approaches fail to effectively generate representations that will work across these multiple tasks. Moreover, the problem becomes more challenging due to the large domain gap among these affective applications and the discrepancies among the multiple sensing modalities. We present M3Sense, a multi-task, multimodal representation learning framework that effectively learns the affect-agnostic physiological representations from limited labeled data and uses a novel domain alignment technique to utilize the unlabeled data from the other affective tasks to accurately detect these mental health conditions using wrist sensors only. We apply M3Sense to 3 mental health applications, and quantify the achieved performance boost compared to the state-of-the-art using extensive evaluations and ablation studies on publicly available and collected datasets. Moreover, we extensively investigate what combination of tasks and modalities aids in developing a robust Multitask Learning model for affect recognition. Our analysis shows that incorporating emotion detection in the learning models degrades the performance of anxiety and stress detection, whereas stress detection helps to boost the emotion detection performance. Our results also show that M3Sense provides consistent performance across all affective tasks and available modalities and also improves the performance of representation learning models on unseen affective tasks by 5% − 60%.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"249 1","pages":"73:1-73:32"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3534600","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Modern smartwatches and wrist wearables with multiple physiological sensing modalities have emerged as an unobtrusive way to detect mental health conditions such as anxiety, emotions, and stress. However, affect detection models that depend on wrist sensor data often perform poorly due to inconsistent or inaccurate signals and the scarcity of labeled data representing a condition. Although learning representations based on the physiological similarities of the affective tasks offers a way to address this problem, existing approaches fail to generate representations that work effectively across these multiple tasks. The problem becomes more challenging still due to the large domain gap among these affective applications and the discrepancies among the multiple sensing modalities. We present M3Sense, a multitask, multimodal representation learning framework that learns affect-agnostic physiological representations from limited labeled data and uses a novel domain alignment technique to exploit unlabeled data from the other affective tasks, enabling accurate detection of these mental health conditions using wrist sensors only. We apply M3Sense to 3 mental health applications and quantify the performance gains over the state of the art through extensive evaluations and ablation studies on publicly available and collected datasets. Moreover, we investigate which combinations of tasks and modalities aid in developing a robust multitask learning model for affect recognition. Our analysis shows that incorporating emotion detection into the learning models degrades the performance of anxiety and stress detection, whereas stress detection helps boost emotion detection performance. Our results also show that M3Sense provides consistent performance across all affective tasks and available modalities, and improves the performance of representation learning models on unseen affective tasks by 5%–60%.
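To make the multitask setup described in the abstract concrete, the sketch below shows one common way such a framework can be organized: a shared encoder producing task-agnostic embeddings, task-specific heads trained on labeled data, and an alignment penalty pulling together embeddings from a labeled task and an unlabeled one. This is only a minimal illustration under assumptions of our own (the layer sizes, the two-task stress/anxiety setup, and the use of a linear-kernel MMD as the alignment term); it is not the authors' actual M3Sense architecture or domain alignment technique.

```python
# Illustrative sketch, not the M3Sense implementation.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Maps a multimodal wrist-sensor feature window to a shared,
    affect-agnostic embedding used by all task heads."""
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def mmd(a, b):
    """Linear-kernel MMD between two batches of embeddings; a simple
    stand-in for a domain alignment objective (assumption, not the
    paper's alignment method)."""
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

encoder = SharedEncoder()
stress_head = nn.Linear(32, 2)    # task head: stress vs. no stress
anxiety_head = nn.Linear(32, 2)   # task head: anxiety vs. no anxiety
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(stress_head.parameters())
    + list(anxiety_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

# One training step on dummy labeled stress data plus unlabeled anxiety
# data: supervised loss on the labeled task, alignment loss across tasks.
x_stress, y_stress = torch.randn(16, 64), torch.randint(0, 2, (16,))
x_anxiety_unlabeled = torch.randn(16, 64)

z_s = encoder(x_stress)
z_a = encoder(x_anxiety_unlabeled)
loss = ce(stress_head(z_s), y_stress) + 0.1 * mmd(z_s, z_a)
opt.zero_grad()
loss.backward()
opt.step()
```

In this kind of design the alignment weight (0.1 here) trades off task-specific accuracy against cross-task transfer; the paper's reported gains on unseen affective tasks suggest the shared, aligned representation is what carries over.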