MU-MAE: Multimodal Masked Autoencoders-Based One-Shot Learning

Rex Liu, Xin Liu
arXiv:2408.04243 · arXiv - CS - Multimedia · 2024-08-08

Abstract

With the exponential growth of multimedia data, leveraging multimodal sensors presents a promising approach for improving accuracy in human activity recognition. Nevertheless, accurately identifying these activities from both video data and wearable sensor data is challenging because of labor-intensive data annotation and reliance on external pretrained models or additional data. To address these challenges, we introduce Multimodal Masked Autoencoders-Based One-Shot Learning (Mu-MAE). Mu-MAE integrates a multimodal masked autoencoder with a synchronized masking strategy tailored for wearable sensors. This masking strategy compels the networks to capture more meaningful spatiotemporal features, which enables effective self-supervised pretraining without the need for external data. Furthermore, Mu-MAE leverages the representations extracted by the multimodal masked autoencoder as prior information input to a cross-attention multimodal fusion layer. This fusion layer emphasizes the spatiotemporal features that require attention across modalities while highlighting differences from other classes, aiding classification in metric-based one-shot learning. Comprehensive evaluations on MMAct one-shot classification show that Mu-MAE outperforms all evaluated approaches, achieving up to 80.17% accuracy for five-way one-shot multimodal classification without the use of additional data.
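The abstract mentions two components that are easy to picture in code: a masking step that is synchronized across modalities (so a network cannot recover a masked time step simply by reading the same step from an unmasked modality) and a cross-attention layer that fuses the pretrained representations. The sketch below is only an illustration of those two ideas under assumed tensor shapes; the function and module names (`synchronized_mask`, `CrossAttentionFusion`) are hypothetical and are not the authors' implementation, for which the paper (arXiv:2408.04243) is the reference.

```python
# Minimal sketch of synchronized masking and cross-attention fusion.
# Shapes, names, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn


def synchronized_mask(batch_size: int, num_steps: int, mask_ratio: float = 0.75) -> torch.Tensor:
    """Sample one boolean mask over time steps and reuse it for every modality."""
    num_masked = int(num_steps * mask_ratio)
    scores = torch.rand(batch_size, num_steps)
    masked_idx = scores.argsort(dim=1)[:, :num_masked]   # time steps to hide
    mask = torch.zeros(batch_size, num_steps, dtype=torch.bool)
    mask.scatter_(1, masked_idx, True)                    # True = masked step
    return mask


class CrossAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion: sensor tokens attend to video tokens."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, sensor_tokens: torch.Tensor, video_tokens: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(sensor_tokens, video_tokens, video_tokens)
        return self.norm(sensor_tokens + fused)           # residual + layer norm


if __name__ == "__main__":
    B, T, D = 2, 16, 256
    mask = synchronized_mask(B, T, mask_ratio=0.75)       # same mask for all modalities
    video_tokens = torch.randn(B, T, D)
    sensor_tokens = torch.randn(B, T, D)
    # Keep only the visible (unmasked) steps for the encoder, as in MAE-style pretraining.
    visible_video = video_tokens[~mask].view(B, -1, D)
    visible_sensor = sensor_tokens[~mask].view(B, -1, D)
    fused = CrossAttentionFusion(D)(visible_sensor, visible_video)
    print(mask.sum(dim=1), fused.shape)                   # 12 masked steps per sample; (2, 4, 256)
```

Because the mask indices are shared across modalities, the reconstruction target at a masked time step is hidden everywhere, which is one plausible way to force the encoder toward genuinely cross-modal spatiotemporal features rather than modality shortcuts.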