SMART: Scene-Motion-Aware Human Action Recognition Framework for Mental Disorder Group

IF 8.9 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Zengyuan Lai;Jiarui Yang;Songpengcheng Xia;Qi Wu;Zhen Sun;Wenxian Yu;Ling Pei
DOI: 10.1109/JIOT.2024.3509458
Journal: IEEE Internet of Things Journal, vol. 12, no. 8, pp. 10099-10113
Published: 2024-12-02 (Journal Article)
Citations: 0

Abstract

Patients with mental disorders often exhibit risky abnormal actions, such as climbing walls or hitting windows, necessitating intelligent video behavior monitoring for smart healthcare with the rising Internet of Things (IoT) technology. However, the development of vision-based human action recognition (HAR) for these actions is hindered by the lack of specialized algorithms and datasets. In this article, we innovatively propose to build a vision-based HAR dataset, including abnormal actions often occurring in the mental disorder group and then introduce a novel scene-motion-aware action recognition technology framework, named SMART, consisting of two technical modules. First, we propose a scene perception module to extract human motion trajectory and human-scene interaction features, which introduces additional scene information for a supplementary semantic representation of the above actions. Second, the multistage fusion module fuses the skeleton motion, motion trajectory, and human-scene interaction features, enhancing the semantic association between the skeleton motion and the above supplementary representation, thus generating a comprehensive representation with both human motion and scene information. The effectiveness of our proposed method has been validated on our self-collected HAR dataset (MentalHAD), achieving 94.9% and 93.1% accuracy in un-seen subjects and scenes and outperforming state-of-the-art approaches by 6.5% and 13.2%, respectively. The demonstrated subject- and scene- generalizability makes it possible for SMART’s migration to practical deployment in smart healthcare systems for mental disorder patients in medical settings. The code and dataset will be released publicly for further research: https://github.com/Inowlzy/SMART.git.
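As a rough illustration of the two-module design described in the abstract, the fusion stage can be sketched as aligning per-frame skeleton, trajectory, and human-scene interaction features and concatenating them into one representation. This is a minimal shape-bookkeeping sketch only: all dimensions and names below are hypothetical, and SMART's actual multistage fusion (which strengthens the semantic association between skeleton motion and the scene-derived features) is in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame feature dimensions (not taken from the paper).
T = 30            # frames in a clip
SKEL_DIM = 51     # e.g. 17 joints x 3 coordinates (skeleton motion)
TRAJ_DIM = 2      # 2-D motion trajectory from the scene perception module
INTER_DIM = 8     # human-scene interaction features

skeleton = rng.normal(size=(T, SKEL_DIM))
trajectory = rng.normal(size=(T, TRAJ_DIM))
interaction = rng.normal(size=(T, INTER_DIM))

def fuse(skeleton, trajectory, interaction):
    """Sketch of late fusion: check temporal alignment, then concatenate
    the three per-frame feature streams along the feature axis."""
    assert skeleton.shape[0] == trajectory.shape[0] == interaction.shape[0]
    return np.concatenate([skeleton, trajectory, interaction], axis=-1)

fused = fuse(skeleton, trajectory, interaction)
print(fused.shape)  # (30, 61): one fused feature vector per frame
```

A classifier head would then consume `fused` to predict the action label; the paper's framework replaces this plain concatenation with a learned multistage fusion.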
Source journal: IEEE Internet of Things Journal (Computer Science, Information Systems)
CiteScore: 17.60
Self-citation rate: 13.20%
Annual article count: 1,982
Journal description: The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impacts on sensor technologies, big data management, and future internet design for applications like smart cities and smart homes. Fields of interest include IoT architecture such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, and ETSI.