{"title":"PDC-HAR:利用双通道卷积神经网络通过多传感器可穿戴网络识别人类活动","authors":"Yvxuan Ren, Dandan Zhu, Kai Tong, Lulu Xv, Zhengtai Wang, Lixin Kang, Jinguo Chai","doi":"10.1016/j.pmcj.2023.101868","DOIUrl":null,"url":null,"abstract":"<div><p>Realizing human activity recognition is an important issue in pedestrian navigation and intelligent prosthetic control. Utilizing miniature multi-sensor wearable networks is a reliable method to improve the efficiency and convenience of the recognition system. Effective feature extraction and fusion of multimodal signals is a key issue in recognition. Therefore, this paper proposes an enhanced algorithm based on PCA sensor coupling analysis for data preprocessing. Subsequently, an innovative two-channel convolutional neural network with an SPF feature fusion layer as the core is built. The network fully analyzes the local and global features of multimodal signals using the local contrast and luminance properties of feature images. Compared with traditional methods, the model can reduce the data dimensionality and automatically identify and fuse the key information of the signals. In addition, most of the current mode recognition only supports simple actions such as walking and running, this paper constructs a database containing sixteen states by building a network with inertial sensors (IMU), curvature sensors (FLEX) and electromyography sensors (EMG). The experimental results show that the proposed system exhibits better results in complex action recognition and provides a new scheme for the realization of feature fusion and enhancement.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"97 ","pages":"Article 101868"},"PeriodicalIF":3.0000,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PDCHAR: Human activity recognition via multi-sensor wearable networks using two-channel convolutional neural networks\",\"authors\":\"Yvxuan Ren, Dandan Zhu, Kai Tong, Lulu Xv, Zhengtai Wang, Lixin Kang, Jinguo Chai\",\"doi\":\"10.1016/j.pmcj.2023.101868\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Realizing human activity recognition is an important issue in pedestrian navigation and intelligent prosthetic control. Utilizing miniature multi-sensor wearable networks is a reliable method to improve the efficiency and convenience of the recognition system. Effective feature extraction and fusion of multimodal signals is a key issue in recognition. Therefore, this paper proposes an enhanced algorithm based on PCA sensor coupling analysis for data preprocessing. Subsequently, an innovative two-channel convolutional neural network with an SPF feature fusion layer as the core is built. The network fully analyzes the local and global features of multimodal signals using the local contrast and luminance properties of feature images. Compared with traditional methods, the model can reduce the data dimensionality and automatically identify and fuse the key information of the signals. In addition, most of the current mode recognition only supports simple actions such as walking and running, this paper constructs a database containing sixteen states by building a network with inertial sensors (IMU), curvature sensors (FLEX) and electromyography sensors (EMG). 
The experimental results show that the proposed system exhibits better results in complex action recognition and provides a new scheme for the realization of feature fusion and enhancement.</p></div>\",\"PeriodicalId\":49005,\"journal\":{\"name\":\"Pervasive and Mobile Computing\",\"volume\":\"97 \",\"pages\":\"Article 101868\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pervasive and Mobile Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1574119223001268\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pervasive and Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1574119223001268","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
PDCHAR: Human activity recognition via multi-sensor wearable networks using two-channel convolutional neural networks
Human activity recognition is an important problem in pedestrian navigation and intelligent prosthetic control. Miniature multi-sensor wearable networks are a reliable way to improve the efficiency and convenience of recognition systems, and effective feature extraction and fusion of the resulting multimodal signals is a key issue in recognition. This paper therefore proposes an enhanced data-preprocessing algorithm based on PCA sensor-coupling analysis, and then builds a two-channel convolutional neural network whose core is an SPF feature-fusion layer. The network analyzes the local and global features of the multimodal signals by exploiting the local contrast and luminance properties of the feature images. Compared with traditional methods, the model reduces the data dimensionality and automatically identifies and fuses the key information in the signals. In addition, because most current activity-recognition methods support only simple actions such as walking and running, this paper constructs a database of sixteen states using a wearable network of inertial sensors (IMU), curvature sensors (FLEX), and electromyography sensors (EMG). Experimental results show that the proposed system performs better on complex action recognition and provides a new scheme for feature fusion and enhancement.
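The abstract only outlines the pipeline (PCA-based preprocessing of the multimodal sensor windows, followed by a two-channel convolutional network with a feature-fusion stage) and does not give layer sizes or the exact SPF fusion operation. The following is a minimal sketch of that kind of pipeline, not the authors' implementation: the window length, channel counts, PCA dimensionality, and the simple concatenation-based fusion used here are all assumptions for illustration.

# Illustrative sketch only: PCA preprocessing of multimodal sensor windows
# followed by a two-channel 1-D CNN with a simple fusion stage. Window length,
# channel counts, PCA dimension, and the fusion operator are assumptions;
# the paper's SPF fusion layer is not specified in the abstract.
import numpy as np
from sklearn.decomposition import PCA
import torch
import torch.nn as nn

WINDOW = 128        # samples per window (assumed)
RAW_CH = 14         # e.g. IMU + FLEX + EMG channels combined (assumed)
PCA_CH = 8          # reduced channel count after PCA (assumed)
N_CLASSES = 16      # sixteen activity states, as stated in the abstract

def pca_preprocess(windows: np.ndarray, n_components: int = PCA_CH) -> np.ndarray:
    """Reduce correlated sensor channels to n_components per time step.

    windows: (n_windows, WINDOW, RAW_CH) array of synchronized sensor data.
    Returns: (n_windows, WINDOW, n_components).
    """
    n, t, c = windows.shape
    pca = PCA(n_components=n_components)
    flat = windows.reshape(n * t, c)          # treat each time step as a sample
    reduced = pca.fit_transform(flat)
    return reduced.reshape(n, t, n_components)

class TwoChannelCNN(nn.Module):
    """Two parallel convolutional branches (small vs. large receptive fields,
    standing in for local vs. global features) concatenated before the classifier."""
    def __init__(self, in_ch: int = PCA_CH, n_classes: int = N_CLASSES):
        super().__init__()
        self.local_branch = nn.Sequential(      # small kernels: local detail
            nn.Conv1d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.global_branch = nn.Sequential(     # large kernels: global shape
            nn.Conv1d(in_ch, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, WINDOW, channels); Conv1d expects (batch, channels, WINDOW)
        x = x.transpose(1, 2)
        fused = torch.cat([self.local_branch(x).squeeze(-1),
                           self.global_branch(x).squeeze(-1)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    fake_windows = np.random.randn(32, WINDOW, RAW_CH).astype(np.float32)
    reduced = pca_preprocess(fake_windows)
    logits = TwoChannelCNN()(torch.from_numpy(reduced.astype(np.float32)))
    print(logits.shape)  # torch.Size([32, 16])

In this sketch the fusion is plain concatenation of the two branch outputs; the paper's SPF layer presumably performs a more elaborate, learned fusion of the local and global feature maps.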
Journal introduction:
As envisioned by Mark Weiser as early as 1991, pervasive computing systems and services have truly become integral parts of our daily lives. Tremendous developments in a multitude of technologies ranging from personalized and embedded smart devices (e.g., smartphones, sensors, wearables, IoT devices, etc.) to ubiquitous connectivity, via a variety of wireless mobile communications and cognitive networking infrastructures, to advanced computing techniques (including edge, fog and cloud) and user-friendly middleware services and platforms have significantly contributed to the unprecedented advances in pervasive and mobile computing. Cutting-edge applications and paradigms have evolved, such as cyber-physical systems and smart environments (e.g., smart city, smart energy, smart transportation, smart healthcare, etc.) that also involve humans in the loop through social interactions and participatory and/or mobile crowd sensing, for example. The goal of pervasive computing systems is to improve human experience and quality of life, without explicit awareness of the underlying communications and computing technologies.
The Pervasive and Mobile Computing Journal (PMC) is a high-impact, peer-reviewed technical journal that publishes high-quality scientific articles spanning theory and practice, and covering all aspects of pervasive and mobile computing and systems.