Deep Learning for Personal Activity Recognition Under More Complex and Different Placement Positions of Smart Phone

IF 0.7 Q3 COMPUTER SCIENCE, THEORY & METHODS
Bhagya Rekha Sangisetti, Suresh Pabboju
DOI: 10.14569/ijacsa.2023.0140639 (https://doi.org/10.14569/ijacsa.2023.0140639)
Journal: International Journal of Advanced Computer Science and Applications
Publication date: 2023-01-01
Publication type: Journal Article
Citations: 0

Abstract

Personal Activity Recognition (PAR) is an indispensable research area, widely used in applications such as security, healthcare, gaming, surveillance, and remote patient monitoring. With sensors built into smartphones, data collection for PAR has become easy. However, PAR remains a non-trivial and difficult task because of the bulk of data to be processed, the complexity of the actions, and the varying sensor placement positions. Deep learning has proved scalable and efficient for processing such data. The main problem with existing solutions, however, is that they can recognize only 6 to 8 actions; they struggle to recognize further actions accurately and to cope with complex actions and different smartphone placement positions. To address this problem, this paper proposes a framework named Robust Deep Personal Action Recognition Framework (RDPARF), based on an enhanced Convolutional Neural Network (CNN) model trained to recognize 12 actions. RDPARF is realized with our proposed algorithm, Enhanced CNN for Robust Personal Activity Recognition (ECNN-RPAR). The algorithm includes an early-stopping checkpoint to reduce resource consumption and achieve faster convergence. Experiments were conducted on the MHealth benchmark dataset from the UCI repository. Our empirical results show that ECNN-RPAR can recognize 12 actions under more complex conditions and different smartphone placement positions, outperforming the state of the art with the highest accuracy of 96.25%.

Keywords: Human activity recognition; deep learning; CNN; MHealth dataset; artificial intelligence
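The one algorithmic detail the abstract gives about ECNN-RPAR is its early-stopping checkpoint, which halts training once validation loss stops improving and keeps the best weights. The paper does not publish its implementation, so the sketch below is a hypothetical, framework-agnostic illustration of that mechanism; the class name and parameters (`patience`, `min_delta`) are illustrative, not taken from the paper.

```python
class EarlyStoppingCheckpoint:
    """Illustrative early-stopping checkpoint: stop training when the
    validation loss has not improved by at least `min_delta` for
    `patience` consecutive epochs, remembering the best weights seen."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.counter = 0          # epochs since last improvement
        self.best_state = None    # checkpointed model weights

    def step(self, val_loss, model_state=None):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.best_state = model_state  # checkpoint the best weights
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience


# Usage sketch: a loss curve that plateaus after epoch 3 triggers the stop.
stopper = EarlyStoppingCheckpoint(patience=3)
for epoch, loss in enumerate([1.0, 0.8, 0.7, 0.7, 0.7, 0.7]):
    if stopper.step(loss):
        break  # training halts; stopper.best_state holds the checkpoint
```

In a real training loop, `model_state` would be a copy of the network weights (e.g. a deep-copied state dict), and the checkpointed state would be restored after the loop; this is the standard way such a checkpoint trims wasted epochs, which matches the abstract's claim of lower resource consumption and faster convergence.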
Source journal metrics
CiteScore: 2.30
Self-citation rate: 22.20%
Articles published: 519
About the journal: IJACSA is a scholarly computer science journal representing the best in research. Its mission is to provide an outlet for quality research to be publicised and published to a global audience. The journal aims to publish papers selected through rigorous double-blind peer review to ensure originality, timeliness, relevance, and readability. In sync with the Journal's vision "to be a respected publication that publishes peer reviewed research articles, as well as review and survey papers contributed by International community of Authors", we have drawn reviewers and editors from institutions and universities across the globe. A double-blind peer review process is conducted to ensure that we retain high standards. At IJACSA, we stand strong because we know that global challenges make way for new innovations, new ways and new talent. International Journal of Advanced Computer Science and Applications publishes carefully refereed research, review and survey papers which offer a significant contribution to the computer science literature, and which are of interest to a wide audience. Coverage extends to all mainstream branches of computer science and related applications.