{"title":"Deep Learning for Personal Activity Recognition Under More Complex and Different Placement Positions of Smart Phone","authors":"Bhagya Rekha Sangisetti, Suresh Pabboju","doi":"10.14569/ijacsa.2023.0140639","DOIUrl":null,"url":null,"abstract":"Personal Activity Recognition (PAR) is an indispensable research area, widely used in applications such as security, healthcare, gaming, surveillance, and remote patient monitoring. With sensors built into smartphones, data collection for PAR has become easy. However, PAR remains a non-trivial and difficult task due to the volume of data to be processed, the complexity of the activities, and the varying sensor placement positions. Deep learning has been found to be scalable and efficient in processing such data. However, the main problem with existing solutions is that they can recognize only up to 6 or 8 actions. Moreover, they struggle to accurately recognize other actions and to cope with complex activities and different smartphone placement positions. To address this problem, in this paper we propose a framework named the Robust Deep Personal Action Recognition Framework (RDPARF), based on an enhanced Convolutional Neural Network (CNN) model trained to recognize 12 actions. RDPARF is realized with our proposed algorithm, Enhanced CNN for Robust Personal Activity Recognition (ECNN-RPAR). The algorithm includes an early-stopping checkpoint to reduce resource consumption and achieve faster convergence. Experiments were conducted on the MHealth benchmark dataset from the UCI repository. Our empirical results reveal that ECNN-RPAR can recognize 12 actions under more complex conditions and different smartphone placement positions, while outperforming the state of the art with the highest accuracy of 96.25%. 
Keywords—Human activity recognition; deep learning; CNN; MHealth dataset; artificial intelligence","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Advanced Computer Science and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14569/ijacsa.2023.0140639","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citation count: 0
Abstract
Personal Activity Recognition (PAR) is an indispensable research area, widely used in applications such as security, healthcare, gaming, surveillance, and remote patient monitoring. With sensors built into smartphones, data collection for PAR has become easy. However, PAR remains a non-trivial and difficult task due to the volume of data to be processed, the complexity of the activities, and the varying sensor placement positions. Deep learning has been found to be scalable and efficient in processing such data. However, the main problem with existing solutions is that they can recognize only up to 6 or 8 actions. Moreover, they struggle to accurately recognize other actions and to cope with complex activities and different smartphone placement positions. To address this problem, in this paper we propose a framework named the Robust Deep Personal Action Recognition Framework (RDPARF), based on an enhanced Convolutional Neural Network (CNN) model trained to recognize 12 actions. RDPARF is realized with our proposed algorithm, Enhanced CNN for Robust Personal Activity Recognition (ECNN-RPAR). The algorithm includes an early-stopping checkpoint to reduce resource consumption and achieve faster convergence. Experiments were conducted on the MHealth benchmark dataset from the UCI repository. Our empirical results reveal that ECNN-RPAR can recognize 12 actions under more complex conditions and different smartphone placement positions, while outperforming the state of the art with the highest accuracy of 96.25%.
Keywords—Human activity recognition; deep learning; CNN; MHealth dataset; artificial intelligence
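The abstract does not detail ECNN-RPAR's internals, but the early-stopping checkpoint it mentions is a standard training pattern: keep a checkpoint of the best weights seen on a validation set, and stop once validation loss has not improved for a fixed number of epochs. A minimal, framework-agnostic sketch of that pattern (all class and parameter names here are hypothetical, not taken from the paper) might look like:

```python
class EarlyStopping:
    """Stop training when validation loss stops improving.

    Tracks the best validation loss seen so far; after `patience`
    epochs without an improvement of at least `min_delta`, signals
    that training should stop, while keeping a checkpoint of the
    best model state.
    """

    def __init__(self, patience=5, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.best_state = None   # checkpoint of the best weights
        self.bad_epochs = 0      # epochs since last improvement

    def step(self, val_loss, model_state):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.best_state = model_state  # save the checkpoint
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Usage with a made-up loss curve that improves, then plateaus:
stopper = EarlyStopping(patience=3)
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]
for epoch, loss in enumerate(losses):
    if stopper.step(loss, model_state={"epoch": epoch}):
        print(f"stopping at epoch {epoch}, best loss {stopper.best_loss}")
        break
```

In a real training loop, `model_state` would be the network's weights (e.g. a state dict) and `val_loss` would come from evaluating on a held-out split after each epoch; restoring `best_state` at the end recovers the checkpointed model, which is how early stopping reduces both overfitting and wasted compute.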
About the journal:
IJACSA is a scholarly computer science journal representing the best in research. Its mission is to provide an outlet for quality research to be publicised and published to a global audience. The journal aims to publish papers selected through rigorous double-blind peer review to ensure originality, timeliness, relevance, and readability. In sync with the Journal's vision "to be a respected publication that publishes peer reviewed research articles, as well as review and survey papers contributed by International community of Authors", we have drawn reviewers and editors from institutions and universities across the globe. A double-blind peer review process is conducted to ensure that we retain high standards. At IJACSA, we stand strong because we know that global challenges make way for new innovations, new ways and new talent. International Journal of Advanced Computer Science and Applications publishes carefully refereed research, review and survey papers which offer a significant contribution to the computer science literature, and which are of interest to a wide audience. Coverage extends to all mainstream branches of computer science and related applications.