Edge Intelligence: A Deep Distilled Model for Wearables to Enable Proactive Eldercare
Muhammad Fahim; S. M. Ahsan Kazmi; Vishal Sharma; Hyundong Shin; Trung Q. Duong
IEEE Transactions on Artificial Intelligence, vol. 6, no. 7, pp. 1736-1745, published 2025-01-09. DOI: 10.1109/TAI.2025.3527400. https://ieeexplore.ieee.org/document/10835164/
Abstract
Wearable devices are becoming affordable and provide services ranging from simple fitness tracking to the detection of heartbeat disorders. For elderly populations, these devices have great potential to enable proactive eldercare, which can extend the years of independent living. Wearables capture healthcare data continuously, and deep learning models are well suited to processing this data for robust, meaningful insight. A major challenge is deploying these models on edge devices such as smartphones and wearables, where the bottleneck is the large number of parameters and compute-intensive operations. In this research, we propose a novel knowledge distillation (KD) scheme built on a self-revision concept. The scheme effectively reduces model size and transfers knowledge from a deep model to a distilled model by filling learning gaps during training. To evaluate the distilled model, we use the publicly available “growing old together validation (GOTOV)” dataset, which is based on medical-grade wearables for monitoring behavioral changes in the elderly. Our proposed model reduces 0.7 million parameters to 1500, enabling edge intelligence, and achieves a 6% improvement in precision, a 9% increase in recall, and a 9% higher F1-score compared to the shallow model for recognizing elderly behavior.
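The abstract only names the distillation approach, so below is a minimal, generic teacher-student knowledge-distillation sketch in PyTorch, not the paper's self-revision scheme (whose gap-filling mechanism is not detailed here). The input width (64 features), number of activity classes (6), both architectures, and the synthetic data are assumptions chosen for illustration so that the student lands near the ~1500-parameter scale reported above.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical teacher: a deep network with several hundred thousand parameters,
# standing in for the paper's ~0.7M-parameter model (exact architecture not given).
teacher = nn.Sequential(
    nn.Linear(64, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 6),
)

# Hypothetical distilled student: roughly 1.4k parameters, close to the
# ~1500-parameter scale the abstract reports for the compressed model.
student = nn.Sequential(
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 6),
)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend softened teacher targets (KL divergence, scaled by T^2) with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Synthetic stand-in for GOTOV-style sensor windows: 64 features, 6 activity
# classes (both dimensions are assumptions, not taken from the dataset).
x = torch.randn(32, 64)
y = torch.randint(0, 6, (32,))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
teacher.eval()
for step in range(100):
    with torch.no_grad():
        t_logits = teacher(x)          # frozen teacher provides soft targets
    s_logits = student(x)
    loss = kd_loss(s_logits, t_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

n_params = sum(p.numel() for p in student.parameters())
print(f"student parameters: {n_params}, final loss: {loss.item():.3f}")

The temperature T and mixing weight alpha are the usual KD hyperparameters; the paper's self-revision step would change how the teacher's knowledge is transferred during training, which this sketch does not attempt to reproduce.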