{"title":"在移动健康应用中缓解智能手机用户头部前倾姿势的可解释性","authors":"Richard O. Oyeleke, Babafemi G. Sorinolu","doi":"10.1109/HealthCom54947.2022.9982740","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With increasing computing power of mobile devices, mobile health (mHealth) applications are embedded with ML models to learn users behavior and influence positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, nonetheless, it is necessary that their inferences and recommendations are also explainable. Explainability can promote users’ trust, particularly when ML algorithms are deployed in high-stake domains such as healthcare. In this study, first, we present our proposed situation-aware mobile application called Smarttens coach app that we developed to assist smartphone users in mitigating forward head posture. It embeds an efficientNet CNN model to predict forward head posture in smartphone users by analyzing head posture images of the users. Our Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, accuracy score alone does not tell users the whole story about how Smarttens coach app draws its inference on predicted posture binary class. This lack of explanation to justify the predicted posture class label could negatively impact users’ trust in the efficacy of the app. Therefore, we further validated our Smarttens coach app posture prediction efficacy by leveraging an explainable AI (XAI) framework called LIME to generate visual explanations for users’ predicted head posture class label.","PeriodicalId":202664,"journal":{"name":"2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards Explainability in mHealth Application for Mitigation of Forward Head Posture in Smartphone Users\",\"authors\":\"Richard O. Oyeleke, Babafemi G. Sorinolu\",\"doi\":\"10.1109/HealthCom54947.2022.9982740\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With increasing computing power of mobile devices, mobile health (mHealth) applications are embedded with ML models to learn users behavior and influence positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, nonetheless, it is necessary that their inferences and recommendations are also explainable. Explainability can promote users’ trust, particularly when ML algorithms are deployed in high-stake domains such as healthcare. In this study, first, we present our proposed situation-aware mobile application called Smarttens coach app that we developed to assist smartphone users in mitigating forward head posture. It embeds an efficientNet CNN model to predict forward head posture in smartphone users by analyzing head posture images of the users. Our Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, accuracy score alone does not tell users the whole story about how Smarttens coach app draws its inference on predicted posture binary class. This lack of explanation to justify the predicted posture class label could negatively impact users’ trust in the efficacy of the app. 
Therefore, we further validated our Smarttens coach app posture prediction efficacy by leveraging an explainable AI (XAI) framework called LIME to generate visual explanations for users’ predicted head posture class label.\",\"PeriodicalId\":202664,\"journal\":{\"name\":\"2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HealthCom54947.2022.9982740\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HealthCom54947.2022.9982740","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards Explainability in mHealth Application for Mitigation of Forward Head Posture in Smartphone Users
Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With the increasing computing power of mobile devices, mobile health (mHealth) applications embed ML models to learn users' behavior and encourage positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, it is also necessary that their inferences and recommendations be explainable. Explainability can promote users' trust, particularly when ML algorithms are deployed in high-stakes domains such as healthcare. In this study, we first present Smarttens coach app, a situation-aware mobile application we developed to assist smartphone users in mitigating forward head posture. It embeds an EfficientNet CNN model that predicts forward head posture by analyzing images of users' head posture. Our Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, an accuracy score alone does not tell users the whole story about how the Smarttens coach app arrives at its predicted binary posture class, and this lack of justification for the predicted posture class label could undermine users' trust in the efficacy of the app. Therefore, we further validated the posture-prediction efficacy of our Smarttens coach app by leveraging an explainable AI (XAI) framework, LIME, to generate visual explanations for each user's predicted head-posture class label.
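The abstract does not include implementation details, but the pipeline it describes (an EfficientNet image classifier whose predictions are explained with LIME) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' code: the model file name, input image file, input resolution, class count, and sample counts are all hypothetical placeholders.

```python
# Minimal sketch: a LIME visual explanation for a binary head-posture
# classifier. File names, resolution, and sample counts are assumptions.
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical EfficientNet-based binary classifier (forward vs. neutral
# posture), assumed to output two-class softmax probabilities.
model = tf.keras.models.load_model("posture_model.h5")  # assumed file name
IMG_SIZE = 224  # EfficientNetB0's default input resolution

def predict_fn(images):
    """LIME passes a batch of perturbed images; return class probabilities."""
    batch = tf.image.resize(images, (IMG_SIZE, IMG_SIZE))
    batch = tf.keras.applications.efficientnet.preprocess_input(batch)
    return model.predict(np.asarray(batch), verbose=0)

# Explain one user head-posture image (H x W x 3, uint8); file name assumed.
image = np.asarray(tf.keras.utils.load_img("head_posture.jpg"))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=2,      # binary task: forward vs. neutral posture
    hide_color=0,      # perturbed superpixels are blacked out
    num_samples=1000,  # number of perturbed samples LIME evaluates
)

# Overlay the superpixels that most support the predicted class; this is
# the kind of visual explanation the abstract refers to.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
overlay = mark_boundaries(temp / 255.0, mask)
```

The key design point of LIME in this setting is that it is model-agnostic: it treats the CNN as a black box, perturbs superpixels of the input image, and fits a local linear surrogate to identify which image regions most influenced the predicted posture label.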