Towards Explainability in mHealth Application for Mitigation of Forward Head Posture in Smartphone Users

Richard O. Oyeleke, Babafemi G. Sorinolu
{"title":"Towards Explainability in mHealth Application for Mitigation of Forward Head Posture in Smartphone Users","authors":"Richard O. Oyeleke, Babafemi G. Sorinolu","doi":"10.1109/HealthCom54947.2022.9982740","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With increasing computing power of mobile devices, mobile health (mHealth) applications are embedded with ML models to learn users behavior and influence positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, nonetheless, it is necessary that their inferences and recommendations are also explainable. Explainability can promote users’ trust, particularly when ML algorithms are deployed in high-stake domains such as healthcare. In this study, first, we present our proposed situation-aware mobile application called Smarttens coach app that we developed to assist smartphone users in mitigating forward head posture. It embeds an efficientNet CNN model to predict forward head posture in smartphone users by analyzing head posture images of the users. Our Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, accuracy score alone does not tell users the whole story about how Smarttens coach app draws its inference on predicted posture binary class. This lack of explanation to justify the predicted posture class label could negatively impact users’ trust in the efficacy of the app. Therefore, we further validated our Smarttens coach app posture prediction efficacy by leveraging an explainable AI (XAI) framework called LIME to generate visual explanations for users’ predicted head posture class label.","PeriodicalId":202664,"journal":{"name":"2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HealthCom54947.2022.9982740","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Machine learning (ML) algorithms have recorded tremendous successes in many areas, notably healthcare. With the increasing computing power of mobile devices, mobile health (mHealth) applications embed ML models to learn users' behavior and influence positive lifestyle changes. Although ML algorithms have shown impressive predictive power over the years, it is necessary that their inferences and recommendations also be explainable. Explainability can promote users' trust, particularly when ML algorithms are deployed in high-stakes domains such as healthcare. In this study, we first present the Smarttens coach app, a situation-aware mobile application that we developed to assist smartphone users in mitigating forward head posture. It embeds an EfficientNet CNN model that predicts forward head posture in smartphone users by analyzing images of their head posture. The Smarttens coach app achieved a state-of-the-art accuracy score of 0.99. However, an accuracy score alone does not tell users how the app arrives at its binary posture prediction, and this lack of justification for the predicted class label could undermine users' trust in the app's efficacy. Therefore, we further validated the posture prediction efficacy of the Smarttens coach app by leveraging an explainable AI (XAI) framework called LIME to generate visual explanations for users' predicted head posture class labels.
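The abstract does not include implementation details, but the pipeline it describes (an EfficientNet image classifier whose predictions are explained with LIME) follows a well-known pattern. The sketch below illustrates one way such a pairing can be wired up in Python. The model variant (EfficientNetB0), input resolution, class labels, and all function names are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch (not the authors' code): an EfficientNet-backed binary
# head-posture classifier explained with LIME's image explainer.
# Requires: tensorflow, lime, scikit-image, numpy.
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

IMG_SIZE = 224  # assumed input resolution


def build_posture_model() -> tf.keras.Model:
    """EfficientNetB0 backbone with a single sigmoid unit for the
    assumed binary label: forward head posture vs. neutral posture."""
    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet",
        input_shape=(IMG_SIZE, IMG_SIZE, 3), pooling="avg")
    x = tf.keras.layers.Dropout(0.2)(backbone.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(backbone.input, out)


model = build_posture_model()  # in practice, load fine-tuned weights here


def classifier_fn(images: np.ndarray) -> np.ndarray:
    """LIME expects one probability column per class, so expand the
    sigmoid output into [P(neutral), P(forward)] columns."""
    p_forward = model.predict(images, verbose=0).reshape(-1, 1)
    return np.hstack([1.0 - p_forward, p_forward])


# Placeholder standing in for one preprocessed user head-posture image.
head_image = np.random.rand(IMG_SIZE, IMG_SIZE, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    head_image, classifier_fn, top_labels=2,
    hide_color=0, num_samples=1000)

# Keep only the superpixels that most support the predicted label.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img, mask)  # region overlay to display in-app
```

The resulting overlay marks the image regions that pushed the classifier toward its decision, which is the kind of visual, per-prediction explanation the abstract says the app surfaces to users.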