Controlled and Real-Life Investigation of Optical Tracking Sensors in Smart Glasses for Monitoring Eating Behavior Using Deep Learning: Cross-Sectional Study.

IF 5.4 | CAS Zone 2 (Medicine) | Q1 HEALTH CARE SCIENCES & SERVICES
Simon Stankoski, Ivana Kiprijanovska, Martin Gjoreski, Filip Panchevski, Borjan Sazdov, Bojan Sofronievski, Andrew Cleal, Mohsen Fatoorechi, Charles Nduka, Hristijan Gjoreski
{"title":"Controlled and Real-Life Investigation of Optical Tracking Sensors in Smart Glasses for Monitoring Eating Behavior Using Deep Learning: Cross-Sectional Study.","authors":"Simon Stankoski, Ivana Kiprijanovska, Martin Gjoreski, Filip Panchevski, Borjan Sazdov, Bojan Sofronievski, Andrew Cleal, Mohsen Fatoorechi, Charles Nduka, Hristijan Gjoreski","doi":"10.2196/59469","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The increasing prevalence of obesity necessitates innovative approaches to better understand this health crisis, particularly given its strong connection to chronic diseases such as diabetes, cancer, and cardiovascular conditions. Monitoring dietary behavior is crucial for designing effective interventions that help decrease obesity prevalence and promote healthy lifestyles. However, traditional dietary tracking methods are limited by participant burden and recall bias. Exploring microlevel eating activities, such as meal duration and chewing frequency, in addition to eating episodes, is crucial due to their substantial relation to obesity and disease risk.</p><p><strong>Objective: </strong>The primary objective of the study was to develop an accurate and noninvasive system for automatically monitoring eating and chewing activities using sensor-equipped smart glasses. The system distinguishes chewing from other facial activities, such as speaking and teeth clenching. The secondary objective was to evaluate the system's performance on unseen test users using a combination of laboratory-controlled and real-life user studies. Unlike state-of-the-art studies that focus on detecting full eating episodes, our approach provides a more granular analysis by specifically detecting chewing segments within each eating episode.</p><p><strong>Methods: </strong>The study uses OCO optical sensors embedded in smart glasses to monitor facial muscle activations related to eating and chewing activities. The sensors measure relative movements on the skin's surface in 2 dimensions (X and Y). Data from these sensors are analyzed using deep learning (DL) to distinguish chewing from other facial activities. To address the temporal dependence between chewing events in real life, we integrate a hidden Markov model as an additional component that analyzes the output from the DL model.</p><p><strong>Results: </strong>Statistical tests of mean sensor activations revealed statistically significant differences across all 6 comparison pairs (P<.001) involving 2 sensors (cheeks and temple) and 3 facial activities (eating, clenching, and speaking). These results demonstrate the sensitivity of the sensor data. Furthermore, the convolutional long short-term memory model, which is a combination of convolutional and long short-term memory neural networks, emerged as the best-performing DL model for chewing detection. In controlled laboratory settings, the model achieved an F<sub>1</sub>-score of 0.91, demonstrating robust performance. In real-life scenarios, the system demonstrated high precision (0.95) and recall (0.82) for detecting eating segments. The chewing rates and the number of chews evaluated in the real-life study showed consistency with expected real-life eating behaviors.</p><p><strong>Conclusions: </strong>The study represents a substantial advancement in dietary monitoring and health technology. By providing a reliable and noninvasive method for tracking eating behavior, it has the potential to revolutionize how dietary data are collected and used. 
This could lead to more effective health interventions and a better understanding of the factors influencing eating habits and their health implications.</p>","PeriodicalId":14756,"journal":{"name":"JMIR mHealth and uHealth","volume":"12 ","pages":"e59469"},"PeriodicalIF":5.4000,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11467608/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR mHealth and uHealth","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/59469","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Background: The increasing prevalence of obesity necessitates innovative approaches to better understand this health crisis, particularly given its strong connection to chronic diseases such as diabetes, cancer, and cardiovascular conditions. Monitoring dietary behavior is crucial for designing effective interventions that help decrease obesity prevalence and promote healthy lifestyles. However, traditional dietary tracking methods are limited by participant burden and recall bias. Exploring microlevel eating activities, such as meal duration and chewing frequency, in addition to eating episodes, is crucial due to their substantial relation to obesity and disease risk.

Objective: The primary objective of the study was to develop an accurate and noninvasive system for automatically monitoring eating and chewing activities using sensor-equipped smart glasses. The system distinguishes chewing from other facial activities, such as speaking and teeth clenching. The secondary objective was to evaluate the system's performance on unseen test users using a combination of laboratory-controlled and real-life user studies. Unlike state-of-the-art studies that focus on detecting full eating episodes, our approach provides a more granular analysis by specifically detecting chewing segments within each eating episode.

Methods: The study uses OCO optical sensors embedded in smart glasses to monitor facial muscle activations related to eating and chewing activities. The sensors measure relative movements on the skin's surface in 2 dimensions (X and Y). Data from these sensors are analyzed using deep learning (DL) to distinguish chewing from other facial activities. To address the temporal dependence between chewing events in real life, we integrate a hidden Markov model as an additional component that analyzes the output from the DL model.
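The abstract does not specify how the hidden Markov model is parameterized or decoded. Below is a minimal sketch (Python), assuming a two-state chewing/non-chewing HMM whose emissions are the per-window chewing probabilities produced by the DL model and whose strong self-transitions encode the temporal dependence between consecutive chewing windows; all numeric values are illustrative, not the paper's.

```python
import numpy as np

def viterbi_smooth(p_chew, p_stay=0.9):
    """Smooth per-window chewing probabilities from a DL model with a
    two-state HMM (0 = not chewing, 1 = chewing) via Viterbi decoding.

    p_chew : array of shape (T,), the DL model's P(chewing) per window.
    p_stay : probability of remaining in the current state between windows
             (hypothetical value; encodes that chewing persists over time).
    """
    T = len(p_chew)
    # Emission log-likelihoods for the two states at each window.
    emit = np.log(np.stack([1.0 - p_chew, p_chew], axis=1) + 1e-12)
    # Transition log-probabilities: strong self-transitions model the
    # temporal dependence between consecutive chewing windows.
    trans = np.log(np.array([[p_stay, 1 - p_stay],
                             [1 - p_stay, p_stay]]))
    delta = np.full((T, 2), -np.inf)   # best log-score ending in each state
    psi = np.zeros((T, 2), dtype=int)  # back-pointers
    delta[0] = np.log([0.5, 0.5]) + emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + trans       # (prev_state, state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(2)] + emit[t]
    # Backtrack to recover the most likely state sequence.
    states = np.zeros(T, dtype=int)
    states[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states

# Example: per-window P(chewing) from the DL model over nine windows.
probs = np.array([0.1, 0.8, 0.1, 0.1, 0.9, 0.8, 0.9, 0.2, 0.1])
# With strong self-transitions, isolated spikes and dips tend to be
# smoothed into temporally coherent chewing segments.
print(viterbi_smooth(probs))
```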

Results: Statistical tests of mean sensor activations revealed statistically significant differences across all 6 comparison pairs (P<.001) involving 2 sensors (cheeks and temple) and 3 facial activities (eating, clenching, and speaking). These results demonstrate the sensitivity of the sensor data. Furthermore, the convolutional long short-term memory model, which is a combination of convolutional and long short-term memory neural networks, emerged as the best-performing DL model for chewing detection. In controlled laboratory settings, the model achieved an F1-score of 0.91, demonstrating robust performance. In real-life scenarios, the system demonstrated high precision (0.95) and recall (0.82) for detecting eating segments. The chewing rates and the number of chews evaluated in the real-life study showed consistency with expected real-life eating behaviors.
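For illustration only, a compact CNN-LSTM of the kind the abstract describes could be sketched as follows (TensorFlow/Keras). The window length, channel count (assumed here as 2 sensors × X/Y axes), and layer sizes are assumptions, since the abstract does not give the published architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical framing: 5 s windows at 50 Hz -> 250 samples per window,
# 4 channels (X/Y displacement from the cheek and temple OCO sensors).
WINDOW_LEN, N_CHANNELS = 250, 4

def build_conv_lstm():
    """A CNN-LSTM binary classifier: Conv1D layers learn short-range
    motion patterns, and the LSTM models their order within the window."""
    return tf.keras.Sequential([
        layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),                       # summarizes the window sequence
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid")  # P(chewing) for the window
    ])

model = build_conv_lstm()
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```

As a plausibility check on the reported numbers, a real-life precision of 0.95 and recall of 0.82 imply F1 = 2PR/(P+R) = 2(0.95)(0.82)/(0.95+0.82) ≈ 0.88, broadly in line with the 0.91 F1-score achieved under controlled conditions.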

Conclusions: The study represents a substantial advancement in dietary monitoring and health technology. By providing a reliable and noninvasive method for tracking eating behavior, it has the potential to revolutionize how dietary data are collected and used. This could lead to more effective health interventions and a better understanding of the factors influencing eating habits and their health implications.

Source journal: JMIR mHealth and uHealth (Medicine - Health Informatics)
CiteScore: 12.60
Self-citation rate: 4.00%
Annual publications: 159
Time to review: 10 weeks
Journal description: JMIR mHealth and uHealth (JMU, ISSN 2291-5222) is a spin-off journal of JMIR, the leading eHealth journal (2016 Impact Factor: 5.175). JMIR mHealth and uHealth is indexed in PubMed, PubMed Central, and Science Citation Index Expanded (SCIE), and in June 2017 received an inaugural Impact Factor of 4.636. The journal focuses on health and biomedical applications in mobile and tablet computing, pervasive and ubiquitous computing, wearable computing, and domotics. JMIR mHealth and uHealth has published since 2013 and was the first mHealth journal in PubMed. It publishes faster and has a broader scope than the Journal of Medical Internet Research, including papers that are more technical or more formative/developmental.