Fluid Intake Action Detection Based on Egocentric Videos and YOLOv8 Models.

Impact Factor 6.7 · JCR Q1 (Computer Science, Information Systems) · CAS Region 2 (Medicine)
Xin Chen, Xinqi Bao, Ernest Kamavuako
DOI: 10.1109/JBHI.2025.3548512
Published in IEEE Journal of Biomedical and Health Informatics, 2025-03-06.
Citations: 0

Abstract

Dehydration in older adults poses significant health risks, requiring effective monitoring solutions. This study addresses the challenge of accurately detecting fluid intake using a first-person, vision-based approach with wearable cameras and advanced object detection models. We developed a comprehensive dataset comprising 17 hours of drinking footage (∼3100 events) and 15 hours of non-drinking activities (∼3600 events) recorded as interference, from 36 participants, collected between October 2022 and January 2023 at King's College London. We include various container types and daily activities to enhance the model's robustness and generalizability. YOLOv8 models were used to detect drinking-related objects, and a mechanism was developed to analyse the size and position of the detection output to identify hand-container interactions and movements. The models achieved mAP@50 over 0.97 and F1-score over 0.95 in detecting drinking-related objects. Action detection testing results from video streams demonstrated an F1-score of 0.917, which dropped to 0.863 when interference activities were added. Additionally, the model detected the start of drinking activities with an average latency of 0.24 seconds and the end with 0.04 seconds, indicating high temporal accuracy. These results demonstrate the feasibility of egocentric, vision-based fluid-intake detection and its potential application in preventing dehydration. To our knowledge, this is the first vision-based dataset focusing on fluid-intake actions from a first-person viewpoint, offering a novel foundation for advancing hydration monitoring in older adults and various real-world contexts.
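The abstract describes a post-detection mechanism that analyses the size and position of YOLOv8 bounding boxes to identify hand-container interactions and movements, but the paper's exact rules are not reproduced here. The sketch below illustrates one plausible version of that idea: flag an interaction when hand and container boxes overlap, then mark a drinking frame when the interacting container rises in the image. All function names and thresholds (`iou_thresh`, `lift_thresh`) are illustrative assumptions, not the published implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def interacting(hand_box, container_box, iou_thresh=0.1):
    """Hypothetical rule: a hand-container interaction is an overlap
    between the two detection boxes above a small IoU threshold."""
    return iou(hand_box, container_box) >= iou_thresh

def drinking_frames(frames, lift_thresh=40):
    """For a stream of (hand_box, container_box) detections per frame
    (None when the object is not detected), mark frames where an
    interacting container has been lifted: its top edge rises more
    than lift_thresh pixels above its height in the first frame of
    the current interaction (image y grows downward)."""
    baseline = None   # container top-edge y at interaction onset
    out = []
    for hand, cont in frames:
        if hand is not None and cont is not None and interacting(hand, cont):
            if baseline is None:
                baseline = cont[1]
            out.append(baseline - cont[1] > lift_thresh)
        else:
            baseline = None   # interaction ended; reset
            out.append(False)
    return out
```

A real system would feed this per-frame logic from the YOLOv8 detector output and smooth the boolean stream over time before declaring event start/end, which is presumably how the reported 0.24 s / 0.04 s onset and offset latencies were measured.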

Source journal: IEEE Journal of Biomedical and Health Informatics
Categories: Computer Science, Information Systems; Computer Science, Interdisciplinary Applications
CiteScore: 13.60
Self-citation rate: 6.50%
Annual publications: 1151
Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.