A survey for wearable sensors empowering smart healthcare in the era of large language models
Yinghao Liu, Jian Li, Chao Lian, Xinyue Zhang, Junhui Gong, Yufan Wang, Yuliang Zhao
Information Fusion, Volume 124, Article 103409 (published 2025-06-21)
DOI: 10.1016/j.inffus.2025.103409
https://www.sciencedirect.com/science/article/pii/S1566253525004828
Citations: 0
Abstract
Large language models (LLMs) have made significant advances in biomedical applications, including medical literature analysis and clinical note summarization. Meanwhile, intelligent wearable sensors have become essential tools for joint motion analysis and disease diagnosis with their high sensitivity, real-time monitoring capabilities, and diverse application scenarios. However, effectively integrating LLMs with wearable sensors to achieve in-depth motion data analysis and intelligent health management remains a major research challenge. Traditional studies have often treated joint motion analysis and disease diagnosis as separate domains. This review provides a comprehensive analysis of wearable sensor classifications, data fusion algorithms, and their representative applications in human posture recognition and disease diagnosis, while further exploring the potential of LLMs in enhancing wearable sensor capabilities. The incorporation of LLMs offers the potential to uncover complex relationships between movement patterns and disease progression, facilitating more accurate health assessments and early interventions. In addressing the challenges associated with multi-source sensor data fusion and real-time processing, LLMs, with their powerful feature extraction and cross-modal learning capabilities, are expected to improve data processing efficiency and enable more intelligent real-time diagnostics and decision support. Additionally, energy consumption and computational load remain critical bottlenecks limiting the long-term deployment of wearable devices. Integrating self-powered sensors presents a promising avenue for enhancing data processing efficiency. This review summarizes key challenges in current technological developments and envisions the future convergence of LLMs and wearable sensors, aiming to drive the advancement of intelligent healthcare and health monitoring.
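To make the data-fusion idea in the abstract concrete, the following is a minimal illustrative sketch (not taken from the surveyed work) of feature-level fusion of multi-source wearable sensor streams, rendered as text that an LLM could take as input for downstream assessment. The sensor names, window statistics, and prompt wording are all hypothetical assumptions chosen for the example.

```python
# Illustrative sketch only: feature-level fusion of simulated wearable sensor
# streams into a compact textual summary for an LLM. All sensor names and
# thresholds are assumptions, not values from the surveyed paper.
import numpy as np


def window_features(signal: np.ndarray) -> dict:
    """Summarize one sensor channel over a time window with simple statistics."""
    return {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "range": float(np.ptp(signal)),
    }


def fuse_sensors(streams: dict[str, np.ndarray]) -> dict[str, dict]:
    """Feature-level fusion: extract per-channel features and collect them."""
    return {name: window_features(sig) for name, sig in streams.items()}


def to_llm_prompt(fused: dict[str, dict]) -> str:
    """Render the fused features as text an LLM could consume as context."""
    lines = ["Wearable sensor summary for the last window:"]
    for name, feats in fused.items():
        lines.append(
            f"- {name}: mean={feats['mean']:.2f}, "
            f"std={feats['std']:.2f}, range={feats['range']:.2f}"
        )
    lines.append("Assess movement regularity and flag possible anomalies.")
    return "\n".join(lines)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    streams = {
        "accelerometer_z": rng.normal(9.8, 0.5, 500),  # simulated gravity-axis samples
        "gyroscope_x": rng.normal(0.0, 0.2, 500),      # simulated angular rate
        "heart_rate": rng.normal(72.0, 3.0, 500),      # simulated BPM samples
    }
    print(to_llm_prompt(fuse_sensors(streams)))
```

This keeps the fusion step (per-channel statistics) separate from the language-model interface (a text prompt), which mirrors the review's framing of sensor data fusion feeding LLM-based analysis, though real systems would use richer features and streaming windows.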
Journal overview:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.