Dynamic fog computing for enhanced LLM execution in medical applications

Philipp Zagar, Vishnu Ravi, Lauren Aalami, Stephan Krusche, Oliver Aalami, Paul Schmiedmayer

Smart Health, Volume 36, Article 100577. Published 2025-04-02. DOI: 10.1016/j.smhl.2025.100577
Citations: 0
Abstract
The ability of large language models (LLMs) to process, interpret, and comprehend vast amounts of heterogeneous data presents a significant opportunity to enhance data-driven care delivery. However, the sensitive nature of protected health information (PHI) raises concerns about data privacy and trust in remote LLM platforms. Additionally, the cost of cloud-based artificial intelligence (AI) services remains a barrier to widespread adoption. To address these challenges, we propose shifting the LLM execution environment from centralized, opaque cloud providers to a decentralized and dynamic fog computing architecture. By running open-weight LLMs in more trusted environments, such as a user’s edge device or a fog layer within a local network, we aim to mitigate the privacy, trust, and financial concerns associated with cloud-based LLMs. We introduce SpeziLLM, an open-source framework designed to streamline LLM execution across multiple layers, facilitating seamless integration into digital health applications. To demonstrate its versatility, we showcase SpeziLLM across six digital health applications, highlighting its broad applicability in various healthcare settings.
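The core idea above — dynamically shifting LLM execution away from opaque cloud providers toward more trusted layers (the user's edge device, or a fog node in the local network) depending on privacy requirements and availability — can be sketched as a simple layer-selection policy. This is an illustrative sketch only; the layer names, `ExecutionLayer` type, and `select_layer` function are assumptions for exposition and do not reflect SpeziLLM's actual Swift API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ExecutionLayer:
    """One candidate LLM execution environment in the fog architecture."""
    name: str               # e.g. "edge", "fog", "cloud"
    reachable: bool         # is the layer currently available?
    keeps_data_local: bool  # does data stay within the trusted local network?


def select_layer(layers: List[ExecutionLayer],
                 phi_involved: bool) -> Optional[ExecutionLayer]:
    """Pick the first reachable layer, ordered from most to least trusted.

    When protected health information (PHI) is involved, only layers that
    keep data within the local network are eligible, so a cloud layer is
    never selected for PHI even if it is the only one reachable.
    """
    for layer in layers:
        if not layer.reachable:
            continue
        if phi_involved and not layer.keeps_data_local:
            continue
        return layer
    return None  # no acceptable layer available


# Typical ordering: on-device first, then fog node, then cloud fallback.
layers = [
    ExecutionLayer("edge", reachable=False, keeps_data_local=True),
    ExecutionLayer("fog", reachable=True, keeps_data_local=True),
    ExecutionLayer("cloud", reachable=True, keeps_data_local=False),
]
```

With this ordering, a PHI-bearing request falls through the unreachable edge device to the fog node, while a request with no acceptable local layer is refused rather than silently sent to the cloud — mirroring the privacy-first dispatch the abstract motivates.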