Multilingual Virtual Healthcare Assistant

Impact Factor: 3.3
Geetika Munjal, Piyush Agarwal, Lakshay Goyal, Nandy Samiran

Health Care Science, Vol. 4, No. 4, pp. 281-288. Published 2025-07-31.
DOI: 10.1002/hcs2.70031 (https://onlinelibrary.wiley.com/doi/10.1002/hcs2.70031)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/hcs2.70031
Citations: 0

Abstract

This study proposes a virtual healthcare assistant framework designed to provide support in multiple languages for efficient and accurate healthcare assistance. The system employs a transformer model to process sophisticated, multilingual user inputs, gaining improved contextual understanding compared with conventional models such as long short-term memory (LSTM) models. In contrast to LSTMs, which process information sequentially and may struggle with long-range dependencies, transformers use self-attention to learn relationships among all parts of the input in parallel. This enables them to perform more accurately across languages and contexts, making them well suited for applications such as translation, summarization, and conversational agents. Comparative evaluations showed the superiority of the transformer model (accuracy: 85%) over the LSTM model (accuracy: 65%). The experiments revealed several advantages of the transformer architecture over the LSTM model, including more effective self-attention, parallel processing of input sequences, and contextual understanding that yields better multilingual compatibility. Additionally, our prediction model was effective for disease diagnosis, achieving accuracy of 85% or greater in identifying the relationship between symptoms and diseases across different demographics. The system translates from English into other languages, with the highest Bilingual Evaluation Understudy (BLEU) score for English to French (0.7), followed by English to Hindi (0.6); the lowest BLEU score was for English to Telugu (0.39). The virtual assistant can also perform symptom analysis and disease prediction, with output given in the user's preferred language.
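The contrast the abstract draws between sequential LSTM processing and parallel self-attention can be illustrated with a minimal, dependency-free sketch. This is a single attention head with no learned query/key/value projections, so it is a simplification of what the paper's actual transformer would use; the point is only that every position attends to every other position in one step, rather than token by token:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of d-dim vectors.

    Every position's output mixes information from ALL positions at once,
    unlike an LSTM, which must consume the sequence one step at a time.
    """
    d = len(X[0])
    # Pairwise similarity of every position with every other position.
    scores = [
        [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        for q in X
    ]
    weights = [softmax(row) for row in scores]
    # Each output vector is a weighted average of all input vectors.
    return [
        [sum(w * x[j] for w, x in zip(row, X)) for j in range(d)]
        for row in weights
    ]
```

Because all pairwise scores are computed independently, the double loop parallelizes trivially on modern hardware, which is the source of the training-speed advantage the study observed.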

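The translation scores reported above use the Bilingual Evaluation Understudy (BLEU) metric. A minimal sketch of the idea, reduced here to modified unigram precision with a brevity penalty (the full BLEU metric averages modified n-gram precisions for n = 1..4, which this sketch omits):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Simplified BLEU: modified unigram precision times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate word's count by its count in the reference,
    # so repeating a correct word cannot inflate the score.
    overlap = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = overlap / len(cand)
    # Brevity penalty discourages translations shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

On this scale a perfect match scores 1.0, so the reported 0.7 for English-to-French versus 0.39 for English-to-Telugu reflects substantially closer n-gram overlap with the reference translations for French.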
