{"title":"Multilingual Virtual Healthcare Assistant","authors":"Geetika Munjal, Piyush Agarwal, Lakshay Goyal, Nandy Samiran","doi":"10.1002/hcs2.70031","DOIUrl":null,"url":null,"abstract":"<p>This study proposes a virtual healthcare assistant framework designed to provide efficient, accurate healthcare support in multiple languages. The system employs a transformer model to process sophisticated, multilingual user inputs and achieves better contextual understanding than conventional models such as long short-term memory (LSTM) models. In contrast to LSTMs, which process information sequentially and can struggle with long-range dependencies, transformers use self-attention to learn relationships among all parts of the input in parallel. This enables them to perform more accurately across languages and contexts, making them well suited to applications such as translation, summarization, and conversational AI. Comparative evaluations showed the superiority of the transformer model (accuracy: 85%) over the LSTM model (accuracy: 65%). The experiments revealed several advantages of the transformer architecture over the LSTM model, including more effective self-attention, parallel processing of the input sequence, and contextual understanding that improves multilingual compatibility. Additionally, our prediction model was effective for disease diagnosis, with accuracy of 85% or greater in identifying the relationship between symptoms and diseases across different demographics. The system provides translation support from English to other languages: English-to-French translation scored highest (Bilingual Evaluation Understudy [BLEU] score: 0.7), followed by English to Hindi (0.6), while the lowest BLEU score was found for English to Telugu (0.39). The virtual assistant can also perform symptom analysis and disease prediction, with output given in the user's preferred language.</p>","PeriodicalId":100601,"journal":{"name":"Health Care Science","volume":"4 4","pages":"281-288"},"PeriodicalIF":3.3000,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/hcs2.70031","citationCount":"0","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Care Science","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/hcs2.70031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
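The abstract's contrast between sequential LSTM processing and parallel self-attention can be illustrated with a minimal sketch. This is generic scaled dot-product self-attention in pure Python, not code from the paper; the function names, the two-dimensional toy vectors, and the single-head, unprojected formulation (queries = keys = values) are all illustrative simplifications.

```python
import math

def softmax(scores):
    # Numerically stable softmax: attention weights over one row of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors.

    Every position attends to every other position at once, which is why
    transformers can capture long-range dependencies without the
    step-by-step recurrence an LSTM needs. Here queries, keys, and
    values are all the raw input vectors (no learned projections).
    """
    d = len(x[0])
    output = []
    for q in x:
        # Similarity of this token to every token, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in x]
        weights = softmax(scores)
        # Each output vector is a weighted sum of all value vectors.
        output.append([sum(w * v[j] for w, v in zip(weights, x))
                       for j in range(d)])
    return output

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(tokens)
```

Because each output vector is a convex combination of the inputs, every attended component stays within the range of the input components; the loop over positions is written sequentially here for clarity, but in a real transformer those rows are computed as one batched matrix product.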
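The translation quality figures cited in the abstract (0.7 for French, 0.6 for Hindi, 0.39 for Telugu) are BLEU scores. As an illustration of what BLEU measures, here is a simplified version using clipped unigram and bigram precision with a brevity penalty; real evaluations typically use up to 4-grams with smoothing via an established library such as sacrebleu, and the example sentences below are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list, as hashable tuples.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions,
    scaled by a brevity penalty. (Standard BLEU uses max_n=4 and a
    single reference is assumed here.)"""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a correct word cannot inflate the score.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages overly short candidate translations.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

perfect = bleu("the patient has a fever", "the patient has a fever")
partial = bleu("the patient has fever", "the patient has a fever")
```

A perfect match scores 1.0 and a candidate sharing no n-grams with the reference scores 0.0, so the reported 0.7 (French) versus 0.39 (Telugu) gap reflects substantially higher n-gram overlap with reference translations for French output.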