Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models

IF 2.8 · CAS Tier 4 (Medicine) · JCR Q2, INFECTIOUS DISEASES
Kingsley Attai, Moses Ekpenyong, Constance Amannah, Daniel Asuquo, Peterben Ajuga, Okure Obot, Ekemini Johnson, Anietie John, Omosivie Maduka, Christie Akwaowo, Faith-Michael Uzoka
{"title":"利用可解释人工智能和大型语言模型提高疟疾和伤寒诊断的可解释性。","authors":"Kingsley Attai, Moses Ekpenyong, Constance Amannah, Daniel Asuquo, Peterben Ajuga, Okure Obot, Ekemini Johnson, Anietie John, Omosivie Maduka, Christie Akwaowo, Faith-Michael Uzoka","doi":"10.3390/tropicalmed9090216","DOIUrl":null,"url":null,"abstract":"<p><p>Malaria and Typhoid fever are prevalent diseases in tropical regions, and both are exacerbated by unclear protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improve accessibility and reduce mortality rates. Traditional diagnosis methods cannot effectively capture the complexities of these diseases due to the presence of similar symptoms. Although machine learning (ML) models offer accurate predictions, they operate as \"black boxes\" with non-interpretable decision-making processes, making it challenging for healthcare providers to comprehend how the conclusions are reached. This study employs explainable AI (XAI) models such as Local Interpretable Model-agnostic Explanations (LIME), and Large Language Models (LLMs) like GPT to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics by describing which symptoms had the greatest impact on the model's decisions and providing clear, understandable explanations. The models were implemented on Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest model outperformed the other tested models; in addition, important features were identified with the LIME plots while ChatGPT 3.5 had a comparative advantage over other LLMs. The study integrates RF, LIME, and GPT in building a mobile app to enhance the interpretability and transparency in malaria and typhoid diagnosis system. Despite its promising results, the system's performance is constrained by the quality of the dataset. Additionally, while LIME and GPT improve transparency, they may introduce complexities in real-time deployment due to computational demands and the need for internet service to maintain relevance and accuracy. The findings suggest that AI-driven diagnostic systems can significantly enhance healthcare delivery in environments with limited resources, and future works can explore the applicability of this framework to other medical conditions and datasets.</p>","PeriodicalId":23330,"journal":{"name":"Tropical Medicine and Infectious Disease","volume":null,"pages":null},"PeriodicalIF":2.8000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11436130/pdf/","citationCount":"0","resultStr":"{\"title\":\"Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models.\",\"authors\":\"Kingsley Attai, Moses Ekpenyong, Constance Amannah, Daniel Asuquo, Peterben Ajuga, Okure Obot, Ekemini Johnson, Anietie John, Omosivie Maduka, Christie Akwaowo, Faith-Michael Uzoka\",\"doi\":\"10.3390/tropicalmed9090216\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Malaria and Typhoid fever are prevalent diseases in tropical regions, and both are exacerbated by unclear protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improve accessibility and reduce mortality rates. Traditional diagnosis methods cannot effectively capture the complexities of these diseases due to the presence of similar symptoms. 
Although machine learning (ML) models offer accurate predictions, they operate as \\\"black boxes\\\" with non-interpretable decision-making processes, making it challenging for healthcare providers to comprehend how the conclusions are reached. This study employs explainable AI (XAI) models such as Local Interpretable Model-agnostic Explanations (LIME), and Large Language Models (LLMs) like GPT to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics by describing which symptoms had the greatest impact on the model's decisions and providing clear, understandable explanations. The models were implemented on Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest model outperformed the other tested models; in addition, important features were identified with the LIME plots while ChatGPT 3.5 had a comparative advantage over other LLMs. The study integrates RF, LIME, and GPT in building a mobile app to enhance the interpretability and transparency in malaria and typhoid diagnosis system. Despite its promising results, the system's performance is constrained by the quality of the dataset. Additionally, while LIME and GPT improve transparency, they may introduce complexities in real-time deployment due to computational demands and the need for internet service to maintain relevance and accuracy. The findings suggest that AI-driven diagnostic systems can significantly enhance healthcare delivery in environments with limited resources, and future works can explore the applicability of this framework to other medical conditions and datasets.</p>\",\"PeriodicalId\":23330,\"journal\":{\"name\":\"Tropical Medicine and Infectious Disease\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11436130/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Tropical Medicine and Infectious Disease\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3390/tropicalmed9090216\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INFECTIOUS DISEASES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Tropical Medicine and Infectious Disease","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/tropicalmed9090216","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFECTIOUS DISEASES","Score":null,"Total":0}
Citations: 0

Abstract


Malaria and typhoid fever are prevalent diseases in tropical regions, and both are exacerbated by unclear treatment protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improve accessibility and reduce mortality rates. Traditional diagnostic methods cannot effectively capture the complexities of these diseases because the two present with similar symptoms. Although machine learning (ML) models offer accurate predictions, they operate as "black boxes" with non-interpretable decision-making processes, making it challenging for healthcare providers to comprehend how conclusions are reached. This study employs explainable AI (XAI) techniques such as Local Interpretable Model-agnostic Explanations (LIME) and large language models (LLMs) such as GPT to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics by describing which symptoms had the greatest impact on the model's decisions and providing clear, understandable explanations. The models were implemented on Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest (RF) model outperformed the other tested models; in addition, important features were identified with LIME plots, while ChatGPT 3.5 had a comparative advantage over the other LLMs tested. The study integrates RF, LIME, and GPT into a mobile app to enhance the interpretability and transparency of the malaria and typhoid diagnosis system. Despite its promising results, the system's performance is constrained by the quality of the dataset. Additionally, while LIME and GPT improve transparency, they may introduce complexities in real-time deployment due to computational demands and the need for internet service to maintain relevance and accuracy. The findings suggest that AI-driven diagnostic systems can significantly enhance healthcare delivery in resource-limited environments, and future work can explore the applicability of this framework to other medical conditions and datasets.
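
The paper does not publish its code, but the pipeline the abstract describes (a Random Forest classifier whose individual predictions are explained with LIME) can be sketched in a few lines of Python. The sketch below is a minimal illustration under assumptions: the dataset file malaria_typhoid_symptoms.csv, the diagnosis label column, and all hyperparameters are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' published code) of the pipeline the
# abstract describes: a Random Forest classifier explained with LIME.
# The dataset, file path, and label column below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical symptom dataset: binary symptom indicators plus a label
# column distinguishing malaria from typhoid.
df = pd.read_csv("malaria_typhoid_symptoms.csv")  # assumed file
X = df.drop(columns=["diagnosis"]).values
y = df["diagnosis"].values
feature_names = df.drop(columns=["diagnosis"]).columns.tolist()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Random Forest: the model the study found to outperform the alternatives.
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train, y_train)

# LIME fits a local surrogate model around one prediction and reports
# which symptoms pushed the decision toward each class.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=sorted(set(y)),
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], rf.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, signed weight), ...]
```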
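
The handoff from LIME to GPT can be hedged in the same way. The prompt wording and the model name gpt-3.5-turbo are assumptions (the paper reports using ChatGPT 3.5 but does not describe its prompts), and explain_for_health_worker is a hypothetical helper, not an API from the study.

```python
# Hedged sketch of handing LIME's output to GPT for a plain-language
# explanation, as the abstract describes. Prompt text and model name
# are assumptions; requires the openai package (v1+) and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def explain_for_health_worker(prediction, lime_weights):
    """Ask GPT to restate LIME's (feature, weight) pairs as a clinical rationale."""
    findings = "\n".join(f"- {feat}: weight {w:+.3f}" for feat, w in lime_weights)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "You explain ML diagnostic outputs to frontline "
                           "health workers in clear, non-technical language.",
            },
            {
                "role": "user",
                "content": f"The model predicted {prediction}. LIME reports "
                           f"these symptom contributions:\n{findings}\n"
                           "Explain which symptoms drove the decision and why.",
            },
        ],
    )
    return response.choices[0].message.content


# Usage: feed the (feature, weight) pairs from exp.as_list() above, e.g.
# print(explain_for_health_worker("malaria", exp.as_list()))
```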

Source journal
Tropical Medicine and Infectious Disease (Medicine – Public Health, Environmental and Occupational Health)
CiteScore: 3.90
Self-citation rate: 10.30%
Articles per year: 353
Review time: 11 weeks