MedBLIP: A multimodal method of medical question-answering based on fine-tuning large language model

IF 5.4 · JCR Q1 · ENGINEERING, BIOMEDICAL (CAS Tier 2, Medicine)
Lejun Gong , Jiaming Yang , Shengyuan Han , Yimu Ji
Journal: Computerized Medical Imaging and Graphics, Volume 124, Article 102581
DOI: 10.1016/j.compmedimag.2025.102581
Publication date: 2025-05-31
URL: https://www.sciencedirect.com/science/article/pii/S0895611125000904
Citations: 0

Abstract

Medical visual question answering is crucial for effectively interpreting medical images containing clinically relevant information. This study proposes a method called MedBLIP (Medical Treatment Bootstrapping Language-Image Pretraining) to tackle visual language generation tasks related to chest X-rays in the medical field. The method combines an image encoder with a large-scale language model and, building on the BLIP-2 model, effectively generates medical question-answering text through a strategy of freezing the image encoder. First, chest X-ray images are preprocessed, and an image sample generation algorithm is used to augment the text data of doctor-patient question answering, thereby increasing data diversity. Then, a multi-layer convolutional image feature extractor is introduced to better capture the feature representation of medical images. During fine-tuning of the large language generation model, a new unfreezing strategy is proposed: different proportions of the fully connected layer's weights are unfrozen to adapt to data in the medical field. The image feature extractor extracts key features from images, providing the model with rich visual information, while the text feature extractor accurately captures the essential requirements of the user's question. Through their synergistic interaction, the model can more effectively integrate medical images and user inquiries, thereby generating more accurate and relevant output. Experimental results show that unfreezing 31.25 % of the fully connected layer's weights significantly improves model performance, with ROUGE-L reaching 66.12 %, providing a more accurate and efficient answer-generation solution for the medical field. The method has potential applications in medical language generation tasks.
Although the proposed model cannot yet fully replace human radiologists, it plays an indispensable role in improving diagnostic efficiency, assisting decision-making, and supporting medical research. With continued technological advances, the model's performance will be further enhanced, and its application value in the medical field will become even more significant. The algorithm implementation is available at https://github.com/JiminFohill/MedicalChat.git.
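The proportional-unfreezing idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the group names and the helper `partial_unfreeze` are hypothetical, and the paper's exact partitioning of the fully connected layer's weights may differ.

```python
# Hypothetical sketch of partial unfreezing: given named parameter groups
# of a fully connected layer, mark a chosen proportion as trainable and
# keep the rest frozen. Names are illustrative, not from the MedBLIP repo.

def partial_unfreeze(param_groups, proportion):
    """Map each group name to True (trainable) or False (frozen).

    The first `proportion` fraction of groups (e.g. 0.3125 = 31.25 %)
    is unfrozen; the remainder stays frozen.
    """
    n_unfrozen = round(len(param_groups) * proportion)
    return {name: i < n_unfrozen for i, name in enumerate(param_groups)}

# Example: 16 weight slices, unfreeze 31.25 % -> 5 slices trainable.
groups = [f"fc.weight_slice_{i}" for i in range(16)]
trainable = partial_unfreeze(groups, 0.3125)
print(sum(trainable.values()))  # 5
```

With 16 weight slices, a 31.25 % proportion corresponds to exactly 5/16 of the layer being trainable; in a framework such as PyTorch the same decision would typically be applied by setting each parameter's `requires_grad` flag.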
Source journal
CiteScore: 10.70
Self-citation rate: 3.50%
Articles per year: 71
Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.