From Detection to Radiology Report Generation: Fine-Grained Multi-Modal Alignment with Semi-Supervised Learning
Qian Tang, Lijun Liu, Xiaobing Yang, Li Liu, Wei Peng
Journal of Imaging Informatics in Medicine, published online 2025-09-16. DOI: 10.1007/s10278-025-01650-z
Radiology report generation plays a critical role in supporting diagnosis, alleviating clinicians' workload, and improving diagnostic accuracy by integrating radiological image content with clinical knowledge. However, most existing models primarily establish coarse-grained mappings between global images and textual reports, often overlooking fine-grained associations between lesion regions and corresponding report content. This limitation affects the accuracy and clinical relevance of the generated reports. To address this, we propose D2R-Net, a lesion-aware radiology report generation model. D2R-Net leverages bounding box annotations for 22 chest diseases to guide the model to focus on clinically significant lesion regions. It employs a global-local dual-branch architecture that fuses global image context with localized lesion features and incorporates a Lesion Region Enhancement Module (LERA) to strengthen the recognition of key lesion regions. Additionally, an implicit alignment mechanism, including Local Alignment Blocks (LAB) and Global Alignment Blocks (GAB), is designed to bridge the semantic gap between visual and textual modalities. Experimental results on the benchmark MIMIC-CXR dataset demonstrate the superior performance of D2R-Net in generating accurate and clinically relevant radiology reports.
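The abstract describes two core ideas: fusing a global image representation with localized lesion features (the dual-branch design), and aligning each lesion region with the report text that describes it (the Local Alignment Blocks). A minimal pure-Python sketch of those two operations follows; the function names, the averaging-based fusion rule, and the cosine-similarity matching are illustrative assumptions, not the paper's actual D2R-Net implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fuse_global_local(global_feat, lesion_feats):
    # Stand-in for the global-local dual-branch fusion: average the
    # lesion-region features, then blend with the global image feature.
    n = len(lesion_feats)
    local = [sum(f[i] for f in lesion_feats) / n
             for i in range(len(global_feat))]
    return [(g + l) / 2.0 for g, l in zip(global_feat, local)]

def align_regions_to_tokens(lesion_feats, token_embs):
    # Crude analogue of local visual-textual alignment: match each
    # lesion-region feature to its most similar report-token embedding.
    return [max(range(len(token_embs)),
                key=lambda j: cosine(f, token_embs[j]))
            for f in lesion_feats]
```

In the paper these roles are played by learned modules (LERA, LAB, GAB) trained end-to-end; this sketch only makes the data flow concrete — region features in, a fused image representation and region-to-token correspondences out.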