Authors: Xiulong Yi; You Fu; Jianzhi Yu; Ruiqing Liu; Hao Zhang; Rong Hua
Journal: IEEE Transactions on Medical Imaging, vol. 44, no. 3, pp. 1494-1504
Publication date: 2024-11-27
DOI: 10.1109/TMI.2024.3507073
Article: https://ieeexplore.ieee.org/document/10769570/
LHR-RFL: Linear Hybrid-Reward-Based Reinforced Focal Learning for Automatic Radiology Report Generation
Radiology report generation, which aims to accurately describe medical findings in given images, is pivotal in contemporary computer-aided diagnosis. Despite considerable recent progress, current radiology report generation models still struggle to achieve consistent quality across difficult and easy samples, which dramatically limits their clinical value. To solve this problem, we explore difficult-sample mining in radiology report generation and propose Linear Hybrid-Reward based Reinforced Focal Learning (LHR-RFL), which effectively guides the model to allocate more attention to difficult samples, thereby enhancing its overall performance in both general and intricate scenarios. In implementation, we first propose the Linear Hybrid-Reward (LHR) module to better quantify learning difficulty; it employs a linear weighting scheme that assigns varying weights to three representative Natural Language Generation (NLG) evaluation metrics. We then propose Reinforced Focal Learning (RFL) to adaptively adjust the contributions of difficult samples during training, thereby augmenting their impact on model optimization. Experimental results demonstrate that our proposed LHR-RFL improves the performance of the base model across all NLG evaluation metrics, achieving average performance improvements of 20.9% and 13.2% on the IU X-ray and MIMIC-CXR datasets, respectively. Further analysis also shows that LHR-RFL can dramatically improve report quality for difficult samples. The source code will be available at https://github.com/SKD-HPC/LHR-RFL.
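The abstract sketches two mechanisms: a linear combination of three NLG metric scores as a hybrid reward, and a focal-style adjustment that increases the training contribution of low-reward (difficult) samples. A minimal illustrative sketch of that idea is below; the specific metrics (BLEU-4, METEOR, ROUGE-L), the weight values, and the focal exponent `gamma` are assumptions for illustration, not the paper's reported settings.

```python
# Illustrative sketch of a linear hybrid reward plus focal-style sample
# weighting. Metric choice, weights, and gamma are assumed values, not
# the settings reported in the LHR-RFL paper.

def linear_hybrid_reward(bleu4: float, meteor: float, rouge_l: float,
                         weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Linearly weighted combination of three NLG metric scores in [0, 1]."""
    w1, w2, w3 = weights
    return w1 * bleu4 + w2 * meteor + w3 * rouge_l

def focal_weight(reward: float, gamma: float = 2.0) -> float:
    """Focal-style modulation: low-reward (difficult) samples get a
    larger weight, so they contribute more to the training signal."""
    return (1.0 - reward) ** gamma

# A difficult sample (low metric scores) receives a larger training
# weight than an easy sample (high metric scores).
hard = linear_hybrid_reward(0.10, 0.15, 0.20)   # low-quality report
easy = linear_hybrid_reward(0.70, 0.60, 0.65)   # high-quality report
assert focal_weight(hard) > focal_weight(easy)
```

In a reinforcement-learning setup such as self-critical sequence training, a weight like `focal_weight(reward)` would scale each sample's policy-gradient term, which is one plausible reading of how RFL "adaptively adjusts the contributions of difficult samples."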