CSAMDT: Conditional Self Attention Memory-Driven Transformers for Radiology Report Generation from Chest X-Ray.

Iqra Shahzadi, Tahir Mustafa Madni, Uzair Iqbal Janjua, Ghanwa Batool, Bushra Naz, Muhammad Qasim Ali
{"title":"CSAMDT: Conditional Self Attention Memory-Driven Transformers for Radiology Report Generation from Chest X-Ray.","authors":"Iqra Shahzadi, Tahir Mustafa Madni, Uzair Iqbal Janjua, Ghanwa Batool, Bushra Naz, Muhammad Qasim Ali","doi":"10.1007/s10278-024-01126-6","DOIUrl":null,"url":null,"abstract":"<p><p>A radiology report plays a crucial role in guiding patient treatment, but writing these reports is a time-consuming task that demands a radiologist's expertise. In response to this challenge, researchers in the subfields of artificial intelligence for healthcare have explored techniques for automatically interpreting radiographic images and generating free-text reports, while much of the research on medical report creation has focused on image captioning methods without adequately addressing particular report aspects. This study introduces a Conditional Self Attention Memory-Driven Transformer model for generating radiological reports. The model operates in two phases: initially, a multi-label classification model, utilizing ResNet152 v2 as an encoder, is employed for feature extraction and multiple disease diagnosis. In the second phase, the Conditional Self Attention Memory-Driven Transformer serves as a decoder, utilizing self-attention memory-driven transformers to generate text reports. Comprehensive experimentation was conducted to compare existing and proposed techniques based on Bilingual Evaluation Understudy (BLEU) scores ranging from 1 to 4. The model outperforms the other state-of-the-art techniques by increasing the BLEU 1 (0.475), BLEU 2 (0.358), BLEU 3 (0.229), and BLEU 4 (0.165) respectively. This study's findings can alleviate radiologists' workloads and enhance clinical workflows by introducing an autonomous radiological report generation system.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2825-2837"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612068/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-024-01126-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/3 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

A radiology report plays a crucial role in guiding patient treatment, but writing these reports is a time-consuming task that demands a radiologist's expertise. In response to this challenge, researchers in the subfields of artificial intelligence for healthcare have explored techniques for automatically interpreting radiographic images and generating free-text reports, whereas much of the research on medical report generation has focused on image captioning methods without adequately addressing report-specific aspects. This study introduces a Conditional Self Attention Memory-Driven Transformer model for generating radiological reports. The model operates in two phases: first, a multi-label classification model, using ResNet152 v2 as an encoder, is employed for feature extraction and multiple disease diagnosis. In the second phase, the Conditional Self Attention Memory-Driven Transformer serves as a decoder, using self-attention memory-driven transformer layers to generate the text report. Comprehensive experimentation was conducted to compare existing and proposed techniques using Bilingual Evaluation Understudy (BLEU) scores from BLEU-1 to BLEU-4. The model outperforms other state-of-the-art techniques, achieving BLEU-1 of 0.475, BLEU-2 of 0.358, BLEU-3 of 0.229, and BLEU-4 of 0.165. This study's findings can alleviate radiologists' workloads and enhance clinical workflows by introducing an autonomous radiological report generation system.
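For readers who want a concrete picture of the two-phase design described in the abstract, the sketch below outlines one plausible PyTorch realization: a ResNet152-based encoder with a multi-label (sigmoid) classification head for phase one, and a Transformer decoder whose attention is conditioned on learned memory slots together with the visual features for phase two. All module names, dimensions, the number of memory slots, and the memory layout are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): phase 1 is a CNN encoder with a
# multi-label disease head; phase 2 is a memory-conditioned Transformer decoder
# that generates report tokens from the visual features. Sizes are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class DiseaseEncoder(nn.Module):
    """Phase 1: CNN encoder with a multi-label classification head."""

    def __init__(self, num_labels=14, feat_dim=512):
        super().__init__()
        backbone = models.resnet152(weights=None)  # stand-in for ResNet152 v2
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # conv stages + avgpool
        self.project = nn.Linear(backbone.fc.in_features, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_labels)  # one sigmoid score per finding

    def forward(self, images):
        f = self.features(images).flatten(1)        # (B, 2048) pooled visual features
        f = self.project(f)                          # (B, feat_dim)
        return f, torch.sigmoid(self.classifier(f))  # features + multi-label disease scores


class MemoryDrivenDecoder(nn.Module):
    """Phase 2: Transformer decoder conditioned on learned memory slots and
    the encoder features (hypothetical layout)."""

    def __init__(self, vocab_size, feat_dim=512, num_mem_slots=32, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.memory = nn.Parameter(torch.randn(num_mem_slots, feat_dim))  # persistent memory
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.out = nn.Linear(feat_dim, vocab_size)

    def forward(self, tokens, visual_feat):
        B = tokens.size(0)
        # Cross-attend over the image feature and the shared memory slots together.
        mem = torch.cat([visual_feat.unsqueeze(1), self.memory.expand(B, -1, -1)], dim=1)
        tgt = self.embed(tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(tokens.device)
        h = self.decoder(tgt, mem, tgt_mask=causal)
        return self.out(h)  # (B, T, vocab) next-token logits for the report
```

In this sketch the phase-one classifier can be trained first with a binary cross-entropy loss over the finding labels, after which the pooled features condition the decoder; how the original model couples the two phases is not specified here and is left as an assumption.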

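The BLEU-1 through BLEU-4 figures quoted above can be computed for any generated report with standard tooling; the snippet below uses NLTK's corpus_bleu with uniform n-gram weights. The two tokenized reports are placeholders, not data from the paper.

```python
# Hedged example: computing BLEU-1..BLEU-4 for generated reports with NLTK.
# The reference and hypothesis below are placeholders, not outputs from the paper.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["no", "acute", "cardiopulmonary", "abnormality", "is", "seen"]]]
hypotheses = [["no", "acute", "cardiopulmonary", "abnormality"]]

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)  # uniform n-gram weights give BLEU-n
    score = corpus_bleu(references, hypotheses, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```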
