M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks.

Bidur Khanal, Binod Bhattarai, Bishesh Khanal, Danail Stoyanov, Cristian A Linte
DOI: 10.1007/978-3-031-48593-0_4
Published in: Medical Image Understanding and Analysis (MIUA), Proceedings, vol. 14122, pp. 48-63
Published online: 2 December 2023 (issue date: 1 January 2024)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11328674/pdf/
Citations: 0

Abstract

Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
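To make the core idea of active learning concrete, here is a minimal, self-contained sketch of an uncertainty-based acquisition step: rank an unlabeled pool by the predictive entropy of a model's class probabilities and send the most uncertain samples to an annotator. This is a generic illustration, not the authors' M-VAAL sampler (which instead scores samples with a multimodal variational autoencoder and an adversarial discriminator); the function names and the toy pool are invented for the example.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(pool_probs, budget):
    """Rank the unlabeled pool by predictive entropy and return the
    indices of the `budget` most uncertain samples (most informative
    under the uncertainty-sampling heuristic)."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:budget]

# Toy pool: a model's softmax outputs for 4 unlabeled images.
pool = [
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # nearly uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.50, 0.00],
]
picked = select_for_annotation(pool, budget=2)
print(picked)  # -> [1, 2]: the two most uncertain samples
```

In a full active-learning loop this selection step alternates with retraining: the picked samples are labeled, moved to the training set, the model is retrained, and the pool is re-scored, repeating until the annotation budget is exhausted.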
