MVQAS

Haoyue Bai, Xiaoyan Shan, Yefan Huang, Xiaoli Wang
{"title":"MVQAS","authors":"Haoyue Bai, Xiaoyan Shan, Yefan Huang, Xiaoli Wang","doi":"10.1145/3459637.3481971","DOIUrl":null,"url":null,"abstract":"This paper demonstrates a medical visual question answering (VQA) system to address three challenges: 1) medical VQA often lacks large-scale labeled training data which requires huge efforts to build; 2) it is costly to implement and thoroughly compare medical VQA models on self-created datasets; 3) applying general VQA models to the medical domain by transfer learning is challenging due to various visual concepts between general images and medical images. Our system has three main components: data generation, model library, and model practice. To address the first challenge, we first allow users to upload self-collected clinical data such as electronic medical records (EMRs) to the data generation component and provides an annotating tool for labeling the data. Then, the system semi-automatically generates medical VQAs for users. Second, we develop a model library by implementing VQA models for users to evaluate their datasets. Users can do simple configurations by selecting self-interested models. The system then automatically trains the models, conducts extensive experimental evaluation, and reports comprehensive findings. The reports provide new insights into the strengths and weaknesses of selected models. Third, we provide an online chat module for users to communicate with an AI robots for further evaluating the models. The source codes are shared on https://github.com/shyanneshan/VQA-Demo.","PeriodicalId":405296,"journal":{"name":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3459637.3481971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

This paper demonstrates a medical visual question answering (VQA) system that addresses three challenges: 1) medical VQA often lacks large-scale labeled training data, which requires substantial effort to build; 2) it is costly to implement and thoroughly compare medical VQA models on self-created datasets; 3) applying general VQA models to the medical domain via transfer learning is difficult because the visual concepts in general images differ from those in medical images. Our system has three main components: data generation, model library, and model practice. To address the first challenge, the data generation component lets users upload self-collected clinical data, such as electronic medical records (EMRs), and provides an annotation tool for labeling the data; the system then semi-automatically generates medical VQA pairs for users. Second, we develop a model library of implemented VQA models that users can evaluate on their own datasets. Users perform a simple configuration by selecting the models they are interested in; the system then automatically trains the models, conducts extensive experimental evaluation, and reports comprehensive findings. The reports provide new insights into the strengths and weaknesses of the selected models. Third, we provide an online chat module that lets users communicate with an AI robot to further evaluate the models. The source code is shared at https://github.com/shyanneshan/VQA-Demo.
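
The data generation component turns annotated clinical records into question–answer pairs. The sketch below illustrates one common way such semi-automatic generation can work: filling question templates from annotated EMR fields. It is a minimal, hypothetical example; the field names, templates, and the `EMRRecord`/`generate_vqa_pairs` helpers are assumptions made for illustration and are not taken from the MVQAS repository.

```python
# Hypothetical sketch of template-based VQA pair generation from an EMR entry.
# Names and templates are illustrative only; not from the MVQAS code base.

from dataclasses import dataclass
from typing import List

@dataclass
class EMRRecord:
    image_path: str      # e.g. a chest X-ray linked to the record
    modality: str        # e.g. "X-ray"
    body_part: str       # e.g. "chest"
    finding: str         # annotated finding, e.g. "pleural effusion"

# Question templates keyed by the answer field they target.
TEMPLATES = {
    "modality": "What imaging modality was used to acquire this image?",
    "body_part": "Which part of the body does this image show?",
    "finding": "What abnormality is visible in this image?",
}

def generate_vqa_pairs(record: EMRRecord) -> List[dict]:
    """Turn one annotated EMR record into (image, question, answer) triples."""
    pairs = []
    for field, question in TEMPLATES.items():
        answer = getattr(record, field)
        if answer:  # skip fields the annotator left empty
            pairs.append({
                "image": record.image_path,
                "question": question,
                "answer": answer,
            })
    return pairs

if __name__ == "__main__":
    record = EMRRecord("images/case_001.png", "X-ray", "chest", "pleural effusion")
    for pair in generate_vqa_pairs(record):
        print(pair)
```

In a workflow like the one the paper describes, the annotation tool would supply the labeled fields and a human would review the generated pairs before they enter the training set, which is why the process is called semi-automatic.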