Label the many with a few: Semi-automatic medical image modality discovery in a large image collection

Szilárd Vajda, D. You, Sameer Kiran Antani, G. Thoma
DOI: 10.1109/CICARE.2014.7007850
Published in: 2014 IEEE Symposium on Computational Intelligence in Healthcare and e-health (CICARE), December 2014
Citations: 4

Abstract

In this paper we present a fast and effective method for labeling images in a large image collection. Image modality detection has been of research interest for querying multimodal medical documents. To accurately predict the different image modalities using complex visual and textual features, we need advanced classification schemes with supervised learning mechanisms and accurate training labels. Our proposed method, on the other hand, uses a multi-view approach and requires minimal expert knowledge to semi-automatically label the images. The images are first projected into different feature spaces, and are then clustered in an unsupervised manner. Only the cluster representative images are labeled by an expert. Other images from the cluster "inherit" the labels from these cluster representatives. The final label assigned to each image is based on a voting mechanism, where each vote is derived from a different feature space clustering. Through experiments we show that using only 0.3% of the labels was sufficient to annotate 300,000 medical images with 49.95% accuracy. Although automatic labeling is not as precise as manual labeling, it saves approximately 700 hours of manual expert labeling, and may be sufficient for next-stage classifier training. We find that for this collection, accuracy improvements are feasible with better disparate feature selection or different filtering mechanisms.
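The abstract's pipeline — cluster each feature space independently, have an expert label only the cluster representatives, let cluster members inherit those labels, and resolve disagreements across views by majority vote — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper does not specify the clustering algorithm or representative-selection rule here, so k-means and the centroid-nearest image are assumptions, and `oracle` stands in for the human expert.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def label_by_multiview_voting(views, oracle, n_clusters=10, seed=0):
    """Semi-automatically label a collection using multi-view clustering.

    views:      list of (n_images, d_v) feature matrices, one per feature space
    oracle:     callable mapping an image index to an expert label; it is
                invoked only for cluster representatives (the "few")
    n_clusters: clusters per view (assumed; the paper does not fix this here)
    Returns a predicted label for each of the n_images.
    """
    n = views[0].shape[0]
    votes = [[] for _ in range(n)]          # one vote list per image
    for X in views:
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
        for c in range(n_clusters):
            members = np.where(km.labels_ == c)[0]
            if members.size == 0:
                continue
            # Representative = member closest to the cluster centroid (assumed rule)
            d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
            rep = members[np.argmin(d)]
            rep_label = oracle(rep)         # only representatives cost expert time
            for i in members:               # the many "inherit" the label
                votes[i].append(rep_label)
    # Final label: majority vote across the per-view clusterings
    return [Counter(v).most_common(1)[0][0] for v in votes]
```

With V views and k clusters per view, the expert labels at most V·k images, which is how 0.3% of the labels can cover the whole collection.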