Bag-of-Features Based Classification of Breast Parenchymal Tissue in the Mammogram via Jointly Selecting and Weighting Visual Words

Jingyan Wang, Yongping Li, Y. Zhang, Honglan Xie, Chao Wang
{"title":"Bag-of-Features Based Classification of Breast Parenchymal Tissue in the Mammogram via Jointly Selecting and Weighting Visual Words","authors":"Jingyan Wang, Yongping Li, Y. Zhang, Honglan Xie, Chao Wang","doi":"10.1109/ICIG.2011.192","DOIUrl":null,"url":null,"abstract":"Automatically classifying the tissues types of region of interest (ROI) in medical imaging has been a important application in computer-aided diagnosis, such as classification of breast parenchymal tissue in the mammogram. Recently, bag-of-features method has show its power in this field, treating each medical image as a set of local features. In this paper, we investigate using the bag-of-features strategy to classify the tissue types in medical imaging applications. Two important issues are considered here: the visual vocabulary learning and weighting. Although there are already plenty of algorithms to deal with them, all of them treat them independently, namely, the vocabulary learned first and then the histogram weighted. Inspired by Auto-Context who learns the features and classier jointly, we try to develop a novel algorithm who learns the vocabulary and weights jointly. The new algorithm, called Joint-ViVo, works in a iterative way. In each iteration, we first learn the weights for each visual word by maximizing the margin of ROI triplets, and then based on the learned weights, we select the most discriminate visual words for the next iteration. We test our algorithm by classifying breast tissue density in mammograms. The results show that Joint-ViVo can perform effectively for classifying tissues and support the idea that vocabulary should be learned jointly with the weights.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 Sixth International Conference on Image and Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIG.2011.192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 28

Abstract

Automatically classifying the tissue type of a region of interest (ROI) in medical imaging is an important application of computer-aided diagnosis, for example the classification of breast parenchymal tissue in mammograms. Recently, the bag-of-features method, which treats each medical image as a set of local features, has shown its power in this field. In this paper, we investigate the bag-of-features strategy for classifying tissue types in medical imaging applications. Two important issues are considered: visual vocabulary learning and weighting. Although many algorithms already address these issues, they all treat them independently, that is, the vocabulary is learned first and the histogram is weighted afterwards. Inspired by Auto-Context, which learns features and classifier jointly, we develop a novel algorithm that learns the vocabulary and the weights jointly. The new algorithm, called Joint-ViVo, works iteratively. In each iteration, we first learn the weight of each visual word by maximizing the margin over ROI triplets, and then, based on the learned weights, select the most discriminative visual words for the next iteration. We test our algorithm by classifying breast tissue density in mammograms. The results show that Joint-ViVo classifies tissues effectively and support the idea that the vocabulary should be learned jointly with the weights.
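To make the alternating scheme concrete, the sketch below illustrates one plausible reading of the Joint-ViVo loop from the abstract: repeatedly learn per-visual-word weights by enlarging the margin over ROI triplets, then keep only the highest-weighted (most discriminative) words. All function names, the gradient-ascent update, and the selection ratio are illustrative assumptions, not the authors' implementation.

import numpy as np

def triplet_weight_update(H, triplets, w, lr=0.01):
    """One gradient step that enlarges the weighted-histogram margin
    d(anchor, negative) - d(anchor, positive) over the given ROI triplets.
    H: (n_rois, n_words) bag-of-features histograms; w: per-word weights.
    (Assumed update rule; the paper only states margin maximization.)"""
    grad = np.zeros_like(w)
    for a, p, n in triplets:  # anchor, same-class, different-class ROI indices
        dp = (H[a] - H[p]) ** 2   # per-word squared difference to the positive
        dn = (H[a] - H[n]) ** 2   # per-word squared difference to the negative
        # margin = w.dn - w.dp; its gradient with respect to w is (dn - dp)
        grad += dn - dp
    w = w + lr * grad / max(len(triplets), 1)
    return np.clip(w, 0.0, None)  # keep weights non-negative

def joint_vivo(H, triplets, n_iters=5, keep_ratio=0.8):
    """Alternate weight learning and visual-word selection (hypothetical loop).
    Returns the surviving word indices and their learned weights."""
    active = np.arange(H.shape[1])   # indices of currently kept visual words
    w = np.ones(len(active))         # uniform initial weights
    for _ in range(n_iters):
        w = triplet_weight_update(H[:, active], triplets, w)
        # keep the words with the largest learned weights for the next iteration
        n_keep = max(1, int(keep_ratio * len(active)))
        order = np.argsort(-w)[:n_keep]
        active, w = active[order], w[order]
    return active, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.random((30, 50))                      # toy ROI histograms
    triplets = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]  # toy (anchor, pos, neg) triplets
    words, weights = joint_vivo(H, triplets)
    print(words.shape, weights.shape)

In a real mammogram setting, H would hold the bag-of-features histograms of the ROIs and the triplets would be formed from ROIs of the same versus different breast-density classes; the toy data above only demonstrates the control flow.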