Improvement of Multimodal Images Classification Based on DSMT Using Visual Saliency Model Fusion With SVM

Hanan Anzid, Gaëtan Le Goïc, A. Bekkari, A. Mansouri, D. Mammass
{"title":"Improvement of Multimodal Images Classification Based on DSMT Using Visual Saliency Model Fusion With SVM","authors":"Hanan Anzid, Gaëtan Le Goïc, A. Bekkari, A. Mansouri, D. Mammass","doi":"10.24297/IJCT.V18I0.7956","DOIUrl":null,"url":null,"abstract":"Multimodal images carry available information that can be complementary, redundant information, and overcomes the various problems attached to the unimodal classification task, by modeling and combining these information together. Although, this classification gives acceptable classification results, it still does not reach the level of the visual perception model that has a great ability to classify easily observed scene thanks to the powerful mechanism of the human brain. \n In order to improve the classification task in multimodal image area, we propose a methodology based on Dezert-Smarandache formalism (DSmT), allowing fusing the combined spectral and dense SURF features extracted from each modality and pre-classified by the SVM classifier. Then we integrate the visual perception model in the fusion process. \nTo prove the efficiency of the use of salient features in a fusion process with DSmT, the proposed methodology is tested and validated on a large datasets extracted from acquisitions on cultural heritage wall paintings. Each set implements four imaging modalities covering UV, IR, Visible and fluorescence, and the results are promising.","PeriodicalId":161820,"journal":{"name":"INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24297/IJCT.V18I0.7956","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Multimodal images carry information that can be complementary or redundant; by modeling and combining this information, the various problems attached to unimodal classification can be overcome. Although such classification gives acceptable results, it still does not reach the level of the human visual perception model, which classifies observed scenes with ease thanks to the powerful mechanisms of the human brain. To improve classification in the multimodal image domain, we propose a methodology based on the Dezert-Smarandache formalism (DSmT) that fuses the combined spectral and dense SURF features extracted from each modality and pre-classified by an SVM classifier. We then integrate the visual perception model into the fusion process. To demonstrate the efficiency of using salient features in a DSmT fusion process, the proposed methodology is tested and validated on large datasets extracted from acquisitions of cultural heritage wall paintings. Each set comprises four imaging modalities covering UV, IR, visible, and fluorescence, and the results are promising.
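The paper itself does not include code; the following is a minimal Python sketch of the pipeline the abstract describes, under stated assumptions: per-modality feature arrays and saliency weights are hypothetical placeholders, Platt-scaled SVM probabilities are treated as basic belief assignments (BBAs) over singleton classes, and the saliency model enters as a reliability discount. This is an illustration of the general SVM-to-DSmT fusion scheme, not the authors' exact implementation.

```python
# A minimal sketch, assuming scikit-learn and NumPy. X_uv, X_ir, X_vis,
# X_fluo, y, and saliency_weights are hypothetical names, not from the paper.
import numpy as np
from sklearn.svm import SVC


def pcr5_combine(m1, m2):
    """Combine two BBAs restricted to singleton classes with the PCR5 rule
    used in DSmT: conjunctive consensus plus proportional redistribution
    of each pairwise conflicting mass m1[i] * m2[j], i != j."""
    n = len(m1)
    fused = (m1 * m2).astype(float)          # consensus on singletons
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if m1[i] + m2[j] > 0:            # conflict (m1 on i, m2 on j)
                fused[i] += m1[i] ** 2 * m2[j] / (m1[i] + m2[j])
            if m2[i] + m1[j] > 0:            # conflict (m2 on i, m1 on j)
                fused[i] += m2[i] ** 2 * m1[j] / (m2[i] + m1[j])
    return fused / fused.sum()               # safeguard renormalization


def modality_bba(X_train, y_train, X_test, saliency_weight):
    """Train an SVM on one modality and return saliency-discounted BBAs.
    Platt-scaled class probabilities serve as masses on singleton classes;
    a saliency-derived reliability factor in [0, 1] discounts them. As a
    simplification, the withheld mass is spread uniformly over the classes
    rather than assigned to the full-ignorance hypothesis."""
    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    probs = clf.predict_proba(X_test)        # shape (n_samples, n_classes)
    n_classes = probs.shape[1]
    return saliency_weight * probs + (1.0 - saliency_weight) / n_classes


# Hypothetical use with four modalities (spectral + dense SURF features
# stacked per modality). PCR5 is applied sequentially over the modalities,
# a common simplification since the rule is not associative.
bbas = [modality_bba(X[train], y[train], X[test], w)
        for X, w in zip([X_uv, X_ir, X_vis, X_fluo], saliency_weights)]
fused = bbas[0]
for m in bbas[1:]:
    fused = np.array([pcr5_combine(a, b) for a, b in zip(fused, m)])
labels = fused.argmax(axis=1)                # final class decision per sample
```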