Deep learning guided by an ontology for medical images classification using a multimodal fusion

Hela Yahyaoui, Fethi Ghazouani, I. Farah
{"title":"基于本体的深度学习,使用多模态融合对医学图像进行分类","authors":"Hela Yahyaoui, Fethi Ghazouani, I. Farah","doi":"10.1109/ICOTEN52080.2021.9493469","DOIUrl":null,"url":null,"abstract":"Brain tumor is regarded as one of the most perilous diseases, with Glioma being the most prevalent form of primary brain tumor. Brain tumor classification, by playing the part of a treatment guide, makes diagnosis easier by providing acquisition tools for medical imagery providing various modalities that are fused for brain tumor classification. Therefore, to perform this task, existing works fuse either 2D brain MRI image slices or 3D brain images. In this paper, we propose a novel semantic method for MRI brain tumor classification using a multimodal fusion of 2D and 3D MRI images. The proposed method raises two major challenges: the semantic classification and the fusion of 2D and 3D images. It consists of three levels: preprocessing, classification, and fusion. The preprocessing level has a considerable impact on the results. At the classification level, we used two deep learning models and two heterogeneous datasets. The DenseNet model is used to classify 2D brain images into three brain tumor categories (Glioma, Meningioma, and Pituitary tumor). The 3D-CNN model is designed for glioma grading (High/Low-grade glioma) using the 3D brain images. At the fusion level, we used specific-domain ontology to perform the fusion of the output classes. The evaluation of the proposed approach on the test set has shown good results and the classification accuracy rate reached 92.06% and 85% for DenseNet and 3D CNN models respectively and 100% at the fusion level.","PeriodicalId":308802,"journal":{"name":"2021 International Congress of Advanced Technology and Engineering (ICOTEN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Deep learning guided by an ontology for medical images classification using a multimodal fusion\",\"authors\":\"Hela Yahyaoui, Fethi Ghazouani, I. Farah\",\"doi\":\"10.1109/ICOTEN52080.2021.9493469\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Brain tumor is regarded as one of the most perilous diseases, with Glioma being the most prevalent form of primary brain tumor. Brain tumor classification, by playing the part of a treatment guide, makes diagnosis easier by providing acquisition tools for medical imagery providing various modalities that are fused for brain tumor classification. Therefore, to perform this task, existing works fuse either 2D brain MRI image slices or 3D brain images. In this paper, we propose a novel semantic method for MRI brain tumor classification using a multimodal fusion of 2D and 3D MRI images. The proposed method raises two major challenges: the semantic classification and the fusion of 2D and 3D images. It consists of three levels: preprocessing, classification, and fusion. The preprocessing level has a considerable impact on the results. At the classification level, we used two deep learning models and two heterogeneous datasets. The DenseNet model is used to classify 2D brain images into three brain tumor categories (Glioma, Meningioma, and Pituitary tumor). The 3D-CNN model is designed for glioma grading (High/Low-grade glioma) using the 3D brain images. At the fusion level, we used specific-domain ontology to perform the fusion of the output classes. 
The evaluation of the proposed approach on the test set has shown good results and the classification accuracy rate reached 92.06% and 85% for DenseNet and 3D CNN models respectively and 100% at the fusion level.\",\"PeriodicalId\":308802,\"journal\":{\"name\":\"2021 International Congress of Advanced Technology and Engineering (ICOTEN)\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Congress of Advanced Technology and Engineering (ICOTEN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICOTEN52080.2021.9493469\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Congress of Advanced Technology and Engineering (ICOTEN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOTEN52080.2021.9493469","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Brain tumors are regarded as among the most perilous diseases, and glioma is the most prevalent form of primary brain tumor. Brain tumor classification acts as a guide for treatment and eases diagnosis; medical image acquisition tools provide various modalities that can be fused for this task. Existing works therefore fuse either 2D brain MRI slices or 3D brain images. In this paper, we propose a novel semantic method for MRI brain tumor classification using a multimodal fusion of 2D and 3D MRI images. The proposed method addresses two major challenges: semantic classification and the fusion of 2D and 3D images. It consists of three levels: preprocessing, classification, and fusion. The preprocessing level has a considerable impact on the results. At the classification level, we use two deep learning models and two heterogeneous datasets: a DenseNet model classifies 2D brain images into three brain tumor categories (glioma, meningioma, and pituitary tumor), and a 3D-CNN model grades gliomas (high/low grade) from the 3D brain images. At the fusion level, a domain-specific ontology fuses the output classes. Evaluation on the test set shows good results: the classification accuracy reaches 92.06% for the DenseNet model, 85% for the 3D CNN model, and 100% at the fusion level.
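
To make the pipeline described in the abstract concrete, the sketch below is a minimal, hypothetical reconstruction rather than the authors' code: a DenseNet-121 classifies a 2D slice into the three tumor categories, a small 3D CNN grades a 3D volume, and a simple rule stands in for the ontology-based fusion of the two output classes. The model configurations, input shapes, and the fusion rule are all assumptions made for illustration.

import torch
import torch.nn as nn
from torchvision.models import densenet121

TUMOR_CLASSES = ["Glioma", "Meningioma", "Pituitary tumor"]   # labels of the 2D branch
GRADE_CLASSES = ["High-grade glioma", "Low-grade glioma"]     # labels of the 3D branch

# 2D branch: DenseNet-121 with its classifier head resized to 3 tumor categories.
densenet_2d = densenet121(weights=None)
densenet_2d.classifier = nn.Linear(densenet_2d.classifier.in_features, len(TUMOR_CLASSES))

# 3D branch: a placeholder 3D CNN for glioma grading (the paper's exact architecture
# is not reproduced here).
class Simple3DCNN(nn.Module):
    def __init__(self, num_classes=len(GRADE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

cnn_3d = Simple3DCNN()

def ontology_fusion(tumor_class: str, glioma_grade: str) -> str:
    """Toy rule standing in for the domain ontology: the grade predicted by the
    3D branch is only semantically consistent when the 2D branch detects a glioma."""
    if tumor_class == "Glioma":
        return glioma_grade          # e.g. "High-grade glioma"
    return tumor_class               # meningiomas / pituitary tumors are not graded

# Usage with dummy inputs: a 3-channel 2D slice and a single-channel 3D volume.
slice_2d = torch.randn(1, 3, 224, 224)
volume_3d = torch.randn(1, 1, 64, 64, 64)
densenet_2d.eval()
cnn_3d.eval()
with torch.no_grad():
    tumor = TUMOR_CLASSES[densenet_2d(slice_2d).argmax(1).item()]
    grade = GRADE_CLASSES[cnn_3d(volume_3d).argmax(1).item()]
print(ontology_fusion(tumor, grade))

In this toy rule, the 3D branch's grade is only attached when the 2D branch predicts a glioma, which mirrors the idea that a domain ontology constrains which combinations of output classes are semantically consistent.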