Authors: S. Sandhya, M. Senthil Kumar
DOI: 10.1166/jmihi.2022.3942
Journal: Journal of Medical Imaging and Health Informatics, Vol. 27, No. 1
Publication date: 2022-03-01 (Journal Article)
Full text: https://doi.org/10.1166/jmihi.2022.3942
Automated Multimodal Fusion Based Hyperparameter Tuned Deep Learning Model for Brain Tumor Diagnosis
As medical image processing research has progressed, image fusion has emerged as a practical solution that automatically extracts relevant data from multiple images and fuses them into a single, unified image. Medical imaging techniques such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) play a crucial role in the diagnosis and classification of brain tumors (BT). A single imaging technique is not sufficient for a correct diagnosis: when scans are ambiguous, they can lead doctors to incorrect diagnoses, which can be unsafe for the patient. The solution to this problem is to fuse images from different scans containing complementary information, generating accurate images with minimal uncertainty. This research presents a novel method for the automated identification and classification of brain tumors using multi-modal deep learning (AMDL-BTDC). The proposed AMDL-BTDC model first performs image pre-processing using the bilateral filtering (BF) technique. Next, feature vectors are generated using a pair of pre-trained deep learning models, EfficientNet and SqueezeNet. The Slime Mould Algorithm (SMA) is used to find the DL models' optimal hyperparameter settings. Finally, once the features have been fused, an autoencoder (AE) model is used for BT classification. Extensive testing on a benchmark medical imaging dataset validated the suggested model's superior performance over other techniques across diverse measures.
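The bilateral filtering pre-processing step can be sketched as follows. This is a minimal NumPy implementation for illustration only; the window radius and the spatial/range sigmas are illustrative assumptions, not the paper's settings, and a production pipeline would typically use an optimized library routine instead.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted
    average where weights combine spatial closeness and intensity
    similarity. img is a 2-D float array with values in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian kernel over the window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: pixels with similar intensity get higher
            # weight, so sharp edges are not blurred away.
            rng_k = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng_k
            out[i, j] = (weights * window).sum() / weights.sum()
    return out
```

The edge-preserving property is what makes BF attractive for tumor boundaries: plain Gaussian smoothing would blur lesion edges along with the noise.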
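The abstract does not specify how the two backbones' feature vectors are combined; a common choice, shown here purely as an assumption, is late fusion by normalising and concatenating the vectors so that neither model dominates by scale.

```python
import numpy as np

def fuse_features(feat_a, feat_b):
    """Late fusion sketch: L2-normalise each backbone's feature
    vector, then concatenate. feat_a and feat_b stand in for the
    EfficientNet and SqueezeNet outputs respectively."""
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([a, b])
```

The fused vector is what the downstream classifier consumes, so the two feature spaces must be put on a comparable scale before concatenation.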
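The hyperparameter search can be illustrated with a heavily simplified, SMA-flavoured population optimizer: candidates wander around the best solution found so far with a step size that shrinks over time, loosely mimicking the oscillation behaviour of the full Slime Mould Algorithm. This is not the authors' implementation; population size, iteration count, and the step schedule are all illustrative.

```python
import numpy as np

def sma_minimise(objective, bounds, pop=20, iters=50, seed=0):
    """Simplified SMA-style search. bounds is a list of (lo, hi)
    pairs, one per hyperparameter; objective maps a vector to a
    scalar loss (e.g. validation error of the DL model)."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    fitness = np.array([objective(x) for x in X])
    best = X[fitness.argmin()].copy()
    for t in range(iters):
        a = 1.0 - t / iters          # step size shrinks over time
        step = a * rng.uniform(-1, 1, X.shape)
        X = np.clip(best + step * (hi - lo), lo, hi)
        fitness = np.array([objective(x) for x in X])
        if fitness.min() < objective(best):
            best = X[fitness.argmin()].copy()
    return best
```

In the paper's setting, the objective would wrap training/validation of the EfficientNet and SqueezeNet backbones, which is why a cheap derivative-free optimizer over a low-dimensional hyperparameter box is appealing.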
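The final stage can be sketched at the architecture level: an encoder compresses the fused feature vector to a latent code, a decoder reconstructs the input (giving the AE its reconstruction loss), and a small head maps the latent code to class scores. The weights below are random placeholders, and the binary tumor/no-tumor head is an assumption, since the abstract does not state the number of classes.

```python
import numpy as np

def make_autoencoder(in_dim, latent_dim, seed=0):
    """Untrained AE-classifier sketch: returns a forward function
    producing (latent code, reconstruction, class logits)."""
    rng = np.random.default_rng(seed)
    W_enc = rng.standard_normal((in_dim, latent_dim)) * 0.1
    W_dec = rng.standard_normal((latent_dim, in_dim)) * 0.1
    W_cls = rng.standard_normal((latent_dim, 2)) * 0.1  # assumed 2 classes

    def forward(x):
        z = np.tanh(x @ W_enc)   # latent code from fused features
        recon = z @ W_dec        # reconstruction, used for the AE loss
        logits = z @ W_cls       # classification head
        return z, recon, logits

    return forward
```

Training would jointly minimise reconstruction error and classification loss, so the latent code stays both compact and discriminative.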
Journal introduction:
Journal of Medical Imaging and Health Informatics (JMIHI) is a medium for disseminating novel experimental and theoretical research results in biomedicine, biology, clinical and rehabilitation engineering, medical image processing, bio-computing, D2H2, and other health-related areas. As an example, Distributed Diagnosis and Home Healthcare (D2H2) aims to improve the quality of patient care and patient wellness by transforming the delivery of healthcare from a central, hospital-based system to one that is more distributed and home-based. The journal focuses on the medical imaging modalities used to extract information from MRI, CT, ultrasound, X-ray, thermal, and molecular imaging, as well as the fusion of these techniques.