{"title":"Toward more accurate diagnosis of multiple sclerosis: Automated lesion segmentation in brain magnetic resonance image using modified U-Net model","authors":"Bakhtiar Amaludin, Seifedine Kadry, Fung Fung Ting, David Taniar","doi":"10.1002/ima.22941","DOIUrl":"10.1002/ima.22941","url":null,"abstract":"<p>Early diagnosis of multiple sclerosis (MS) through the delineation of lesions in brain magnetic resonance imaging is important in preventing the deterioration of MS. This study aims to develop a modified U-Net model for automating lesion segmentation in MS more accurately. The proposed modified U-Net uses residual dense blocks to replace the standard convolutional stacks and incorporates three axes (axial, sagittal, and coronal) of 2D slice images as input. Furthermore, a custom fusion method is introduced for merging the predicted lesions from the different axes. The model was implemented on the ISBI2015 and OpenMS data sets, achieving the best overall score of 93.090% on ISBI2015 and a DSC of 0.857 on OpenMS.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.22941","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72978055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
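The abstract above mentions a custom method for fusing lesion predictions from the axial, sagittal, and coronal axes but does not specify the rule. A minimal majority-vote sketch (a hypothetical stand-in, not the authors' fusion method):

```python
def fuse_axis_predictions(axial, sagittal, coronal, min_votes=2):
    """Merge binary lesion masks predicted on three 2D slice axes.

    Each argument is a flat list of 0/1 voxel predictions over the same
    volume; a voxel is kept as lesion if at least `min_votes` of the
    three per-axis models agree on it.
    """
    return [int(a + s + c >= min_votes)
            for a, s, c in zip(axial, sagittal, coronal)]


# Example: the middle two voxels get two votes each and survive fusion.
fused = fuse_axis_predictions([1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0])
```

A vote threshold of 2-of-3 trades a small sensitivity loss for fewer single-axis false positives; a soft alternative would average per-axis probabilities before thresholding.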
{"title":"An efficient skin cancer detection and classification using Improved Adaboost Aphid–Ant Mutualism model","authors":"G. Renith, A. Senthilselvi","doi":"10.1002/ima.22932","DOIUrl":"https://doi.org/10.1002/ima.22932","url":null,"abstract":"Skin cancer is among the most common deadly diseases, caused by abnormal and uncontrolled growth of cells in the human body. According to a report, nearly one million people worldwide are affected by skin cancer annually. To protect human lives from such a life-threatening disease, early identification of skin cancer is the only precautionary measure. Numerous automated techniques already exist to detect and classify skin lesion malignancies using dermoscopic images. However, analyzing dermoscopic images is an arduous task due to troublesome features such as light reflections, illumination variations, and uneven shape and dimension. To address the challenges of the skin cancer recognition process, this paper proposes an efficient intelligent automated system to classify dermoscopic images as malignant or benign. The proposed skin cancer detection model uses the HAM10000 dataset for evaluation. The dermoscopic images acquired from the HAM10000 dataset are first preprocessed to enhance image quality, making them fit to train the classifier. The most significant image patterns are then extracted by the AlexNet architecture without any loss of detailed information. The extracted features are input to the proposed Improved Adaboost-based Aphid-Ant Mutualism (IAB-AAM) classification model to discriminate the images into malignant and benign categories. The proposed IAB-AAM approach shows an extensive enhancement in classification accuracy, attributed to integrating the AAM optimization concept with the IAB model. The efficiency of the proposed IAB-AAM technique is analyzed by comparing its performance with other modern methods on evaluation indicators, namely accuracy, precision, specificity, sensitivity, and f-measure. Experimental results show that the proposed IAB-AAM technique attains a greater accuracy rate, 95.7%, in detecting skin cancer classes than the other compared approaches.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"1957-1972"},"PeriodicalIF":3.3,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71956828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
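The abstract above evaluates the classifier on accuracy, precision, specificity, sensitivity, and f-measure. A minimal sketch of how these five indicators derive from the binary confusion matrix (standard definitions; the function and label encoding are illustrative, not taken from the paper):

```python
def binary_metrics(y_true, y_pred):
    """Compute the five indicators from paired 0/1 labels (1 = malignant).

    accuracy    = (TP + TN) / all
    precision   = TP / (TP + FP)
    sensitivity = TP / (TP + FN)   (recall)
    specificity = TN / (TN + FP)
    f_measure   = harmonic mean of precision and sensitivity
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f_measure = (2 * precision * sensitivity / (precision + sensitivity)
                 if precision + sensitivity else 0.0)
    return {"accuracy": (tp + tn) / len(y_true), "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f_measure": f_measure}
```

Reporting specificity alongside sensitivity matters here because benign lesions dominate HAM10000, so accuracy alone can mask poor malignant-class recall.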
{"title":"An efficient deep learning algorithm for the segmentation of cardiac ventricles","authors":"Ciyamala Kushbu Sadhanandan, Inbamalar Tharcis Mariapushpam, Sudha Suresh","doi":"10.1002/ima.22929","DOIUrl":"https://doi.org/10.1002/ima.22929","url":null,"abstract":"For the effective diagnosis of cardiovascular disease (CVD), anatomical characteristics of the heart must be examined, which depends on segmenting the cardiac tissues of interest and then classifying them into appropriate pathological groups. In recent years, deep learning (DL)-based computer-aided diagnosis (CAD) segmentation has been employed to automate the segmentation process. Despite the evolution of several DL methods, they still fail due to the shape variation of the heart across patients and the limited amount of available data. This paper proposes an effective Saliency and Active Contour-based Attention UNet3+ algorithm to segment the ventricles of the heart, a challenging task for most researchers, especially for the irregularly shaped right ventricle (RV) that varies over cardiac phases. The algorithm outperforms other state-of-the-art methods on Dice coefficient (DC) metrics, which proves its efficiency in automating the segmentation process.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"2044-2060"},"PeriodicalIF":3.3,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71953617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient quantification of COVID-19 in chest CT images with improved semantic segmentation using U-Net deep structure","authors":"Aya Nader Salama, M. A. Mohamed, Hanan M. Amer, Mohamed Maher Ata","doi":"10.1002/ima.22930","DOIUrl":"https://doi.org/10.1002/ima.22930","url":null,"abstract":"The worldwide spread of the coronavirus (COVID-19) outbreak has proven devastating to public health. Assessing the severity of pneumonia relies on a rapid and accurate diagnosis of COVID-19 in CT images; accordingly, a computed tomography (CT) scan is an excellent screening tool for detecting COVID-19. This paper proposes a deep learning-based strategy for recognizing and segmenting COVID-19 lesions from chest CT images, giving physicians an accurate computer-aided decision criterion for the severity of a patient's condition. Two main stages are proposed for detecting COVID-19: first, a convolutional neural network (CNN) deep structure recognizes and classifies COVID-19 from CT images; second, a U-Net deep structure segments the COVID-19 regions in a semantic manner. The proposed system is trained and evaluated on three different COVID-19 CT datasets: two are used to illustrate the system's segmentation performance, and the third demonstrates its classification ability. Experimental results reveal that the proposed CNN achieves classification accuracy greater than 0.99, and the proposed U-Net model outperforms the state of the art in segmentation with an IOU greater than 0.92.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"1882-1901"},"PeriodicalIF":3.3,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71957819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
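The segmentation result above is reported as IOU (intersection over union). A minimal sketch of how that overlap score is computed for binary masks (standard definition; the flat-list mask encoding is illustrative):

```python
def iou(pred, target):
    """Intersection-over-Union between two binary masks.

    Masks are flat lists of 0/1 pixel labels over the same image;
    IOU = |pred AND target| / |pred OR target|.
    """
    inter = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, target) if p == 1 or t == 1)
    # Two empty masks overlap perfectly by convention.
    return inter / union if union else 1.0
```

Because the union term penalizes both missed lesion pixels and spurious ones, an IOU above 0.92 is a strong result for irregular infection regions.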
{"title":"Glioma detection using EHO based FLAME clustering in MR brain images","authors":"Baiju Karun, T. Arun Prasath, M. Pallikonda Rajasekaran, Rakhee Makreri","doi":"10.1002/ima.22937","DOIUrl":"10.1002/ima.22937","url":null,"abstract":"<p>MRI is a popular imaging method for examining brain tumours. The ability to precisely segment tumours from MRI is essential for medical diagnostics and surgical planning, and manual tumour segmentation can be unrealistic for more comprehensive studies. For effective tumour dissection from brain MRI, this paper proposes a novel combination of the FLAME and EHO algorithms. FLAME is a clustering method that groups the most similar pixels into a single cluster. EHO is a nature-inspired metaheuristic optimization algorithm based on the social herding behaviour of elephants and their swimming search methods. The proposed methodology's efficiency is validated through testing on various BraTS challenge datasets. The average computational time, mean squared error, peak signal-to-noise ratio, Tanimoto coefficient, and Dice score obtained are 23.3775 s, 0.213, 54.9669 dB, 54.6148%, and 84.053%, respectively.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74231981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
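The abstract above reports four of its quality figures as MSE, PSNR (dB), Tanimoto coefficient, and Dice score. A minimal sketch of those metrics over flattened images/masks (standard textbook definitions; function names and the `max_val` default are illustrative, not from the paper):

```python
import math

def mse(a, b):
    """Mean squared error between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * math.log10(max_val ** 2 / err)

def tanimoto(pred, target):
    """Tanimoto coefficient of binary masks: |A∩B| / |A∪B|."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice score of binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```

Note that for binary masks Dice and Tanimoto measure the same overlap on different scales (Dice = 2T / (1 + T)), which is consistent with the paper reporting both.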
{"title":"Grading of steatosis, fibrosis, lobular inflammation, and ballooning from liver pathology images using pre-trained convolutional neural networks","authors":"Hamed Zamanian, Ahmad Shalbaf","doi":"10.1002/ima.22936","DOIUrl":"https://doi.org/10.1002/ima.22936","url":null,"abstract":"This study aims to automatically detect the degree of pathological indices, as a reference method for assessing the severity and extent of various liver diseases, from pathological images of liver tissue with the help of deep learning algorithms. Grading is done using a collection of pre-trained convolutional neural networks, including DenseNet121, ResNet50, InceptionV3, MobileNet, EfficientNet-b1, EfficientNet-b4, Xception, NASNetMobile, and VGG16, applied by fine-tuning the trainable layers of each network. The results showed that, compared to the other methods, the EfficientNet-b1 network provides the best response for grading the stage of liver disease across all indicators from pathological images, owing to its structural features. The classification accuracy was 97.26% for fibrosis, 94.1% for steatosis, 90.2% for lobular inflammation, and 98.0% for ballooning. Consequently, this fully automated framework can be very useful in clinical practice and can be considered an assistant to, or an alternative to, diagnosis by experienced pathologists.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"2178-2193"},"PeriodicalIF":3.3,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71987310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lesion-aware network for diabetic retinopathy diagnosis","authors":"Xue Xia, Kun Zhan, Yuming Fang, Wenhui Jiang, Fei Shen","doi":"10.1002/ima.22933","DOIUrl":"https://doi.org/10.1002/ima.22933","url":null,"abstract":"Deep learning has boosted automatic diabetic retinopathy (DR) diagnosis, greatly helping ophthalmologists with early disease detection and thereby preventing disease deterioration that may eventually lead to blindness. It has been proven that convolutional neural network (CNN)-aided lesion identification or segmentation benefits automatic DR screening. The key to fine-grained lesion tasks mainly lies in: (1) extracting features that are both sensitive to tiny lesions and robust against DR-irrelevant interference, and (2) exploiting and re-using encoded information to restore lesion locations under an extremely imbalanced data distribution. To this end, we propose a CNN-based DR diagnosis network with an attention mechanism, termed the lesion-aware network, to better capture lesion information from imbalanced data. Specifically, we design the lesion-aware module (LAM) to capture noise-like lesion areas across deeper layers, and the feature-preserve module (FPM) to assist shallow-to-deep feature fusion. The proposed lesion-aware network (LANet) is constructed by embedding the LAM and FPM into the CNN decoders for DR-related information utilization, and is further extended to a DR screening network by adding a classification layer. In experiments on three public fundus datasets with pixel-level annotations, our method outperforms mainstream methods with an area under the curve of 0.967 in DR screening, and increases the overall average precision by 7.6%, 2.1%, and 1.2% in lesion segmentation on the three datasets. The ablation study further validates the effectiveness of the proposed sub-modules.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"1914-1928"},"PeriodicalIF":3.3,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71986297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cerebral stroke classification based on fusion model of 3D EmbedConvNext and 3D Bi-LSTM network","authors":"Xinying Wang, Jian Yi, Yang Li","doi":"10.1002/ima.22928","DOIUrl":"https://doi.org/10.1002/ima.22928","url":null,"abstract":"Acute stroke can be effectively treated within 4.5 h of onset. To help doctors judge the onset time of the disease as soon as possible, a fusion model of a 3D EmbedConvNeXt and a 3D Bi-LSTM network is proposed. It uses DWI brain images to distinguish between cases where the stroke onset time is within 4.5 h and those beyond it. The 3D EmbedConvNeXt replaces 2D convolution with 3D convolution based on the original ConvNeXt, and its downsample layer uses a self-attention module. The 3D features of EmbedConvNeXt are output to the 3D Bi-LSTM for learning. The 3D Bi-LSTM is mainly used to obtain the spatial relationship of multiple planes (axial, coronal, and sagittal), effectively learning the 3D time-series information in the depth, length, and width directions of the feature maps. Classification experiments on stroke data sets provided by cooperating hospitals show that our model achieves an accuracy of 0.83.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"1944-1956"},"PeriodicalIF":3.3,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71947084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying machine learning to screen for acute myocardial infarction-related biomarkers and immune infiltration features and validate it clinically and experimentally","authors":"Zhenrun Zhan, Pengyong Han, Xu Tang, Jinpeng Yang, Xiaodan Bi, Tingting Zhao","doi":"10.1002/ima.22927","DOIUrl":"https://doi.org/10.1002/ima.22927","url":null,"abstract":"Over the past decade, acute myocardial infarction (AMI) has been responsible for 8.5 million deaths worldwide each year and is the leading cause of death. It is a severe illness worldwide and can occur across multiple age categories. Despite significant progress in fundamental and clinical studies of AMI, biomarkers of AMI development have not been adequately investigated. The present research aimed to characterize potential new biomarkers of AMI by comprehensive analysis and to explore the immune infiltration characteristics of this pathophysiological process. In this study, we identified 68 DEGs and performed gene set enrichment analysis and GO, Disease Ontology, and KEGG analyses; the results suggested that several functional signaling pathways and essential genes are strongly related to the onset and progression of AMI. In addition, combining multiple algorithms, FCER1G, CLEC4D, SRGN, and SLC11A1 were determined to be prospective biomarkers of AMI and showed good diagnostic value. Immune-infiltration analysis suggested that neutrophils, CD8+ T cells, monocytes, and M0 macrophages might be involved in the onset and progress of AMI. In conclusion, a combined approach was employed to select biomarkers associated with AMI and to probe the critical function of immune cells in the progression of AMI. In addition, clinical studies were applied to analyze the correlation between the occurrence of AMI and lipid dysregulation.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"2023-2043"},"PeriodicalIF":3.3,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71917652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiscale attention network for retinal vein occlusion classification with multicolor image","authors":"Xiaochen Wang, Yanhui Ding, Yuanjie Zheng","doi":"10.1002/ima.22917","DOIUrl":"https://doi.org/10.1002/ima.22917","url":null,"abstract":"Automatic diagnostic approaches widely use various retinal images to classify ocular diseases, and retinal vein occlusion (RVO) is the second most common retinal vascular disease after diabetic retinopathy. In clinical practice, ophthalmologists are usually accustomed to images of one modality, but single-modality images often ignore modality-specific information. To solve this problem, this paper uses a novel retinal imaging modality, multicolor (MC) imaging, for RVO recognition. It obtains four modal images at different wavelengths, providing much richer information about retinal features. Since MC images contain local and global pathologies at multiple scales, a multiscale attention structure is proposed to recognize RVO. In simple terms, this structure uses ResNet as the backbone network for feature extraction, with simultaneous input of images in the four modalities. The feature maps at different scales are then fed into an attention module that fuses global and local features by combining two attention mechanisms: channel attention and spatial attention. Extensive experimental results demonstrate that the proposed framework achieves promising classification performance on fundus disease and normal images.","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 6","pages":"2012-2022"},"PeriodicalIF":3.3,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71988192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
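The abstract above combines channel and spatial attention to reweight feature maps. A minimal learned-parameter-free sketch of the two gating mechanisms on a C x H x W feature map (nested lists; a toy illustration of the general idea, not the paper's module):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    """Gate each channel by a sigmoid of its global average
    (squeeze-and-excitation style, omitting the learned MLP)."""
    gates = [sigmoid(sum(map(sum, ch)) / (len(ch) * len(ch[0])))
             for ch in fmap]
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(fmap, gates)]

def spatial_attention(fmap):
    """Gate each spatial position by a sigmoid of its cross-channel mean,
    so all channels share one H x W attention map."""
    c, h, w = len(fmap), len(fmap[0]), len(fmap[0][0])
    gate = [[sigmoid(sum(fmap[k][i][j] for k in range(c)) / c)
             for j in range(w)] for i in range(h)]
    return [[[ch[i][j] * gate[i][j] for j in range(w)]
             for i in range(h)] for ch in fmap]
```

Channel attention answers "which feature type matters" while spatial attention answers "where it matters"; applying them in sequence, as CBAM-style modules do, lets the network emphasize lesion-bearing channels and locations at once.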