International Journal of Imaging Systems and Technology — Latest Articles

Semi-Supervised Medical Image Segmentation Based on Feature Similarity and Multi-Level Information Fusion Consistency
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-12-13 · DOI: 10.1002/ima.70009
Jianwu Long, Jiayin Liu, Chengxin Yang
Semantic segmentation is a key task in computer vision, and medical image segmentation is a prominent downstream application that has advanced significantly in recent years. However, the need for extensive annotations remains a major obstacle. Semi-supervised semantic segmentation has emerged as a way to mitigate the annotation burden, but existing semi-supervised methods still struggle to fully exploit unlabeled data and to integrate labeled and unlabeled data efficiently. This paper therefore proposes a novel network model, the feature similarity multilevel information fusion network (FSMIFNet). First, a feature similarity module harnesses deep feature similarity among unlabeled images, predicting true label constraints and guiding segmentation features with deep feature relationships; this fully exploits the deep feature information in unlabeled data. Second, a multilevel information fusion framework integrates labeled and unlabeled data to improve segmentation quality on unlabeled images, enforcing consistency between original and feature maps for comprehensive optimization of detail and global information. On the ACDC dataset, the method achieves an mDice of 0.684 with 5% labeled data, 0.873 with 10%, 0.884 with 20%, and 0.897 with 50%. Experimental results demonstrate the effectiveness of FSMIFNet for semi-supervised semantic segmentation of medical images, outperforming existing methods on public benchmark datasets. The code and models are available at https://github.com/liujiayin12/FSMIFNet.git.
Citations: 0
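The mDice figures reported above are mean Dice overlaps between predicted and reference masks. As a point of reference (not the authors' code), a minimal binary Dice computation can be sketched as:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, target)  # 2*2 / (3 + 3) ≈ 0.667
```

An mDice score averages this quantity over classes (and typically over test cases).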
Advanced Image Enhancement and a Lightweight Feature Pyramid Network for Detecting Microaneurysms in Diabetic Retinopathy Screening
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-12-09 · DOI: 10.1002/ima.70004
Muhammad Zeeshan Tahir, Xingzheng Lyu, Muhammad Nasir, Sanyuan Zhang
Diabetic retinopathy (DR) is a complication of diabetes that can lead to vision impairment and even permanent blindness. The growing number of diabetic patients and the shortage of ophthalmologists highlight the need for automated screening tools for early detection. Microaneurysms (MAs) are the earliest indicators of DR, but detecting them in fundus images is challenging due to their small size and subtle features; low contrast, noise, and lighting variations such as glare and shadows further complicate detection. To address these challenges, we incorporate image enhancement techniques, namely green channel utilization, gamma correction, and median filtering, to improve image quality. To further enhance MA detection, we employ a lightweight feature pyramid network (FPN) with a pretrained ResNet34 backbone to capture multiscale features, together with the convolutional block attention module (CBAM) for feature selection; CBAM applies spatial and channel-wise attention, allowing the model to focus on the most relevant features. We evaluated our method on the IDRID and E-ophtha datasets, achieving a sensitivity of 0.607 and an F1 score of 0.681 on IDRID, and a sensitivity of 0.602 and an F1 score of 0.650 on E-ophtha. These experimental results show that the proposed method outperforms previous approaches.
Citations: 0
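The enhancement steps named in the abstract (green channel, gamma correction, median filtering) can be sketched as follows; the gamma value and filter size are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_fundus(rgb: np.ndarray, gamma: float = 0.8, size: int = 3) -> np.ndarray:
    """Green channel -> gamma correction -> median filtering.

    `rgb` is an (H, W, 3) float array in [0, 1]; `gamma` and `size`
    are illustrative values, not taken from the paper.
    """
    green = rgb[..., 1]                            # green channel has the best lesion contrast
    corrected = np.clip(green, 0.0, 1.0) ** gamma  # gamma correction brightens dark regions
    return median_filter(corrected, size=size)     # suppress impulse noise

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
out = preprocess_fundus(img)
```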
Enhancing Diagnostic Precision in Breast Cancer Classification Through EfficientNetB7 Using Advanced Image Augmentation and Interpretation Techniques
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-12-08 · DOI: 10.1002/ima.70000
T. R. Mahesh, Surbhi Bhatia Khan, Kritika Kumari Mishra, Saeed Alzahrani, Mohammed Alojail
The precise classification of breast ultrasound images into benign, malignant, and normal categories is a critical challenge in medical diagnostics, exacerbated by subtle interclass variations and variable clinical image quality. State-of-the-art approaches largely capitalize on deep convolutional neural networks (CNNs), with particular emphasis on architectures such as EfficientNet pretrained on extensive datasets. While these methods show promise, they frequently suffer from overfitting, reduced resilience to distortions such as noise and artifacts, and pronounced class imbalance in the training data. To address these issues, this study introduces an optimized framework based on the EfficientNetB7 architecture, enhanced by a targeted augmentation strategy: aggressive random rotations, color jittering, and horizontal flipping specifically bolster the representation of minority classes, improving robustness and generalizability. The approach also integrates an adaptive learning-rate scheduler and strategic early stopping to refine training and prevent overfitting. The optimized model achieves a 98.29% accuracy rate on a carefully assembled test dataset, significantly surpassing existing benchmarks and highlighting its ability to handle the intricacies of breast ultrasound image analysis. This high diagnostic accuracy positions the model as a valuable tool for early detection and informed management of breast cancer, potentially transforming current paradigms in oncological care.
Citations: 0
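The augmentation strategy described above (random rotations, color jitter, horizontal flips) might be sketched as below; the jitter ranges are assumptions, and rotation is simplified to 90° steps rather than arbitrary angles:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One random augmentation pass over an (H, W, C) float image in [0, 1].

    Rotation is limited to 90° steps for simplicity (the paper uses
    aggressive random rotations); jitter ranges are illustrative.
    """
    out = np.rot90(img, k=int(rng.integers(0, 4)), axes=(0, 1))          # random rotation
    out = np.clip(out * rng.uniform(0.8, 1.2) + rng.uniform(-0.1, 0.1),  # brightness/contrast jitter
                  0.0, 1.0)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                                            # horizontal flip
    return out

rng = np.random.default_rng(42)
batch = [augment(np.full((16, 16, 3), 0.5), rng) for _ in range(4)]
```

Applying such transforms preferentially to minority-class images is one common way to rebalance a skewed training set.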
MTGWNN: A Multi-Template Graph Wavelet Neural Network Identification Model for Autism Spectrum Disorder
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-12-08 · DOI: 10.1002/ima.70010
Shengchang Shan, Yijie Ren, Zhuqing Jiao, Xiaona Li
Functional magnetic resonance imaging (fMRI) has been widely applied to the study of brain disorders. However, current studies typically model regions of interest (ROIs) with a single brain template and examine only the connectivity between ROIs to identify autism spectrum disorder (ASD), ignoring the structural features of the brain. This study proposes MTGWNN, a multi-template graph wavelet neural network (GWNN) identification model for ASD. First, the brain is segmented with multiple templates and BOLD time series are extracted from the fMRI data to construct brain networks. Next, a graph attention network (GAT) automatically learns interactions between nodes, capturing local information in the node features; these features are further processed by a convolutional neural network (CNN) to learn global connectivity representations and reduce feature dimensionality. Finally, the features and phenotypic data of each subject are integrated by the GWNN to identify ASD at the optimal scale. Experimental results indicate that MTGWNN outperforms the comparative models, achieving an accuracy (ACC) of 87.25% and an area under the curve (AUC) of 92.49% on the public ABIDE-I dataset. MTGWNN effectively integrates brain network features from multiple templates, providing a more comprehensive characterization of brain abnormalities in patients with ASD. It also incorporates population information from phenotypic data, which helps compensate for the limited sample size of individual patients and improves the robustness and generalization of ASD identification.
Citations: 0
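As background, a single-template brain network is commonly built from pairwise correlations of ROI time series; a minimal sketch (far simpler than the multi-template GWNN pipeline above, and with an illustrative threshold) could be:

```python
import numpy as np

def connectivity_matrix(ts: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Build a binary brain network from BOLD time series.

    `ts` is (n_rois, n_timepoints); an edge connects each ROI pair whose
    absolute Pearson correlation exceeds `threshold` (illustrative value).
    """
    corr = np.corrcoef(ts)                        # (n_rois, n_rois) correlations
    adj = (np.abs(corr) > threshold).astype(int)  # threshold into an adjacency matrix
    np.fill_diagonal(adj, 0)                      # no self-loops
    return adj

rng = np.random.default_rng(1)
ts = rng.standard_normal((10, 120))               # 10 ROIs, 120 timepoints
adj = connectivity_matrix(ts)
```

Repeating this per template and feeding the resulting graphs to a graph neural network is the general shape of the multi-template approach.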
Diffusion-Based Causality-Preserving Neural Network for Dementia Recognition
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-12-07 · DOI: 10.1002/ima.70005
Saqib Mamoon, Zhengwang Xia, Amani Alfakih, Jianfeng Lu
Analyses of large-scale functional brain networks in brain disorders often rely on undirected correlations between the activation signals of brain regions. By focusing on co-occurring activations, this approach overlooks the directionality inherent in brain connectivity: established research indicates that brain networks are causal, with activation patterns not only co-occurring but potentially influencing one another. To this end, we propose a novel diffusion vector auto-regressive (Diff-VAR) method that assesses whole-brain effective connectivity (EC) as a directed, weighted network by integrating the search objectives into a deep neural network as learnable parameters. The EC learned by our method identifies widespread differences in the flow of influence within the brain network between individuals with impaired and normal brain function. Moreover, we introduce an adaptive smoothing mechanism to enhance the stability and reliability of the inferred EC. We evaluated the method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database and compared it with existing correlation-based and causality-based methods. The results reveal that the brain networks constructed by our method achieve high classification accuracy and exhibit features consistent with physiological mechanisms. The code is available at https://github.com/SaqibMamoon/Diff-VAR.
Citations: 0
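Diff-VAR embeds a vector auto-regressive model into a deep network; the classical least-squares VAR(1) fit it builds on can be sketched as follows (a stand-in for intuition, not the authors' method, which adds diffusion-based learning and adaptive smoothing):

```python
import numpy as np

def fit_var1(ts: np.ndarray) -> np.ndarray:
    """Least-squares VAR(1) fit: x_t ≈ A @ x_{t-1}.

    `ts` is (n_regions, n_timepoints). The returned A is a directed,
    weighted influence matrix, a classical stand-in for the effective
    connectivity that Diff-VAR learns end-to-end.
    """
    x_past, x_next = ts[:, :-1], ts[:, 1:]
    # Solve A @ x_past ≈ x_next  =>  A = x_next @ pinv(x_past)
    return x_next @ np.linalg.pinv(x_past)

# Simulate a 2-region system with known directed influence, then recover it.
rng = np.random.default_rng(0)
true_A = np.array([[0.5, 0.2],
                   [0.0, 0.4]])
T = 2000
ts = np.zeros((2, T))
ts[:, 0] = rng.standard_normal(2)
for t in range(1, T):
    ts[:, t] = true_A @ ts[:, t - 1] + 0.01 * rng.standard_normal(2)
A_hat = fit_var1(ts)
```

Note the asymmetry of `A_hat`: unlike a correlation matrix, it distinguishes influence from region 1 to region 2 from the reverse direction.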
DCFU-Net: Rethinking an Effective Attention and Convolutional Architecture for Retinal Vessel Segmentation
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-12-02 · DOI: 10.1002/ima.70003
Yongli Xian, Guangxin Zhao, Xuejian Chen, Congzheng Wang
Morphological changes in retinal vessels are early indicators of cardiovascular and various fundus diseases. However, accurately segmenting thin vessels remains challenging due to the complexity of the vascular structure and the irregularity of pathological features. This paper proposes a dual chain fusion U-Net (DCFU-Net) for precise retinal vessel segmentation. The network consists of a multi-level segmentation network and a fusion network: the former, designed with a dual chain architecture, generates segmentation results for thick and thin vessels simultaneously; the latter combines the segmented thin and thick vessels with the original image to produce accurate final segmentations. Notably, traditional convolutions in DCFU-Net are replaced by dynamic snake convolutions (DS-Conv), which adaptively focus on slender, tortuous local features and accurately capture vascular structures. A shared-weight residual block integrating DS-Conv with residual structures, called the DS-Res block, serves as the backbone of DCFU-Net, enhancing feature extraction while significantly reducing computational resource consumption. Additionally, the paper rethinks the effective components of the Transformer architecture, identifying the inverted residual mobile block (IRMB) as a key element; by extending the DS-Conv-based IRMB into effective attention-based (EAB) blocks, the network mitigates the loss of semantic information and addresses inherent limitations. DCFU-Net is evaluated on three publicly available datasets: DRIVE, STARE, and CHASE_DB1. Qualitative and quantitative analyses demonstrate that its segmentation results outperform state-of-the-art methods.
Citations: 0
A Lightweight Multimodal Xception Network for Glioma Grading Using MRI Images
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-11-25 · DOI: 10.1002/ima.70001
Yu Liang, Dongjie Li, Jiaxin Ren, Weida Gao, Yao Zhang
Gliomas are the most common type of primary brain tumor, classified into low-grade gliomas (LGGs) and high-grade gliomas (HGGs). Survival rates differ significantly between grades, making imaging-based grading a research hotspot. Current deep learning glioma-grading algorithms face challenges such as network complexity, low accuracy, and difficulty in large-scale application. This paper proposes a multimodal, lightweight Xception grading network to address these issues. The network introduces convolutional block attention modules and employs dilated convolutions for spatial feature aggregation, reducing the parameter count while keeping the same receptive field. By integrating spatial and channel squeeze-and-excitation modules, it achieves more accurate feature learning, alongside improvements to the residual connection modules for retaining critical information. Compared with existing methods, the proposed approach improves classification accuracy while maintaining a reduced parameter count. The network was trained and validated on 344 glioma cases (261 HGGs and 83 LGGs) and tested on 38 cases (29 HGGs and 9 LGGs). Experimental results show an accuracy of 92.67% and an AUC of 0.9413 with a fully connected layer as the classifier; features extracted by the improved Xception network reach 93.42% accuracy when classified with KNN and RF classifiers. This study aims to provide diagnostic suggestions for clinical use through a simple, effective, and noninvasive multimodal imaging method for LGG/HGG grading, thereby accelerating treatment decision-making.
Citations: 0
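The parameter-saving effect of dilated convolutions mentioned above can be illustrated in one dimension: a k-tap kernel with dilation d spans (k-1)d+1 inputs, matching a larger dense kernel's receptive field with fewer weights. A minimal sketch:

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, w: np.ndarray, dilation: int = 1) -> np.ndarray:
    """Valid 1D convolution with dilation.

    The receptive field is (k - 1) * dilation + 1 for a kernel of length k.
    """
    k = len(w)
    span = (k - 1) * dilation + 1
    return np.array([sum(w[j] * x[i + j * dilation] for j in range(k))
                     for i in range(len(x) - span + 1)])

x = np.arange(10.0)
y = dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2)
# A 3-tap kernel with dilation 2 spans 5 inputs: the same receptive
# field as a dense 5-tap kernel, with 3 parameters instead of 5.
```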
Unveiling Cancer: A Data-Driven Approach for Early Identification and Prediction Using F-RUS-RF Model
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-11-20 · DOI: 10.1002/ima.23221
Ashir Javeed, Peter Anderberg, Muhammad Asim Saleem, Ahmad Nauman Ghazi, Johan Sanmartin Berglund
Globally, cancer is the second-leading cause of death after cardiovascular disease. To improve survival rates, risk factors and cancer predictors must be identified early. Researchers have developed several machine learning-based diagnostic systems for early cancer prediction. This study presents a diagnostic system that identifies risk factors linked to the onset of cancer in order to anticipate it early. The system consists of two modules: the first ranks the variables in the dataset with a statistical F-score method, and the second deploys a random forest (RF) model for classification, with hyperparameters optimized by a genetic algorithm for improved accuracy. A dataset of 10,765 samples with 74 variables each was gathered from the Swedish National Study on Aging and Care (SNAC). Because of the extreme imbalance between the classes, a random undersampling strategy was used to balance them and prevent bias in the model. The integrated system, called F-RUS-RF, achieved the highest accuracy of 86.15%, with a sensitivity of 92.25% and a specificity of 85.14%, using only the six variables ranked highest by the F-score approach. Addressing the risk factors identified by the F-RUS-RF model could lower the incidence of cancer in the aging population.
Citations: 0
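The three stages of F-RUS-RF (F-score ranking, random undersampling, random forest) can be sketched with scikit-learn on synthetic data; the dataset and hyperparameters here are illustrative stand-ins, not the SNAC cohort or the genetically tuned settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif
from sklearn.model_selection import train_test_split

# Imbalanced toy data standing in for the real cohort (not the SNAC data).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)

# 1) F-score ranking: keep the six top-ranked variables, as in the paper.
f_scores, _ = f_classif(X, y)
top6 = np.argsort(f_scores)[::-1][:6]
X = X[:, top6]

# 2) Random undersampling: drop majority samples to match the minority count.
rng = np.random.default_rng(0)
minority, majority = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
keep = np.concatenate([minority,
                       rng.choice(majority, size=minority.size, replace=False)])
X_bal, y_bal = X[keep], y[keep]

# 3) Random forest classifier (hyperparameters illustrative; the paper
# tunes them with a genetic algorithm).
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```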
CATNet: A Cross Attention and Texture-Aware Network for Polyp Segmentation
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-11-16 · DOI: 10.1002/ima.23220
Zhifang Deng, Yangdong Wu
Polyp segmentation is challenging because some polyps exhibit textures similar to the surrounding tissue, making them difficult to distinguish. We therefore present CATNet, a parallel cross-attention and texture-aware network. CATNet incorporates a parallel cross-attention mechanism, a Residual Feature Fusion module, and texture-aware modules. Polyp images are first processed by the backbone network to extract multi-level features. The parallel cross-attention mechanism then sequentially captures channel and spatial dependencies across multi-scale features, yielding enhanced representations. These representations are fed into multiple texture-aware modules, which facilitate segmentation by accentuating subtle textural disparities between polyps and background. Finally, the Residual Feature Fusion module integrates the segmentation results with the previous layer's enhanced representations, eliminating background noise and enhancing intricate details. We assess the method on five polyp datasets; on the three unseen ones, CVC-300, CVC-ColonDB, and ETIS, CATNet achieves mDice scores of 0.916, 0.817, and 0.777, respectively. Experimental results demonstrate the superior performance of our approach over current models. By addressing the challenges posed by textural similarity, CATNet sets a benchmark for future advances in automated polyp detection and segmentation.
Citations: 0
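The cross-attention idea at CATNet's core, queries from one feature map attending to keys and values from another, can be sketched in its simplest single-head form (learned projection matrices omitted for brevity; this is not the paper's implementation):

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat: np.ndarray, kv_feat: np.ndarray) -> np.ndarray:
    """Single-head cross-attention: tokens in `q_feat` attend to `kv_feat`.

    Shapes are (tokens, dim). Real implementations learn query/key/value
    projections; they are omitted here to keep the sketch minimal.
    """
    d = q_feat.shape[-1]
    attn = softmax(q_feat @ kv_feat.T / np.sqrt(d))  # (Nq, Nkv) attention weights
    return attn @ kv_feat                            # weighted mix of values

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16))  # e.g. flattened spatial tokens of one scale
b = rng.standard_normal((8, 16))  # tokens of another scale
out = cross_attention(a, b)
```

Running the same mechanism over channel tokens instead of spatial tokens gives the channel-dependency half of the parallel design.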
Predicting the Early Detection of Breast Cancer Using Hybrid Machine Learning Systems and Thermographic Imaging
IF 3.0 · Region 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-11-16 · DOI: 10.1002/ima.23211
Mohammad Mehdi Hosseini, Zahra Mosahebeh, Somenath Chakraborty, Abdorreza Alavi Gharahbagh
Breast cancer is a leading cause of mortality among women, emphasizing the critical need for precise early detection and prognosis. Conventional methods often struggle to differentiate precancerous lesions or tailor treatments effectively. Thermal imaging, which captures subtle temperature variations, offers a promising avenue for non-invasive cancer detection; however, while some studies explore thermography for breast cancer detection, integrating it with advanced machine learning for early diagnosis and personalized prediction remains relatively unexplored. This study proposes a novel hybrid machine learning system (HMLS) incorporating deep autoencoder techniques for automated early detection and prognostic stratification of breast cancer patients. By exploiting the temporal dynamics of thermographic data, the approach offers a more comprehensive analysis than static single-frame methods. The dataset was split for training and testing; a predominant infrared image was selected, and matrix factorization was applied to capture temperature changes over time. Convex factor analysis and bell-curve membership function embedding were integrated for dimensionality reduction and feature extraction, and an autoencoder deep neural network further reduced dimensionality. HMLS development included feature selection and optimization of survival prediction algorithms through cross-validation, with performance assessed by accuracy and F-measure. HMLS with integrated clinical data achieved 81.6% accuracy, surpassing the 77.6% obtained using convex-NMF alone, and the best classifier attained 83.2% accuracy on test data. These results demonstrate the effectiveness of thermographic imaging and HMLS for accurate early detection and personalized prediction of breast cancer; the proposed framework holds promise for enhancing patient care and potentially reducing mortality rates.
Citations: 0
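The matrix-factorization step described above decomposes a pixels-by-frames thermal sequence into spatial basis maps and temporal weights. The study uses convex-NMF; plain NMF, the nearest scikit-learn primitive, is sketched here on synthetic data as an illustrative stand-in:

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for a thermographic sequence: each column holds one
# frame's pixel temperatures (the study uses convex-NMF; plain NMF is
# shown as the closest scikit-learn primitive, not the authors' method).
rng = np.random.default_rng(0)
frames = rng.random((256, 20))      # 256 pixels x 20 time frames, nonnegative

model = NMF(n_components=4, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(frames)     # spatial basis maps: (pixels, components)
H = model.components_               # temporal weights:   (components, frames)
recon_err = np.linalg.norm(frames - W @ H) / np.linalg.norm(frames)
```

The rows of `H` summarize how each spatial pattern's temperature evolves over the sequence, which is the temporal signal the downstream classifier consumes.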