Latest Articles in International Journal of Imaging Systems and Technology

Improving Image Quality of Thin-Slice and Low-keV Images in Dual-Energy CT Angiography for Children With Neuroblastoma Using Deep Learning Image Reconstruction
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-13 | DOI: 10.1002/ima.70143
Jihang Sun, Haoyan Li, Shen Yang, Ruifang Sun, Fanning Wang, Zhenpeng Chen, Yun Peng

Abstract: Neuroblastoma (NB) is a common malignant tumor in children, and the evaluation of vascular-involvement image-defined risk factors (IDRFs) using computed tomography angiography (CTA) is crucial for prognostic assessment. This study evaluated whether deep learning image reconstruction (DLIR) can improve the quality of thin-slice, low-keV images in dual-energy CTA (DECTA) and support a more accurate assessment of IDRFs in children with NB. Forty-three NB patients (median age: 2 years; range: 6 months to 7 years) who underwent chest or abdominal DECTA were included. In the study group, 0.625 mm slice-thickness images at 40 keV were reconstructed using high-strength DLIR (40 keV-DL-0.6 mm). The control group comprised 0.625 mm images at 40 keV and 5 mm images at 68 keV reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) at 50% strength (40 keV-AV-0.6 mm and 68 keV-AV-5 mm, respectively). Objective measurements included the contrast-to-noise ratio (CNR) and edge-rise slope (ERS) of the aorta and the magnitude of the noise power spectrum (NPS) of the liver. Subjective image quality was assessed on a 5-point scale covering overall image noise, image contrast, and the visualization of large and small arteries. IDRFs were evaluated on all images. In general, the 0.625 mm images provided higher spatial resolution and more confident IDRF assessment than the 5 mm images. The 40 keV-DL-0.6 mm images demonstrated the highest CNR and ERS of large vessels and the best visualization of small arteries among the three image groups (all p < 0.05). Subjective assessments revealed that only the 40 keV-DL-0.6 mm images simultaneously met diagnostic requirements for overall noise, image contrast, and large- and small-artery visualization. High-strength DLIR significantly improves the quality of thin-slice, low-keV DECTA images in pediatric NB patients, enabling better visualization of small arteries and more accurate assessment of vascular-involvement IDRFs.
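The study's main objective metric, the contrast-to-noise ratio (CNR), is the mean attenuation difference between a vessel ROI and background, normalized by background noise. A minimal sketch with hypothetical HU samples (the study's actual ROI placement and noise definition are assumptions here):

```python
from statistics import mean, stdev

def cnr(roi_values, background_values):
    """Contrast-to-noise ratio: mean ROI attenuation minus mean background
    attenuation, divided by the background standard deviation (image noise)."""
    return (mean(roi_values) - mean(background_values)) / stdev(background_values)

# Toy HU samples: an enhanced aorta ROI vs. adjacent muscle background.
aorta = [520, 515, 525, 530, 510]
muscle = [60, 70, 50, 65, 55]
print(round(cnr(aorta, muscle), 1))
```

A higher CNR at 40 keV with DLIR reflects preserved iodine contrast with suppressed noise, which is the comparison the study makes across reconstruction groups.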
A Novel Multimodal Fusion Method for Depression Detection Using Graph Convolutional Neural Networks
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-13 | DOI: 10.1002/ima.70142
Yan Xing, Yanan Yang, Min Hu, Daolun Li, Wenshu Zha, Henry Han

Abstract: Depression is a debilitating mental disorder affecting more than 350 million individuals globally, so an efficient automated depression detection model is needed to aid clinical diagnosis. Current approaches, however, fail to fully exploit the connections between data from different modalities. To address this, we introduce a multimodal fusion model that takes textual, audio, and visual data as input for depression detection. Specifically, BERT and Bi-LSTM are employed to extract textual features, and Bi-LSTM is applied to capture audio and visual characteristics. A deep graph convolutional neural network then fuses the features from all three modalities. The model extracts depression-related information across multiple modalities and fuses it effectively. Our experiments on the DAIC-WOZ dataset yielded an F1-score of 96.30%, surpassing other advanced techniques and demonstrating the effectiveness of the proposed approach.
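The fusion step rests on graph convolution. Under the common formulation H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), a toy sketch with three fully connected "modality" nodes; the paper's actual graph construction, dimensions, and weights are not given, so every value below is illustrative:

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 · H · W),
    on plain nested lists (adjacency, node features, layer weights)."""
    n = len(adj)
    # Add self-loops, then symmetrically normalize by node degree.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    # Aggregate neighbor features: (norm @ feats).
    d = len(feats[0])
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n)) for j in range(d)] for i in range(n)]
    # Linear transform plus ReLU: relu(agg @ weight).
    out_dim = len(weight[0])
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(d)))
             for j in range(out_dim)] for i in range(n)]

# Three "modality" nodes (text, audio, visual) with 2-d features, fully connected.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
fused = gcn_layer(A, H, W)
```

A real model would stack several such layers and learn W by backpropagation; this only shows how node features mix across the modality graph.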
Pneumonia Image Classification Based on Lightweight Mobile ViT Networks
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-13 | DOI: 10.1002/ima.70153
Zhiqiang Zheng, Enhe Liang, Zhi Weng, Jianxiu Li, Yujie Zhang

Abstract: Pneumonia is a common respiratory disease, and accurate image classification is crucial for early diagnosis and treatment. However, the complex structure and high computational requirements of traditional deep learning models limit their application in medical image processing. We therefore propose a new lightweight Mobile ViT-based network that minimizes model parameters, complexity, and training time while preserving accuracy. Its FSConv module combines Fire and Channel Shuffle operations, which better exploit correlations between features and improve the effectiveness of the feature representation while keeping the model lightweight. In addition, to reduce the information loss incurred when extracting local features, we use an improved ShuffleNet model that extracts local features effectively and keeps information loss during feature fusion relatively small. The reliability and effectiveness of the proposed Mobile ViT-mix network are confirmed by ablation and comparative experiments on publicly available pneumonia datasets (Sait et al. and ChestXRay 2017). The results show that our approach improves on current leading lightweight classification networks in parameter count and complexity by 78% and 41%, respectively, and its accuracy and classification performance exceed the baseline model by 0.7%. The proposed design thus enables efficient pneumonia image classification in resource-constrained environments.
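The FSConv module pairs Fire blocks with Channel Shuffle. The shuffle itself is a cheap, deterministic permutation that lets information cross the channel groups created by grouped convolutions; a sketch over channel indices (real implementations reshape and transpose a tensor, but the permutation is the same):

```python
def channel_shuffle(channels, groups):
    """ShuffleNet-style channel shuffle: view the channel list as
    (groups, channels_per_group), transpose, and flatten, so that
    subsequent grouped convolutions see channels from every group."""
    per_group = len(channels) // groups
    return [channels[g * per_group + i]
            for i in range(per_group) for g in range(groups)]

# 6 channels in 2 groups: [0,1,2 | 3,4,5] becomes the interleaving [0,3,1,4,2,5].
print(channel_shuffle([0, 1, 2, 3, 4, 5], 2))
```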
Enhanced Interpretability in Breast Cancer Detection: Combining Grad-CAM With Selective Layer Freezing in Deep Learning
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-10 | DOI: 10.1002/ima.70151
Shabnam Jafarpoor Nesheli, Maryam Sabet, Vesal Firoozi, Sahel Heydarheydari, Seyed Masoud Rezaeijo

Abstract: This study develops a deep learning approach that integrates selective layer freezing, cyclic learning rate scheduling, and Grad-CAM visualization to address class imbalance, limited interpretability, and adaptability in breast cancer detection from mammographic images. The framework fine-tunes ResNet50 and VGG19 architectures with selective layer freezing to balance preservation of general features against domain-specific adaptation. A mammographic dataset of 8398 images (4194 malignant, 4204 benign) was preprocessed with resizing, histogram equalization, normalization, and data augmentation to enhance feature extraction and mitigate class imbalance. The data were split into training, validation, and test sets (80:15:5), with an additional 136 external mammograms used for validation. Grad-CAM provided visual interpretability by highlighting diagnostic regions such as abnormal masses and architectural distortions. Performance was evaluated with accuracy, precision, recall, F1-score, and AUC. ResNet50 achieved an AUC of 0.97 across all freezing ratios, with the 50% ratio giving the most balanced performance (accuracy, precision, and recall all 97%); VGG19 reached a maximum AUC of 0.95 at the 50% ratio. Grad-CAM outputs confirmed the interpretability of the models, with ResNet50 producing sharp, clinically relevant visualizations. External validation further demonstrated the robustness and generalizability of the framework. By combining high diagnostic accuracy with enhanced interpretability, the framework is a valuable tool for breast cancer detection; future work will focus on multi-class classification and large-scale clinical validation.
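Selective layer freezing at a 50% ratio means the earlier half of the backbone keeps its pretrained weights while the later half fine-tunes. A sketch of the ratio logic over a hypothetical layer list; in a PyTorch pipeline the resulting plan would be applied by setting each parameter's requires_grad flag:

```python
def freeze_plan(layer_names, freeze_ratio):
    """Map each layer name to whether it is trainable, freezing the first
    `freeze_ratio` fraction (early layers hold generic features that are
    worth preserving; later layers adapt to the target domain)."""
    n_frozen = int(len(layer_names) * freeze_ratio)
    return {name: i >= n_frozen for i, name in enumerate(layer_names)}

# Hypothetical ResNet-style block list, 50% freezing ratio as in the study.
backbone = ["conv1", "layer1", "layer2", "layer3", "layer4", "fc"]
plan = freeze_plan(backbone, 0.5)
```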
TDGU-Net: A Hybrid CNN-Transformer Model for Intracranial Aneurysm Segmentation
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-09 | DOI: 10.1002/ima.70157
Xiaoqing Lin, Chen Wang, Zhengkui Chen, Jianwei Pan, Jijun Tong

Abstract: Intracranial aneurysms are life-threatening cerebrovascular conditions, and their accurate identification is crucial for early diagnosis and treatment planning. Automated segmentation plays a key role in enhancing diagnostic accuracy and enabling timely intervention, but the task is challenging because of the diverse morphologies of aneurysms, their indistinct boundaries, and their resemblance to adjacent vascular structures. This study introduces TDGU-Net, a deep learning method that combines Convolutional Neural Networks (CNNs) with a Transformer architecture to improve segmentation accuracy and efficiency. CNNs handle efficient local feature extraction, while Transformer blocks establish global relationships within local regions, strengthening the model's ability to capture contextual dependencies. A multi-scale feature fusion module captures critical information across resolutions, and an Attention Gate mechanism sharpens the identification of aneurysm regions. The model was evaluated on the Large IA Segmentation dataset and further validated on the MICCAI 2020 ADAM dataset to demonstrate its adaptability across datasets, achieving a Dice coefficient of 76.92% and a sensitivity of 79.65%, which indicates robust segmentation performance and accurate detection of aneurysms. The method offers a promising tool for automated diagnosis of intracranial aneurysms, with significant potential for clinical application and improved patient outcomes.
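The reported Dice coefficient measures overlap between the predicted and ground-truth masks, 2|P∩T| / (|P| + |T|). A sketch on flattened binary masks with made-up values:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    twice the intersection over the sum of mask sizes."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # two empty masks agree fully

# Toy 6-voxel masks: 2 voxels overlap out of 3 predicted and 3 true.
p = [1, 1, 0, 1, 0, 0]
t = [1, 0, 0, 1, 1, 0]
print(dice(p, t))
```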
A Novel Framework for Lung Disease Classification Using Multiscale Convolutional Neural Networks With an Integrated Dynamic Attention Mechanism
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-08 | DOI: 10.1002/ima.70155
Vivekanand Thakare, Shailendra S. Aote, Abhijeet Raipurkar

Abstract: Lung disease diagnosis remains a significant clinical challenge because radiological features are similar across conditions such as COPD, pneumonia, tuberculosis, COVID-19, and lung cancer, and manual interpretation of chest CT scans is time-consuming and subject to inter-observer variability, particularly in resource-limited settings. This study proposes a deep learning framework, Multiscale Convolutional Neural Networks with Attention Mechanism (MCNN-AM), for automated classification of lung diseases into six categories, including normal lungs. The model leverages multiscale convolutional layers to extract both localized and global features, enabling better discrimination between diseases with overlapping characteristics. A dynamic attention mechanism comprising spatial and channel attention modules emphasizes disease-relevant regions and suppresses background noise, sharpening the model's diagnostic focus, while depthwise separable convolutions reduce computational complexity without sacrificing feature richness. MCNN-AM was trained and evaluated on publicly available datasets of 6000 training and 1200 testing images equally distributed across the classes, achieving a classification accuracy of 96.84% and outperforming state-of-the-art models such as ResNet50, DenseNet121, and InceptionV3 in precision, recall, F1-score, sensitivity, and specificity. Ablation studies further validate the critical role of the attention modules. These results demonstrate the potential of MCNN-AM as a reliable, scalable tool for computer-aided diagnosis of lung diseases.
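The savings from depthwise separable convolutions are easy to make concrete: a standard k×k convolution costs k²·C_in·C_out weights, while the depthwise-plus-pointwise factorization costs k²·C_in + C_in·C_out. A sketch with illustrative channel counts (not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one spatial filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(128, 256, 3)
sep = depthwise_separable_params(128, 256, 3)
print(f"standard={std}, separable={sep}, reduction={1 - sep / std:.1%}")
```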
Comparative Assessment of CNN and Transformer U-Nets in Multiple Sclerosis Lesion Segmentation
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-07 | DOI: 10.1002/ima.70146
Beytullah Sarica, Yunus Serhat Bicakci, Dursun Zafer Seker

Abstract: Multiple sclerosis (MS) is a chronic autoimmune disease that causes lesions in the central nervous system, and accurate segmentation and quantification of these lesions are essential to monitor disease progression and evaluate treatments. U-Net-based models are the most popular architectures for such studies, so this work compares CNN-based and Transformer-based U-Net architectures for MS lesion segmentation. Six architectures, namely U-Net, R2U-Net, V-Net, Attention U-Net, TransUNet, and SwinUNet, were trained and evaluated on two MS datasets, ISBI2015 and MSSEG2016. T1-w, T2-w, and FLAIR sequences were used jointly to obtain more detailed features, and a hybrid loss function combining focal Tversky and Dice losses was employed to improve model performance. The study proceeded in three steps: first, each model was trained and evaluated separately on each dataset; second, each model was trained on ISBI2015 and evaluated on MSSEG2016 and vice versa; finally, the two datasets were combined to increase the training samples and assessed on ISBI2015. The CNN-based R2U-Net and V-Net models achieved the best ISBI scores: R2U-Net led in the first and third steps with average scores of 92.82 and 92.91, while V-Net led in the second step with an average score of 91.28. Our results show that CNN-based models surpass Transformer-based U-Net models in most metrics for MS lesion segmentation.
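The hybrid loss adds a focal Tversky term to a Dice term. A sketch on soft binary predictions; the alpha/beta/gamma values below are common defaults for this loss family, not necessarily the paper's settings:

```python
def hybrid_loss(pred, truth, alpha=0.7, beta=0.3, gamma=0.75):
    """Focal Tversky loss plus Dice loss on flat soft predictions in [0,1].
    alpha/beta weight false negatives/positives (alpha > beta penalizes
    missed lesion voxels more); gamma < 1 focuses training on hard cases."""
    tp = sum(p * t for p, t in zip(pred, truth))
    fn = sum((1 - p) * t for p, t in zip(pred, truth))
    fp = sum(p * (1 - t) for p, t in zip(pred, truth))
    tversky = tp / (tp + alpha * fn + beta * fp + 1e-7)
    dice = 2 * tp / (sum(pred) + sum(truth) + 1e-7)
    return (1 - tversky) ** gamma + (1 - dice)
```

A perfect prediction drives both terms toward zero, while a fully wrong one approaches 2, so gradients remain informative for the small, sparse lesions typical of MS.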
Advancing Computer-Assisted Diabetic Retinopathy Grading: A Super Learner Ensemble Technique for Fundus Imagery
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-07 | DOI: 10.1002/ima.70152
Mili Rosline Mathews, S. M. Anzar

Abstract: Diabetic retinopathy (DR) is a severe complication of diabetes mellitus and a predominant global cause of blindness, so accurate DR grading is paramount for timely and appropriate clinical interventions. This study presents a comprehensive approach to DR grading that combines convolutional neural networks with an ensemble of diverse machine learning algorithms, referred to as a super learner ensemble. The methodology includes a preprocessing pipeline designed to enhance the quality of the fundus images, and a novel feature extraction model named "RetinaXtract" used in conjunction with advanced machine learning classifiers. Statistical tests, specifically the Friedman and Nemenyi tests, identify the most effective algorithms, and a super learner ensemble is then built by integrating the predictions of the highest-performing ones. This ensemble captures a wide range of patterns, enhancing the system's ability to distinguish accurately between DR stages. Accuracy rates of 99.64%, 99.51%, and 99.16% are achieved on the IDRiD, Kaggle, and Messidor datasets, respectively. The work offers a balanced, efficient, and precise classification solution with significant potential for practical detection and grading of DR from fundus images, ultimately supporting improved clinical outcomes in ophthalmology.
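A super learner combines base-learner outputs through a meta-learner whose weights are normally fit by cross-validation on held-out predictions. A minimal weighted-average sketch; the base models, their probabilities, and the weights below are all hypothetical:

```python
def super_learner(base_preds, meta_weights):
    """Fuse per-case probability outputs from several base learners with a
    linear meta-learner (weights sum to 1; in a real super learner they are
    estimated by cross-validation rather than fixed by hand)."""
    assert abs(sum(meta_weights) - 1) < 1e-9
    return [sum(w * p for w, p in zip(meta_weights, case))
            for case in zip(*base_preds)]

# Probabilities of "referable DR" for 3 images from 3 hypothetical base models.
svm_p = [0.9, 0.2, 0.6]
rf_p  = [0.8, 0.1, 0.7]
mlp_p = [0.7, 0.3, 0.5]
fused = super_learner([svm_p, rf_p, mlp_p], [0.5, 0.3, 0.2])
```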
A New Adaptive Sliding Window Method for fMRI Dynamic Functional Connectivity Analysis
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-05 | DOI: 10.1002/ima.70154
Ningfei Jiang, Yuhu Shi

Abstract: The fixed-width sliding time window method is widely used to explore dynamic functional connectivity in functional magnetic resonance imaging (fMRI) data, but selecting a window that captures the dynamic changes in brain function is difficult. This paper therefore proposes a local polynomial regression (LPR) method to fit region-of-interest (ROI) time series, in which observations are modeled locally by a least-squares polynomial with a kernel of a given bandwidth, allowing a better bias-variance tradeoff. The method combines a data-driven variable-bandwidth selection mechanism based on the intersection of confidence intervals (ICI) with a particle swarm optimization (PSO) bandwidth-optimization algorithm. ICI adaptively determines the locally optimal bandwidth that minimizes the mean square error (MSE), after which bandwidth values at each time point within all ROIs are computed for each subject. Averaging these values over ROIs at each time point yields a time-varying bandwidth sequence per subject, which feeds the PSO-based optimization. Experiments on simulated data showed that the LPR-ICI-PSO method attains lower MSE in estimating time-varying correlation coefficients under different noise scenarios. Applied to an autism spectrum disorder (ASD) study, the method achieved 74.1% accuracy in classifying ASD versus typical controls (TC) with a support vector machine (SVM) under 10-fold cross-validation. These results demonstrate that the proposed method effectively captures dynamic changes in brain function, supporting clinical diagnosis and helping to reveal differences in brain functional connectivity patterns.
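The underlying sliding-window idea assigns each time point a window and correlates two ROI series inside it; the paper's contribution is choosing the width adaptively via ICI and PSO. A sketch in which the per-point widths are simply supplied rather than estimated:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def sliding_corr(ts1, ts2, widths):
    """Windowed correlation where each time point t has its own half-width
    widths[t]; an adaptive method would choose these per point, while the
    classic fixed-window approach uses one width everywhere."""
    out = []
    for t, w in enumerate(widths):
        lo, hi = max(0, t - w), min(len(ts1), t + w + 1)
        out.append(pearson(ts1[lo:hi], ts2[lo:hi]))
    return out
```

Two identical ROI series yield a correlation of 1 at every point regardless of the widths, which is a handy sanity check for any dynamic-connectivity pipeline.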
Enhancing Brain Tumor Classification Through Optimal Kernel Selection With GL1-Regularization
IF 3.0 | CAS Region 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2025-07-04 | DOI: 10.1002/ima.70150
Otmane Mallouk, Nour-Eddine Joudar, Mohamed Ettaouil

Abstract: Brain tumors, known for their rapid and aggressive growth, are among the most serious and life-threatening diseases worldwide, which makes automated detection methods essential for saving lives. Deep transfer learning has become a highly effective approach for automating brain tumor classification in medical imaging, but leveraging a pretrained model typically requires special adaptation. Existing adaptation methods freeze or fine-tune specific layers without considering the contribution of individual kernels. This work extends the concept of layer-level contributions to the kernel level via an adaptive optimization model: a novel formulation that incorporates group lasso regularization to control which kernels are frozen and which are fine-tuned, selecting the optimal source features that contribute to the target task. The model is solved with proximal gradient descent and was evaluated on a three-class brain tumor classification task, distinguishing glioma, meningioma, and pituitary tumors on a medical MRI dataset. Several experiments confirm the model's efficacy in identifying both frozen and fine-tuned kernels, thereby improving classification, and the results are compared against state-of-the-art transfer learning methods for a comprehensive comparison.
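The group lasso penalty acts on whole kernels, and the proximal gradient step it induces is block soft-thresholding: a kernel whose weight vector shrinks exactly to zero contributes no update, which corresponds to keeping it frozen, while surviving kernels fine-tune. A sketch of the proximal map; reading a zeroed group as "frozen" assumes the penalty is applied to deviations from the pretrained weights, which the abstract suggests but does not state:

```python
def prox_group_lasso(kernel, lam):
    """Proximal map of lam * ||w||_2 over one kernel's weight vector
    (block soft-thresholding): scale the whole block toward zero, and
    zero it out entirely when its norm falls below lam."""
    norm = sum(w * w for w in kernel) ** 0.5
    if norm <= lam:
        return [0.0] * len(kernel)
    scale = 1 - lam / norm
    return [scale * w for w in kernel]

print(prox_group_lasso([3.0, 4.0], 1.0))  # norm 5: whole block scaled by 0.8
print(prox_group_lasso([0.1, 0.2], 1.0))  # small block zeroed ("frozen")
```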