Shu Cai, Qiude Zhang, Shanshan Wang, Junjie Hu, Liang Zeng, Kaiyan Li
{"title":"Interactive CNN and Transformer-Based Cross-Attention Fusion Network for Medical Image Classification","authors":"Shu Cai, Qiude Zhang, Shanshan Wang, Junjie Hu, Liang Zeng, Kaiyan Li","doi":"10.1002/ima.70077","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Medical images typically contain complex structures and abundant detail, exhibiting variations in texture, contrast, and noise across different imaging modalities. Different types of images contain both local and global features with varying expressions and importance, making accurate classification highly challenging. Convolutional neural network (CNN)-based approaches are limited by the size of the convolutional kernel, which restricts their ability to capture global contextual information effectively. In addition, while transformer-based models can compensate for the limitations of convolutional neural networks by modeling long-range dependencies, they are difficult to extract fine-grained local features from images. To address these issues, we propose a novel architecture, the Interactive CNN and Transformer for Cross Attention Fusion Network (IFC-Net). This model leverages the strengths of CNNs for efficient local feature extraction and transformers for capturing global dependencies, enabling it to preserve local features and global contextual relationships. Additionally, we introduce a cross-attention fusion module that adaptively adjusts the feature fusion strategy, facilitating efficient integration of local and global features and enabling dynamic information exchange between the CNN and transformer components. 
Experimental results on four benchmark datasets, ISIC2018, COVID-19, and liver cirrhosis (line array, convex array), demonstrate that the proposed model achieves superior classification performance, outperforming both CNN and transformer-only architectures.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.70077","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Medical images typically contain complex structures and abundant detail, and exhibit variations in texture, contrast, and noise across imaging modalities. Different types of images contain both local and global features whose expression and importance vary, making accurate classification highly challenging. Convolutional neural network (CNN)-based approaches are limited by the size of the convolutional kernel, which restricts their ability to capture global contextual information effectively. Transformer-based models can compensate for this limitation by modeling long-range dependencies, but they struggle to extract fine-grained local features from images. To address these issues, we propose a novel architecture, the Interactive CNN and Transformer Cross-Attention Fusion Network (IFC-Net). This model leverages the strengths of CNNs for efficient local feature extraction and of transformers for capturing global dependencies, enabling it to preserve both local features and global contextual relationships. Additionally, we introduce a cross-attention fusion module that adaptively adjusts the feature fusion strategy, facilitating efficient integration of local and global features and enabling dynamic information exchange between the CNN and transformer components. Experimental results on four benchmark datasets, ISIC2018, COVID-19, and liver cirrhosis (linear array and convex array), demonstrate that the proposed model achieves superior classification performance, outperforming both CNN-only and transformer-only architectures.
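The paper itself does not include code, but the cross-attention fusion idea described in the abstract can be illustrated with a minimal sketch: tokens from one branch (e.g., a flattened CNN feature map) attend over tokens from the other branch (transformer features) via scaled dot-product attention. All shapes, names, and the single-head simplification below are hypothetical illustrations, not details of IFC-Net.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head scaled dot-product cross-attention.

    queries:     (n_q, d)  tokens from one branch (e.g., CNN local features)
    keys_values: (n_kv, d) tokens from the other branch (global features)
    returns:     (n_q, d)  queries re-expressed as mixtures of the other branch
    """
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (n_q, n_kv)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ keys_values                    # (n_q, d)

# Hypothetical shapes: a 7x7 CNN feature map flattened to 49 tokens,
# fused with 16 transformer tokens, all with embedding dimension 64.
rng = np.random.default_rng(0)
cnn_tokens = rng.standard_normal((49, 64))
vit_tokens = rng.standard_normal((16, 64))
fused = cross_attention(cnn_tokens, vit_tokens)
print(fused.shape)  # (49, 64)
```

In a full model this operation would typically run in both directions (CNN attends to transformer and vice versa) with learned query/key/value projections, so each branch can dynamically pull in information from the other before the fused features are classified.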
Journal Description:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including algorithmic research and hardware and software development, and their applications to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.